chore: align slurp config and scaffolding
docs/development/sec-slurp-ucxl-beacon-pin-steward.md (new file, 94 lines)
@@ -0,0 +1,94 @@
# SEC-SLURP UCXL Beacon & Pin Steward Design Notes

## Purpose

- Establish the authoritative UCXL context beacon that bridges SLURP persistence with WHOOSH/role-aware agents.
- Define the Pin Steward responsibilities so DHT replication, healing, and telemetry satisfy SEC-SLURP 1.1a acceptance criteria.
- Provide an incremental execution plan aligned with the Persistence Wiring Report and DHT Resilience Supplement.
## UCXL Beacon Data Model

- **manifest_id** (`string`): deterministic hash of `project:task:address:version`.
- **ucxl_address** (`ucxl.Address`): canonical address that produced the manifest.
- **context_version** (`int`): monotonic version from the SLURP temporal graph.
- **source_hash** (`string`): content hash emitted by `persistContext` (LevelDB) for change detection.
- **generated_by** (`string`): CHORUS agent id / role bundle that wrote the context.
- **generated_at** (`time.Time`): timestamp from the SLURP persistence event.
- **replica_targets** (`[]string`): desired replica node ids (Pin Steward enforces `replication_factor`).
- **replica_state** (`[]ReplicaInfo`): health snapshot (`node_id`, `provider_id`, `status`, `last_checked`, `latency_ms`).
- **encryption** (`EncryptionMetadata`):
  - `dek_fingerprint` (`string`)
  - `kek_policy` (`string`): BACKBEAT rotation policy identifier.
  - `rotation_due` (`time.Time`)
- **compliance_tags** (`[]string`): SHHH/WHOOSH governance hooks (e.g. `sec-high`, `audit-required`).
- **beacon_metrics** (`BeaconMetrics`): summarized counters for cache hits, DHT retrievals, and validation errors.
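The data model above can be sketched as Go types; field names follow the list, while `ucxl.Address` and `BeaconMetrics` are replaced with plain placeholders to keep the sketch self-contained, and the `ManifestID` helper is an illustrative reading of the "deterministic hash" rule, not the production hashing code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// ReplicaInfo mirrors the replica health snapshot fields listed above.
type ReplicaInfo struct {
	NodeID      string    `json:"node_id"`
	ProviderID  string    `json:"provider_id"`
	Status      string    `json:"status"`
	LastChecked time.Time `json:"last_checked"`
	LatencyMS   int64     `json:"latency_ms"`
}

// EncryptionMetadata carries the DEK/KEK posture for a manifest.
type EncryptionMetadata struct {
	DEKFingerprint string    `json:"dek_fingerprint"`
	KEKPolicy      string    `json:"kek_policy"`
	RotationDue    time.Time `json:"rotation_due"`
}

// ContextManifest collects the beacon fields; the UCXL address is a plain
// string here rather than ucxl.Address to keep the sketch standalone.
type ContextManifest struct {
	ManifestID     string             `json:"manifest_id"`
	UCXLAddress    string             `json:"ucxl_address"`
	ContextVersion int                `json:"context_version"`
	SourceHash     string             `json:"source_hash"`
	GeneratedBy    string             `json:"generated_by"`
	GeneratedAt    time.Time          `json:"generated_at"`
	ReplicaTargets []string           `json:"replica_targets"`
	ReplicaState   []ReplicaInfo      `json:"replica_state"`
	Encryption     EncryptionMetadata `json:"encryption"`
	ComplianceTags []string           `json:"compliance_tags"`
}

// ManifestID derives a deterministic hash of project:task:address:version.
func ManifestID(project, task, address string, version int) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%s:%s:%d", project, task, address, version)))
	return hex.EncodeToString(sum[:])
}

func main() {
	id := ManifestID("chorus", "sec-slurp", "ucxl://chorus/slurp", 3)
	fmt.Println(len(id)) // SHA-256 hex digest: 64 characters
}
```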
### Storage Strategy

- Primary persistence in LevelDB (`pkg/slurp/slurp.go`) using the key prefix `beacon::<manifest_id>`.
- Secondary replication to the DHT under `dht://beacon/<manifest_id>`, enabling WHOOSH agents to read via the Pin Steward API.
- Optional export to a UCXL Decision Record envelope for historical traceability.
## Beacon APIs

| Endpoint | Purpose | Notes |
|----------|---------|-------|
| `Beacon.Upsert(manifest)` | Persist/update a manifest | Called by SLURP after `persistContext` succeeds. |
| `Beacon.Get(ucxlAddress)` | Resolve the latest manifest | Used by WHOOSH/agents to locate canonical context. |
| `Beacon.List(filter)` | Query manifests by tags/roles/time | Backs dashboards and Pin Steward audits. |
| `Beacon.StreamChanges(since)` | Provide a change feed for Pin Steward anti-entropy jobs | Implements backpressure and bookmark tokens. |

All APIs return an envelope with a UCXL citation and checksum so the SLURP⇄WHOOSH handoff stays auditable.
## Pin Steward Responsibilities

1. **Replication Planning**
   - Read manifests via `Beacon.StreamChanges`.
   - Evaluate the current `replica_state` against the configured `replication_factor`.
   - Produce a queue of DHT store/refresh tasks (`storeAsync`, `storeSync`, `storeQuorum`).
2. **Healing & Anti-Entropy**
   - Schedule `heal_under_replicated` jobs every `anti_entropy_interval`.
   - Re-announce providers on Pulse/Reverb when the TTL falls below threshold.
   - Record outcomes back into the manifest (`replica_state`).
3. **Envelope Encryption Enforcement**
   - Request KEK material from KACHING/SHHH as described in SEC-SLURP 1.1a.
   - Ensure DEK fingerprints match the `encryption` metadata; trigger rotation if stale.
4. **Telemetry Export**
   - Emit Prometheus counters: `pin_steward_replica_heal_total`, `pin_steward_replica_unhealthy`, `pin_steward_encryption_rotations_total`.
   - Surface aggregated health to WHOOSH dashboards for council visibility.
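The replication-planning pass in the responsibilities above can be sketched as a pure function that compares healthy replicas against the configured factor and queues one heal task per under-replicated manifest; the types here are illustrative stand-ins, not the production manifest:

```go
package main

import "fmt"

// ReplicaInfo and Manifest are pared-down illustrative shapes.
type ReplicaInfo struct {
	NodeID string
	Status string // "healthy", "stale", or "missing"
}

type Manifest struct {
	ManifestID   string
	ReplicaState []ReplicaInfo
}

// HealTask asks the DHT layer to (re)store the given number of replicas.
type HealTask struct {
	ManifestID string
	Missing    int
}

// PlanHeals emits one heal task per manifest whose healthy replica count
// falls below the replication factor.
func PlanHeals(manifests []Manifest, replicationFactor int) []HealTask {
	var tasks []HealTask
	for _, m := range manifests {
		healthy := 0
		for _, r := range m.ReplicaState {
			if r.Status == "healthy" {
				healthy++
			}
		}
		if healthy < replicationFactor {
			tasks = append(tasks, HealTask{ManifestID: m.ManifestID, Missing: replicationFactor - healthy})
		}
	}
	return tasks
}

func main() {
	manifests := []Manifest{
		{ManifestID: "m1", ReplicaState: []ReplicaInfo{{NodeID: "n1", Status: "healthy"}, {NodeID: "n2", Status: "stale"}}},
		{ManifestID: "m2", ReplicaState: []ReplicaInfo{{NodeID: "n1", Status: "healthy"}, {NodeID: "n2", Status: "healthy"}, {NodeID: "n3", Status: "healthy"}}},
	}
	fmt.Println(PlanHeals(manifests, 3)) // [{m1 2}]
}
```

The anti-entropy loop would run this planner on every `anti_entropy_interval` tick over the `StreamChanges` feed and hand the tasks to the store/refresh queue.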
## Interaction Flow

1. **SLURP Persistence**
   - `UpsertContext` → LevelDB write → manifests assembled (`persistContext`).
   - Beacon `Upsert` is called with the manifest and context hash.
2. **Pin Steward Intake**
   - `StreamChanges` yields a manifest → the steward verifies encryption metadata and schedules replication tasks.
3. **DHT Coordination**
   - `ReplicationManager.EnsureReplication` is invoked with the target factor.
   - `defaultVectorClockManager` (temporary) is to be replaced with a libp2p-aware implementation for provider TTL tracking.
4. **WHOOSH Consumption**
   - The WHOOSH SLURP proxy fetches the manifest via `Beacon.Get`, caches it in the WHOOSH DB, and attaches it to deliverable artifacts.
   - The council UI surfaces replication state and encryption posture for operator decisions.
## Incremental Delivery Plan

1. **Sprint A (Persistence parity)**
   - Finalize the LevelDB manifest schema and tests (extend `slurp_persistence_test.go`).
   - Implement the Beacon interfaces within the SLURP service (in-memory + LevelDB).
   - Add Prometheus metrics for persistence reads/misses.
2. **Sprint B (Pin Steward MVP)**
   - Build the steward worker with a configurable reconciliation loop.
   - Wire it to the existing `DistributedStorage` stubs (`StoreAsync/Sync/Quorum`).
   - Emit health logs; integrate with CLI diagnostics.
3. **Sprint C (DHT Resilience)**
   - Swap `defaultVectorClockManager` for the libp2p implementation; add provider TTL probes.
   - Implement the envelope encryption path leveraging the KACHING/SHHH interfaces (replace stubs in `pkg/crypto`).
   - Add CI checks: replica factor assertions, provider refresh tests, beacon schema validation.
4. **Sprint D (WHOOSH Integration)**
   - Expose a REST/gRPC endpoint for WHOOSH to query manifests.
   - Update the WHOOSH SLURPArtifactManager to require beacon confirmation before submission.
   - Surface Pin Steward alerts in the WHOOSH admin UI.
## Open Questions

- Confirm whether Beacon manifests should include DER signatures or rely on the UCXL envelope hash.
- Determine storage for historical manifests (append-only log vs. latest-only) to support temporal rewind.
- Align Pin Steward job scheduling with the existing BACKBEAT cadence to avoid conflicting rotations.

## Next Actions

- Prototype the `BeaconStore` interface and LevelDB implementation in the SLURP package.
- Document the Pin Steward anti-entropy algorithm with pseudocode and integrate it into the SEC-SLURP test plan.
- Sync with the WHOOSH team on the manifest query contract (REST vs. gRPC; pagination semantics).
docs/development/sec-slurp-whoosh-integration-demo.md (new file, 52 lines)
@@ -0,0 +1,52 @@
# WHOOSH ↔ CHORUS Integration Demo Plan (SEC-SLURP Track)

## Demo Objectives

- Showcase the end-to-end persistence → UCXL beacon → Pin Steward → WHOOSH artifact submission flow.
- Validate role-based agent interactions with SLURP contexts (resolver + temporal graph) prior to DHT hardening.
- Capture the metrics/telemetry needed for SEC-SLURP exit criteria and WHOOSH Phase 1 sign-off.

## Sequenced Milestones

1. **Persistence Validation Session**
   - Run `GOWORK=off go test ./pkg/slurp/...` with stubs patched; demo LevelDB warm/load using `slurp_persistence_test.go`.
   - Inspect beacon manifests via the CLI (`slurpctl beacon list`).
   - Deliverable: test log + manifest sample archived in UCXL.

2. **Beacon → Pin Steward Dry Run**
   - Replay stored manifests through the Pin Steward worker with a mock DHT backend.
   - Show the replication planner queue + telemetry counters (`pin_steward_replica_heal_total`).
   - Deliverable: decision record linking the manifest to the replication outcome.

3. **WHOOSH SLURP Proxy Alignment**
   - Point the WHOOSH dev stack (`npm run dev`) at a local SLURP with the beacon API enabled.
   - Walk through council formation; capture the SLURP artifact submission with the beacon confirmation modal.
   - Deliverable: screen recording + WHOOSH DB entry referencing the beacon manifest id.

4. **DHT Resilience Checkpoint**
   - Switch Pin Steward to the libp2p DHT (once wired) and run replication + provider TTL checks.
   - Fail one node intentionally; demonstrate the heal path and the alert surfaced in the WHOOSH UI.
   - Deliverable: telemetry dump + alert screenshot.

5. **Governance & Telemetry Wrap-Up**
   - Export Prometheus metrics (cache hit/miss, beacon writes, replication heals) into the KACHING dashboard.
   - Publish a Decision Record documenting the UCXL address flow, referencing the SEC-SLURP docs.

## Roles & Responsibilities

- **SLURP Team:** finalize the persistence build, implement the beacon APIs, own the Pin Steward worker.
- **WHOOSH Team:** wire the beacon client, expose replication/encryption status in the UI, capture council telemetry.
- **KACHING/SHHH Stakeholders:** validate telemetry ingestion and encryption custody notes.
- **Program Management:** schedule the demo rehearsal; ensure Decision Records and UCXL addresses are recorded.

## Tooling & Environments

- Local cluster via `docker compose up slurp whoosh pin-steward` (to be scripted in `commands/`).
- Use the `make demo-sec-slurp` target to run the integration harness (to be added).
- Prometheus/Grafana docker compose for metrics validation.

## Success Criteria

- Beacon manifests accessible from the WHOOSH UI within 2s average latency.
- Pin Steward resolves an under-replicated manifest within the demo timeline (<30s) and records the healing event.
- All demo steps logged with UCXL references and SHHH redaction checks passing.

## Open Items

- Need a sample repo/issues to feed the WHOOSH analyzer (consider `project-queues/active/WHOOSH/demo-data`).
- Determine the minimal DHT cluster footprint for the demo (3 vs 5 nodes).
- Align on the telemetry retention window for the demo (24h?).
docs/progress/SEC-SLURP-1.1a-supplemental.md (new file, 32 lines)
@@ -0,0 +1,32 @@
# SEC-SLURP 1.1a – DHT Resilience Supplement

## Requirements (derived from `docs/Modules/DHT.md`)

1. **Real DHT state & persistence**
   - Replace mock DHT usage with libp2p-based storage or an equivalent real implementation.
   - Store DHT/blockstore data on persistent volumes (named volumes/ZFS/NFS) with node placement constraints.
   - Ensure bootstrap nodes are stateful and survive container churn.

2. **Pin Steward + replication policy**
   - Introduce a Pin Steward service that tracks UCXL CID manifests and enforces the replication factor (e.g. 3–5 replicas).
   - Re-announce providers on Pulse/Reverb and heal under-replicated content.
   - Schedule anti-entropy jobs to verify and repair replicas.

3. **Envelope encryption & shared key custody**
   - Implement envelope encryption (DEK+KEK) with threshold/organizational custody rather than per-role ownership.
   - Store KEK metadata with UCXL manifests; rotate via BACKBEAT.
   - Update the crypto/key-manager stubs to real implementations once available.
4. **Shared UCXL Beacon index**
   - Maintain an authoritative CID registry (DR/UCXL) replicated outside individual agents.
   - Ensure metadata updates are durable and role-agnostic to prevent stranded CIDs.

5. **CI/SLO validation**
   - Add automated tests/health checks covering provider refresh, replication factor, and persistent-storage guarantees.
   - Gate releases on DHT resilience checks (provider TTLs, replica counts).

## Integration Path for SEC-SLURP 1.1

- Incorporate the above requirements as acceptance criteria alongside LevelDB persistence.
- Sequence the work to: migrate DHT interactions, introduce the Pin Steward, implement envelope crypto, and wire CI validation.
- Attach artifacts (Pin Steward design, envelope crypto spec, CI scripts) to the Phase 1 deliverable checklist.
@@ -5,10 +5,14 @@

- Upgraded SLURP’s lifecycle so initialization bootstraps cached context data from disk, cache misses hydrate from persistence, successful `UpsertContext` calls write back to LevelDB, and shutdown closes the store with error telemetry.
- Introduced `pkg/slurp/slurp_persistence_test.go` to confirm contexts survive process restarts and can be resolved after clearing in-memory caches.
- Instrumented cache/persistence metrics so hit/miss ratios and storage failures are tracked for observability.
- Implemented lightweight crypto/key-management stubs (`pkg/crypto/role_crypto_stub.go`, `pkg/crypto/key_manager_stub.go`) so SLURP modules compile while the production stack is ported.
- Updated the DHT distribution and encrypted storage layers (`pkg/slurp/distribution/dht_impl.go`, `pkg/slurp/storage/encrypted_storage.go`) to use the crypto stubs, adding per-role fingerprints and durable decoding logic.
- Expanded the storage metadata models (`pkg/slurp/storage/types.go`, `pkg/slurp/storage/backup_manager.go`) with fields referenced by backup/replication flows (progress, error messages, retention, data size).
- Incrementally stubbed/simplified distributed storage helpers to inch toward a compilable SLURP package.
- Attempted `GOWORK=off go test ./pkg/slurp`; the original authority-level blocker is resolved, but builds still fail in storage/index code due to remaining stub work (e.g., Bleve queries, DHT helpers).

## Recommended Next Steps

- Stub the remaining storage/index dependencies (Bleve query scaffolding, UCXL helpers, `errorCh` queues, cache regex usage) or neutralize the heavy modules so that `GOWORK=off go test ./pkg/slurp` compiles and runs.
- Feed the durable store into the resolver and temporal graph implementations to finish the SEC-SLURP 1.1 milestone once the package builds cleanly.
- Extend Prometheus metrics/logging to track cache hit/miss ratios plus persistence errors for observability alignment.
- Review unrelated changes still tracked on `feature/phase-4-real-providers` (e.g., docker-compose edits) and either align them with this roadmap work or revert for focus.
@@ -130,7 +130,27 @@ type ResolutionConfig struct {

// SlurpConfig defines SLURP settings
type SlurpConfig struct {
	Enabled          bool                        `yaml:"enabled"`
	BaseURL          string                      `yaml:"base_url"`
	APIKey           string                      `yaml:"api_key"`
	Timeout          time.Duration               `yaml:"timeout"`
	RetryCount       int                         `yaml:"retry_count"`
	RetryDelay       time.Duration               `yaml:"retry_delay"`
	TemporalAnalysis SlurpTemporalAnalysisConfig `yaml:"temporal_analysis"`
	Performance      SlurpPerformanceConfig      `yaml:"performance"`
}

// SlurpTemporalAnalysisConfig captures temporal behaviour tuning for SLURP.
type SlurpTemporalAnalysisConfig struct {
	MaxDecisionHops        int           `yaml:"max_decision_hops"`
	StalenessCheckInterval time.Duration `yaml:"staleness_check_interval"`
	StalenessThreshold     float64       `yaml:"staleness_threshold"`
}

// SlurpPerformanceConfig exposes performance-related tunables for SLURP.
type SlurpPerformanceConfig struct {
	MaxConcurrentResolutions  int           `yaml:"max_concurrent_resolutions"`
	MetricsCollectionInterval time.Duration `yaml:"metrics_collection_interval"`
}

// WHOOSHAPIConfig defines WHOOSH API integration settings
@@ -211,7 +231,21 @@ func LoadFromEnvironment() (*Config, error) {
		},
	},
	Slurp: SlurpConfig{
		Enabled:    getEnvBoolOrDefault("CHORUS_SLURP_ENABLED", false),
		BaseURL:    getEnvOrDefault("CHORUS_SLURP_API_BASE_URL", "http://localhost:9090"),
		APIKey:     getEnvOrFileContent("CHORUS_SLURP_API_KEY", "CHORUS_SLURP_API_KEY_FILE"),
		Timeout:    getEnvDurationOrDefault("CHORUS_SLURP_API_TIMEOUT", 15*time.Second),
		RetryCount: getEnvIntOrDefault("CHORUS_SLURP_API_RETRY_COUNT", 3),
		RetryDelay: getEnvDurationOrDefault("CHORUS_SLURP_API_RETRY_DELAY", 2*time.Second),
		TemporalAnalysis: SlurpTemporalAnalysisConfig{
			MaxDecisionHops:        getEnvIntOrDefault("CHORUS_SLURP_MAX_DECISION_HOPS", 5),
			StalenessCheckInterval: getEnvDurationOrDefault("CHORUS_SLURP_STALENESS_CHECK_INTERVAL", 5*time.Minute),
			StalenessThreshold:     0.2,
		},
		Performance: SlurpPerformanceConfig{
			MaxConcurrentResolutions:  getEnvIntOrDefault("CHORUS_SLURP_MAX_CONCURRENT_RESOLUTIONS", 4),
			MetricsCollectionInterval: getEnvDurationOrDefault("CHORUS_SLURP_METRICS_COLLECTION_INTERVAL", time.Minute),
		},
	},
	Security: SecurityConfig{
		KeyRotationDays: getEnvIntOrDefault("CHORUS_KEY_ROTATION_DAYS", 30),
pkg/crypto/key_manager_stub.go (new file, 23 lines)
@@ -0,0 +1,23 @@
package crypto

import "time"

// GenerateKey returns a deterministic placeholder key identifier for the given role.
func (km *KeyManager) GenerateKey(role string) (string, error) {
	return "stub-key-" + role, nil
}

// DeprecateKey is a no-op in the stub implementation.
func (km *KeyManager) DeprecateKey(keyID string) error {
	return nil
}

// GetKeysForRotation mirrors SEC-SLURP-1.1 key rotation discovery while remaining inert.
func (km *KeyManager) GetKeysForRotation(maxAge time.Duration) ([]*KeyInfo, error) {
	return nil, nil
}

// ValidateKeyFingerprint accepts all fingerprints in the stubbed environment.
func (km *KeyManager) ValidateKeyFingerprint(role, fingerprint string) bool {
	return true
}
pkg/crypto/role_crypto_stub.go (new file, 75 lines)
@@ -0,0 +1,75 @@
package crypto

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"

	"chorus/pkg/config"
)

// RoleCrypto is a stub of the role-scoped crypto provider used while the
// production stack is ported.
type RoleCrypto struct {
	config *config.Config
}

// NewRoleCrypto validates the configuration and returns the stub provider; the
// trailing dependencies are ignored until the real implementations land.
func NewRoleCrypto(cfg *config.Config, _ interface{}, _ interface{}, _ interface{}) (*RoleCrypto, error) {
	if cfg == nil {
		return nil, fmt.Errorf("config cannot be nil")
	}
	return &RoleCrypto{config: cfg}, nil
}

// EncryptForRole base64-encodes the payload and returns a content fingerprint;
// no real encryption is performed in the stub.
func (rc *RoleCrypto) EncryptForRole(data []byte, role string) ([]byte, string, error) {
	if len(data) == 0 {
		return []byte{}, rc.fingerprint(data), nil
	}
	encoded := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
	base64.StdEncoding.Encode(encoded, data)
	return encoded, rc.fingerprint(data), nil
}

// DecryptForRole reverses the base64 encoding applied by EncryptForRole.
func (rc *RoleCrypto) DecryptForRole(data []byte, role string, _ string) ([]byte, error) {
	if len(data) == 0 {
		return []byte{}, nil
	}
	decoded := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
	n, err := base64.StdEncoding.Decode(decoded, data)
	if err != nil {
		return nil, err
	}
	return decoded[:n], nil
}

// EncryptContextForRoles marshals the payload to JSON and base64-encodes it
// once for all roles; per-role keys are not modelled in the stub.
func (rc *RoleCrypto) EncryptContextForRoles(payload interface{}, roles []string, _ []string) ([]byte, error) {
	raw, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	encoded := make([]byte, base64.StdEncoding.EncodedLen(len(raw)))
	base64.StdEncoding.Encode(encoded, raw)
	return encoded, nil
}

// fingerprint returns the base64-encoded SHA-256 digest of data.
func (rc *RoleCrypto) fingerprint(data []byte) string {
	sum := sha256.Sum256(data)
	return base64.StdEncoding.EncodeToString(sum[:])
}

// StorageAccessController gates role access to stored keys.
type StorageAccessController interface {
	CanStore(role, key string) bool
	CanRetrieve(role, key string) bool
}

// StorageAuditLogger records crypto and access events for audit trails.
type StorageAuditLogger interface {
	LogEncryptionOperation(role, key, operation string, success bool)
	LogDecryptionOperation(role, key, operation string, success bool)
	LogKeyRotation(role, keyID string, success bool, message string)
	LogError(message string)
	LogAccessDenial(role, key, operation string)
}

// KeyInfo identifies a role-scoped key.
type KeyInfo struct {
	Role  string
	KeyID string
}
pkg/slurp/alignment/stubs.go (new file, 284 lines)
@@ -0,0 +1,284 @@
package alignment

import "time"

// GoalStatistics summarizes goal management metrics.
type GoalStatistics struct {
	TotalGoals  int
	ActiveGoals int
	Completed   int
	Archived    int
	LastUpdated time.Time
}

// AlignmentGapAnalysis captures detected misalignments that require follow-up.
type AlignmentGapAnalysis struct {
	Address    string
	Severity   string
	Findings   []string
	DetectedAt time.Time
}

// AlignmentComparison provides a simple comparison view between two contexts.
type AlignmentComparison struct {
	PrimaryScore   float64
	SecondaryScore float64
	Differences    []string
}

// AlignmentStatistics aggregates assessment metrics across contexts.
type AlignmentStatistics struct {
	TotalAssessments int
	AverageScore     float64
	SuccessRate      float64
	FailureRate      float64
	LastUpdated      time.Time
}

// ProgressHistory captures historical progress samples for a goal.
type ProgressHistory struct {
	GoalID  string
	Samples []ProgressSample
}

// ProgressSample represents a single progress measurement.
type ProgressSample struct {
	Timestamp  time.Time
	Percentage float64
}

// CompletionPrediction represents a simple completion forecast for a goal.
type CompletionPrediction struct {
	GoalID          string
	EstimatedFinish time.Time
	Confidence      float64
}

// ProgressStatistics aggregates goal progress metrics.
type ProgressStatistics struct {
	AverageCompletion float64
	OpenGoals         int
	OnTrackGoals      int
	AtRiskGoals       int
}

// DriftHistory tracks historical drift events.
type DriftHistory struct {
	Address string
	Events  []DriftEvent
}

// DriftEvent captures a single drift occurrence.
type DriftEvent struct {
	Timestamp time.Time
	Severity  DriftSeverity
	Details   string
}

// DriftThresholds defines sensitivity thresholds for drift detection.
type DriftThresholds struct {
	SeverityThreshold DriftSeverity
	ScoreDelta        float64
	ObservationWindow time.Duration
}

// DriftPatternAnalysis summarizes detected drift patterns.
type DriftPatternAnalysis struct {
	Patterns []string
	Summary  string
}

// DriftPrediction provides a lightweight stub for future drift forecasting.
type DriftPrediction struct {
	Address    string
	Horizon    time.Duration
	Severity   DriftSeverity
	Confidence float64
}

// DriftAlert represents an alert emitted when drift exceeds thresholds.
type DriftAlert struct {
	ID        string
	Address   string
	Severity  DriftSeverity
	CreatedAt time.Time
	Message   string
}

// GoalRecommendation summarises next actions for a specific goal.
type GoalRecommendation struct {
	GoalID      string
	Title       string
	Description string
	Priority    int
}

// StrategicRecommendation captures higher-level alignment guidance.
type StrategicRecommendation struct {
	Theme         string
	Summary       string
	Impact        string
	RecommendedBy string
}

// PrioritizedRecommendation wraps a recommendation with ranking metadata.
type PrioritizedRecommendation struct {
	Recommendation *AlignmentRecommendation
	Score          float64
	Rank           int
}

// RecommendationHistory tracks lifecycle updates for a recommendation.
type RecommendationHistory struct {
	RecommendationID string
	Entries          []RecommendationHistoryEntry
}

// RecommendationHistoryEntry represents a single change entry.
type RecommendationHistoryEntry struct {
	Timestamp time.Time
	Status    ImplementationStatus
	Notes     string
}

// ImplementationStatus reflects execution state for recommendations.
type ImplementationStatus string

const (
	ImplementationPending ImplementationStatus = "pending"
	ImplementationActive  ImplementationStatus = "active"
	ImplementationBlocked ImplementationStatus = "blocked"
	ImplementationDone    ImplementationStatus = "completed"
)

// RecommendationEffectiveness offers coarse metrics on outcome quality.
type RecommendationEffectiveness struct {
	SuccessRate float64
	AverageTime time.Duration
	Feedback    []string
}

// RecommendationStatistics aggregates recommendation issuance metrics.
type RecommendationStatistics struct {
	TotalCreated    int
	TotalCompleted  int
	AveragePriority float64
	LastUpdated     time.Time
}

// AlignmentMetrics is a lightweight placeholder exported for engine integration.
type AlignmentMetrics struct {
	Assessments  int
	SuccessRate  float64
	FailureRate  float64
	AverageScore float64
}

// GoalMetrics is a stub summarising per-goal metrics.
type GoalMetrics struct {
	GoalID       string
	AverageScore float64
	SuccessRate  float64
	LastUpdated  time.Time
}

// ProgressMetrics is a stub capturing aggregate progress data.
type ProgressMetrics struct {
	OverallCompletion float64
	ActiveGoals       int
	CompletedGoals    int
	UpdatedAt         time.Time
}

// MetricsTrends wraps high-level trend information.
type MetricsTrends struct {
	Metric    string
	TrendLine []float64
	Timestamp time.Time
}

// MetricsReport represents a generated metrics report placeholder.
type MetricsReport struct {
	ID        string
	Generated time.Time
	Summary   string
}

// MetricsConfiguration reflects configuration for metrics collection.
type MetricsConfiguration struct {
	Enabled  bool
	Interval time.Duration
}

// SyncResult summarises a synchronisation run.
type SyncResult struct {
	SyncedItems int
	Errors      []string
}

// ImportResult summarises the outcome of an import operation.
type ImportResult struct {
	Imported int
	Skipped  int
	Errors   []string
}

// SyncSettings captures synchronisation preferences.
type SyncSettings struct {
	Enabled  bool
	Interval time.Duration
}

// SyncStatus provides health information about sync processes.
type SyncStatus struct {
	LastSync time.Time
	Healthy  bool
	Message  string
}

// AssessmentValidation provides validation results for assessments.
type AssessmentValidation struct {
	Valid     bool
	Issues    []string
	CheckedAt time.Time
}

// ConfigurationValidation summarises configuration validation status.
type ConfigurationValidation struct {
	Valid    bool
	Messages []string
}

// WeightsValidation describes validation for weighting schemes.
type WeightsValidation struct {
	Normalized  bool
	Adjustments map[string]float64
}

// ConsistencyIssue represents a detected consistency issue.
type ConsistencyIssue struct {
	Description string
	Severity    DriftSeverity
	DetectedAt  time.Time
}

// AlignmentHealthCheck is a stub for health check outputs.
type AlignmentHealthCheck struct {
	Status    string
	Details   string
	CheckedAt time.Time
}

// NotificationRules captures notification configuration stubs.
type NotificationRules struct {
	Enabled  bool
	Channels []string
}

// NotificationRecord represents a delivered notification.
type NotificationRecord struct {
	ID        string
	Timestamp time.Time
	Recipient string
	Status    string
}
@@ -4,176 +4,175 @@ import (
|
||||
"time"
|
||||
|
||||
"chorus/pkg/ucxl"
|
||||
slurpContext "chorus/pkg/slurp/context"
|
||||
)
|
||||
|
// ProjectGoal represents a high-level project objective
type ProjectGoal struct {
	ID          string     `json:"id"`          // Unique identifier
	Name        string     `json:"name"`        // Goal name
	Description string     `json:"description"` // Detailed description
	Keywords    []string   `json:"keywords"`    // Associated keywords
	Priority    int        `json:"priority"`    // Priority level (1=highest)
	Phase       string     `json:"phase"`       // Project phase
	Category    string     `json:"category"`    // Goal category
	Owner       string     `json:"owner"`       // Goal owner
	Status      GoalStatus `json:"status"`      // Current status

	// Success criteria
	Metrics            []string            `json:"metrics"`             // Success metrics
	SuccessCriteria    []*SuccessCriterion `json:"success_criteria"`    // Detailed success criteria
	AcceptanceCriteria []string            `json:"acceptance_criteria"` // Acceptance criteria

	// Timeline
	StartDate  *time.Time `json:"start_date,omitempty"`  // Goal start date
	TargetDate *time.Time `json:"target_date,omitempty"` // Target completion date
	ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date

	// Relationships
	ParentGoalID *string  `json:"parent_goal_id,omitempty"` // Parent goal
	ChildGoalIDs []string `json:"child_goal_ids"`           // Child goals
	Dependencies []string `json:"dependencies"`             // Goal dependencies

	// Configuration
	Weights        *GoalWeights `json:"weights"`         // Assessment weights
	ThresholdScore float64      `json:"threshold_score"` // Minimum alignment score

	// Metadata
	CreatedAt time.Time              `json:"created_at"` // When created
	UpdatedAt time.Time              `json:"updated_at"` // When last updated
	CreatedBy string                 `json:"created_by"` // Who created it
	Tags      []string               `json:"tags"`       // Goal tags
	Metadata  map[string]interface{} `json:"metadata"`   // Additional metadata
}

// GoalStatus represents the current status of a goal
type GoalStatus string

const (
	GoalStatusDraft     GoalStatus = "draft"     // Goal is in draft state
	GoalStatusActive    GoalStatus = "active"    // Goal is active
	GoalStatusOnHold    GoalStatus = "on_hold"   // Goal is on hold
	GoalStatusCompleted GoalStatus = "completed" // Goal is completed
	GoalStatusCancelled GoalStatus = "cancelled" // Goal is cancelled
	GoalStatusArchived  GoalStatus = "archived"  // Goal is archived
)

// SuccessCriterion represents a specific success criterion for a goal
type SuccessCriterion struct {
	ID           string      `json:"id"`                    // Criterion ID
	Description  string      `json:"description"`           // Criterion description
	MetricName   string      `json:"metric_name"`           // Associated metric
	TargetValue  interface{} `json:"target_value"`          // Target value
	CurrentValue interface{} `json:"current_value"`         // Current value
	Unit         string      `json:"unit"`                  // Value unit
	ComparisonOp string      `json:"comparison_op"`         // Comparison operator (>=, <=, ==, etc.)
	Weight       float64     `json:"weight"`                // Criterion weight
	Achieved     bool        `json:"achieved"`              // Whether achieved
	AchievedAt   *time.Time  `json:"achieved_at,omitempty"` // When achieved
}

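The `comparison_op` field implies a small dispatcher that checks a current metric value against its target. A minimal sketch of such a check for numeric metrics — `evaluateCriterion` is a hypothetical helper, not part of the CHORUS codebase:

```go
package main

import "fmt"

// evaluateCriterion shows how a SuccessCriterion's comparison_op field could
// drive the Achieved flag for numeric metrics. Unknown operators fail closed.
func evaluateCriterion(op string, current, target float64) bool {
	switch op {
	case ">=":
		return current >= target
	case "<=":
		return current <= target
	case "==":
		return current == target
	case ">":
		return current > target
	case "<":
		return current < target
	default:
		return false // unknown operator: treat as not achieved
	}
}

func main() {
	fmt.Println(evaluateCriterion(">=", 0.92, 0.90)) // coverage target met: true
	fmt.Println(evaluateCriterion("<=", 250, 200))   // latency budget missed: false
}
```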
// GoalWeights represents weights for different aspects of goal alignment assessment
type GoalWeights struct {
	KeywordMatch      float64 `json:"keyword_match"`      // Weight for keyword matching
	SemanticAlignment float64 `json:"semantic_alignment"` // Weight for semantic alignment
	PurposeAlignment  float64 `json:"purpose_alignment"`  // Weight for purpose alignment
	TechnologyMatch   float64 `json:"technology_match"`   // Weight for technology matching
	QualityScore      float64 `json:"quality_score"`      // Weight for context quality
	RecentActivity    float64 `json:"recent_activity"`    // Weight for recent activity
	ImportanceScore   float64 `json:"importance_score"`   // Weight for component importance
}

// AlignmentAssessment represents overall alignment assessment for a context
type AlignmentAssessment struct {
	Address           ucxl.Address               `json:"address"`            // Context address
	OverallScore      float64                    `json:"overall_score"`      // Overall alignment score (0-1)
	GoalAlignments    []*GoalAlignment           `json:"goal_alignments"`    // Individual goal alignments
	StrengthAreas     []string                   `json:"strength_areas"`     // Areas of strong alignment
	WeaknessAreas     []string                   `json:"weakness_areas"`     // Areas of weak alignment
	Recommendations   []*AlignmentRecommendation `json:"recommendations"`    // Improvement recommendations
	AssessedAt        time.Time                  `json:"assessed_at"`        // When assessment was performed
	AssessmentVersion string                     `json:"assessment_version"` // Assessment algorithm version
	Confidence        float64                    `json:"confidence"`         // Assessment confidence (0-1)
	Metadata          map[string]interface{}     `json:"metadata"`           // Additional metadata
}

// GoalAlignment represents alignment assessment for a specific goal
type GoalAlignment struct {
	GoalID           string           `json:"goal_id"`           // Goal identifier
	GoalName         string           `json:"goal_name"`         // Goal name
	AlignmentScore   float64          `json:"alignment_score"`   // Alignment score (0-1)
	ComponentScores  *AlignmentScores `json:"component_scores"`  // Component-wise scores
	MatchedKeywords  []string         `json:"matched_keywords"`  // Keywords that matched
	MatchedCriteria  []string         `json:"matched_criteria"`  // Criteria that matched
	Explanation      string           `json:"explanation"`       // Alignment explanation
	ConfidenceLevel  float64          `json:"confidence_level"`  // Confidence in assessment
	ImprovementAreas []string         `json:"improvement_areas"` // Areas for improvement
	Strengths        []string         `json:"strengths"`         // Alignment strengths
}

// AlignmentScores represents component scores for alignment assessment
type AlignmentScores struct {
	KeywordScore    float64 `json:"keyword_score"`    // Keyword matching score
	SemanticScore   float64 `json:"semantic_score"`   // Semantic alignment score
	PurposeScore    float64 `json:"purpose_score"`    // Purpose alignment score
	TechnologyScore float64 `json:"technology_score"` // Technology alignment score
	QualityScore    float64 `json:"quality_score"`    // Context quality score
	ActivityScore   float64 `json:"activity_score"`   // Recent activity score
	ImportanceScore float64 `json:"importance_score"` // Component importance score
}

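GoalWeights and AlignmentScores pair field-for-field, which suggests the overall score is a weighted average of the components. A self-contained sketch of that combination, using minimal local mirrors of the two structs — the `combine` function is an illustrative assumption, not the actual CHORUS scoring code:

```go
package main

import "fmt"

// Minimal local mirrors of the GoalWeights and AlignmentScores fields.
type GoalWeights struct {
	KeywordMatch, SemanticAlignment, PurposeAlignment float64
	TechnologyMatch, QualityScore                     float64
	RecentActivity, ImportanceScore                   float64
}

type AlignmentScores struct {
	KeywordScore, SemanticScore, PurposeScore float64
	TechnologyScore, QualityScore             float64
	ActivityScore, ImportanceScore            float64
}

// combine returns the weighted average of the component scores, normalised by
// the total weight so the result stays in [0,1] when scores do.
func combine(w GoalWeights, s AlignmentScores) float64 {
	total := w.KeywordMatch + w.SemanticAlignment + w.PurposeAlignment +
		w.TechnologyMatch + w.QualityScore + w.RecentActivity + w.ImportanceScore
	if total == 0 {
		return 0
	}
	sum := w.KeywordMatch*s.KeywordScore +
		w.SemanticAlignment*s.SemanticScore +
		w.PurposeAlignment*s.PurposeScore +
		w.TechnologyMatch*s.TechnologyScore +
		w.QualityScore*s.QualityScore +
		w.RecentActivity*s.ActivityScore +
		w.ImportanceScore*s.ImportanceScore
	return sum / total
}

func main() {
	w := GoalWeights{KeywordMatch: 2, SemanticAlignment: 1, QualityScore: 1}
	s := AlignmentScores{KeywordScore: 0.8, SemanticScore: 0.6, QualityScore: 1.0}
	fmt.Printf("%.2f\n", combine(w, s)) // (2*0.8 + 1*0.6 + 1*1.0) / 4 = 0.80
}
```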
// AlignmentRecommendation represents a recommendation for improving alignment
type AlignmentRecommendation struct {
	ID          string             `json:"id"`                // Recommendation ID
	Type        RecommendationType `json:"type"`              // Recommendation type
	Priority    int                `json:"priority"`          // Priority (1=highest)
	Title       string             `json:"title"`             // Recommendation title
	Description string             `json:"description"`       // Detailed description
	GoalID      *string            `json:"goal_id,omitempty"` // Related goal
	Address     ucxl.Address       `json:"address"`           // Context address

	// Implementation details
	ActionItems     []string    `json:"action_items"`     // Specific actions
	EstimatedEffort EffortLevel `json:"estimated_effort"` // Estimated effort
	ExpectedImpact  ImpactLevel `json:"expected_impact"`  // Expected impact
	RequiredRoles   []string    `json:"required_roles"`   // Required roles
	Prerequisites   []string    `json:"prerequisites"`    // Prerequisites

	// Status tracking
	Status      RecommendationStatus `json:"status"`                 // Implementation status
	AssignedTo  []string             `json:"assigned_to"`            // Assigned team members
	CreatedAt   time.Time            `json:"created_at"`             // When created
	DueDate     *time.Time           `json:"due_date,omitempty"`     // Implementation due date
	CompletedAt *time.Time           `json:"completed_at,omitempty"` // When completed

	// Metadata
	Tags     []string               `json:"tags"`     // Recommendation tags
	Metadata map[string]interface{} `json:"metadata"` // Additional metadata
}

// RecommendationType represents types of alignment recommendations
type RecommendationType string

const (
	RecommendationKeywordImprovement RecommendationType = "keyword_improvement" // Improve keyword matching
	RecommendationPurposeAlignment   RecommendationType = "purpose_alignment"   // Align purpose better
	RecommendationTechnologyUpdate   RecommendationType = "technology_update"   // Update technology usage
	RecommendationQualityImprovement RecommendationType = "quality_improvement" // Improve context quality
	RecommendationDocumentation      RecommendationType = "documentation"       // Add/improve documentation
	RecommendationRefactoring        RecommendationType = "refactoring"         // Code refactoring
	RecommendationArchitectural      RecommendationType = "architectural"       // Architectural changes
	RecommendationTesting            RecommendationType = "testing"             // Testing improvements
	RecommendationPerformance        RecommendationType = "performance"         // Performance optimization
	RecommendationSecurity           RecommendationType = "security"            // Security enhancements
)

// EffortLevel represents estimated effort levels
type EffortLevel string

const (
	EffortLow      EffortLevel = "low"       // Low effort (1-2 hours)
	EffortMedium   EffortLevel = "medium"    // Medium effort (1-2 days)
	EffortHigh     EffortLevel = "high"      // High effort (1-2 weeks)
	EffortVeryHigh EffortLevel = "very_high" // Very high effort (>2 weeks)
)

@@ -181,9 +180,9 @@ const (
type ImpactLevel string

const (
	ImpactLow      ImpactLevel = "low"      // Low impact
	ImpactMedium   ImpactLevel = "medium"   // Medium impact
	ImpactHigh     ImpactLevel = "high"     // High impact
	ImpactCritical ImpactLevel = "critical" // Critical impact
)

@@ -201,38 +200,38 @@ const (

// GoalProgress represents progress toward goal achievement
type GoalProgress struct {
	GoalID               string               `json:"goal_id"`                        // Goal identifier
	CompletionPercentage float64              `json:"completion_percentage"`          // Completion percentage (0-100)
	CriteriaProgress     []*CriterionProgress `json:"criteria_progress"`              // Progress for each criterion
	Milestones           []*MilestoneProgress `json:"milestones"`                     // Milestone progress
	Velocity             float64              `json:"velocity"`                       // Progress velocity (% per day)
	EstimatedCompletion  *time.Time           `json:"estimated_completion,omitempty"` // Estimated completion date
	RiskFactors          []string             `json:"risk_factors"`                   // Identified risk factors
	Blockers             []string             `json:"blockers"`                       // Current blockers
	LastUpdated          time.Time            `json:"last_updated"`                   // When last updated
	UpdatedBy            string               `json:"updated_by"`                     // Who last updated
}

// CriterionProgress represents progress for a specific success criterion
type CriterionProgress struct {
	CriterionID        string      `json:"criterion_id"`          // Criterion ID
	CurrentValue       interface{} `json:"current_value"`         // Current value
	TargetValue        interface{} `json:"target_value"`          // Target value
	ProgressPercentage float64     `json:"progress_percentage"`   // Progress percentage
	Achieved           bool        `json:"achieved"`              // Whether achieved
	AchievedAt         *time.Time  `json:"achieved_at,omitempty"` // When achieved
	Notes              string      `json:"notes"`                 // Progress notes
}

// MilestoneProgress represents progress for a goal milestone
type MilestoneProgress struct {
	MilestoneID          string          `json:"milestone_id"`          // Milestone ID
	Name                 string          `json:"name"`                  // Milestone name
	Status               MilestoneStatus `json:"status"`                // Current status
	CompletionPercentage float64         `json:"completion_percentage"` // Completion percentage
	PlannedDate          time.Time       `json:"planned_date"`          // Planned completion date
	ActualDate           *time.Time      `json:"actual_date,omitempty"` // Actual completion date
	DelayReason          string          `json:"delay_reason"`          // Reason for delay if applicable
}

// MilestoneStatus represents status of a milestone
@@ -248,27 +247,27 @@ const (

// AlignmentDrift represents detected alignment drift
type AlignmentDrift struct {
	Address            ucxl.Address  `json:"address"`             // Context address
	DriftType          DriftType     `json:"drift_type"`          // Type of drift
	Severity           DriftSeverity `json:"severity"`            // Drift severity
	CurrentScore       float64       `json:"current_score"`       // Current alignment score
	PreviousScore      float64       `json:"previous_score"`      // Previous alignment score
	ScoreDelta         float64       `json:"score_delta"`         // Change in score
	AffectedGoals      []string      `json:"affected_goals"`      // Goals affected by drift
	DetectedAt         time.Time     `json:"detected_at"`         // When drift was detected
	DriftReason        []string      `json:"drift_reason"`        // Reasons for drift
	RecommendedActions []string      `json:"recommended_actions"` // Recommended actions
	Priority           DriftPriority `json:"priority"`            // Priority for addressing
}

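`ScoreDelta` and `Severity` suggest drift detection reduces to comparing consecutive alignment scores and bucketing the magnitude of the change. A sketch of that mapping — the threshold values are illustrative assumptions, not constants from the CHORUS code:

```go
package main

import (
	"fmt"
	"math"
)

// classifyDrift computes the score delta between two assessments and maps its
// magnitude onto a severity bucket (thresholds are illustrative).
func classifyDrift(previous, current float64) (delta float64, severity string) {
	delta = current - previous
	switch mag := math.Abs(delta); {
	case mag >= 0.30:
		severity = "critical"
	case mag >= 0.15:
		severity = "high"
	case mag >= 0.05:
		severity = "medium"
	default:
		severity = "low"
	}
	return delta, severity
}

func main() {
	delta, sev := classifyDrift(0.82, 0.61)
	fmt.Printf("delta=%.2f severity=%s\n", delta, sev) // delta=-0.21 severity=high
}
```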
// DriftType represents types of alignment drift
type DriftType string

const (
	DriftTypeGradual       DriftType = "gradual"        // Gradual drift over time
	DriftTypeSudden        DriftType = "sudden"         // Sudden drift
	DriftTypeOscillating   DriftType = "oscillating"    // Oscillating drift pattern
	DriftTypeGoalChange    DriftType = "goal_change"    // Due to goal changes
	DriftTypeContextChange DriftType = "context_change" // Due to context changes
)

@@ -286,68 +285,68 @@ const (
type DriftPriority string

const (
	DriftPriorityLow    DriftPriority = "low"    // Low priority
	DriftPriorityMedium DriftPriority = "medium" // Medium priority
	DriftPriorityHigh   DriftPriority = "high"   // High priority
	DriftPriorityUrgent DriftPriority = "urgent" // Urgent priority
)

// AlignmentTrends represents alignment trends over time
type AlignmentTrends struct {
	Address          ucxl.Address       `json:"address"`           // Context address
	TimeRange        time.Duration      `json:"time_range"`        // Analyzed time range
	DataPoints       []*TrendDataPoint  `json:"data_points"`       // Trend data points
	OverallTrend     TrendDirection     `json:"overall_trend"`     // Overall trend direction
	TrendStrength    float64            `json:"trend_strength"`    // Trend strength (0-1)
	Volatility       float64            `json:"volatility"`        // Score volatility
	SeasonalPatterns []*SeasonalPattern `json:"seasonal_patterns"` // Detected seasonal patterns
	AnomalousPoints  []*AnomalousPoint  `json:"anomalous_points"`  // Anomalous data points
	Predictions      []*TrendPrediction `json:"predictions"`       // Future trend predictions
	AnalyzedAt       time.Time          `json:"analyzed_at"`       // When analysis was performed
}

// TrendDataPoint represents a single data point in alignment trends
type TrendDataPoint struct {
	Timestamp      time.Time          `json:"timestamp"`       // Data point timestamp
	AlignmentScore float64            `json:"alignment_score"` // Alignment score at this time
	GoalScores     map[string]float64 `json:"goal_scores"`     // Individual goal scores
	Events         []string           `json:"events"`          // Events that occurred around this time
}

// TrendDirection represents direction of alignment trends
type TrendDirection string

const (
	TrendDirectionImproving TrendDirection = "improving" // Improving trend
	TrendDirectionDeclining TrendDirection = "declining" // Declining trend
	TrendDirectionStable    TrendDirection = "stable"    // Stable trend
	TrendDirectionVolatile  TrendDirection = "volatile"  // Volatile trend
)

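A series of TrendDataPoint scores can be reduced to one of these four directions by comparing net change against spread. A minimal sketch, assuming illustrative 0.05/0.25 thresholds that are not taken from the CHORUS code:

```go
package main

import "fmt"

// classifyTrend maps a score series onto a trend direction: large spread with
// little net movement reads as volatile; otherwise the sign and size of the
// net change decide between improving, declining, and stable.
func classifyTrend(scores []float64) string {
	if len(scores) < 2 {
		return "stable"
	}
	min, max := scores[0], scores[0]
	for _, s := range scores {
		if s < min {
			min = s
		}
		if s > max {
			max = s
		}
	}
	net := scores[len(scores)-1] - scores[0]
	switch {
	case max-min > 0.25 && net > -0.05 && net < 0.05:
		return "volatile"
	case net >= 0.05:
		return "improving"
	case net <= -0.05:
		return "declining"
	default:
		return "stable"
	}
}

func main() {
	fmt.Println(classifyTrend([]float64{0.60, 0.65, 0.72, 0.78})) // improving
	fmt.Println(classifyTrend([]float64{0.70, 0.40, 0.95, 0.71})) // volatile
}
```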
// SeasonalPattern represents a detected seasonal pattern in alignment
type SeasonalPattern struct {
	PatternType string        `json:"pattern_type"` // Type of pattern (weekly, monthly, etc.)
	Period      time.Duration `json:"period"`       // Pattern period
	Amplitude   float64       `json:"amplitude"`    // Pattern amplitude
	Confidence  float64       `json:"confidence"`   // Pattern confidence
	Description string        `json:"description"`  // Pattern description
}

// AnomalousPoint represents an anomalous data point
type AnomalousPoint struct {
	Timestamp      time.Time `json:"timestamp"`       // When anomaly occurred
	ExpectedScore  float64   `json:"expected_score"`  // Expected alignment score
	ActualScore    float64   `json:"actual_score"`    // Actual alignment score
	AnomalyScore   float64   `json:"anomaly_score"`   // Anomaly score
	PossibleCauses []string  `json:"possible_causes"` // Possible causes
}

// TrendPrediction represents a prediction of future alignment trends
type TrendPrediction struct {
	Timestamp          time.Time           `json:"timestamp"`           // Predicted timestamp
	PredictedScore     float64             `json:"predicted_score"`     // Predicted alignment score
	ConfidenceInterval *ConfidenceInterval `json:"confidence_interval"` // Confidence interval
	Probability        float64             `json:"probability"`         // Prediction probability
}

// ConfidenceInterval represents a confidence interval for predictions
@@ -359,21 +358,21 @@ type ConfidenceInterval struct {

// AlignmentWeights represents weights for alignment calculation
type AlignmentWeights struct {
	GoalWeights      map[string]float64 `json:"goal_weights"`      // Weights by goal ID
	CategoryWeights  map[string]float64 `json:"category_weights"`  // Weights by goal category
	PriorityWeights  map[int]float64    `json:"priority_weights"`  // Weights by priority level
	PhaseWeights     map[string]float64 `json:"phase_weights"`     // Weights by project phase
	RoleWeights      map[string]float64 `json:"role_weights"`      // Weights by role
	ComponentWeights *AlignmentScores   `json:"component_weights"` // Weights for score components
	TemporalWeights  *TemporalWeights   `json:"temporal_weights"`  // Temporal weighting factors
}

// TemporalWeights represents temporal weighting factors
type TemporalWeights struct {
	RecentWeight     float64       `json:"recent_weight"`     // Weight for recent changes
	DecayFactor      float64       `json:"decay_factor"`      // Score decay factor over time
	RecencyWindow    time.Duration `json:"recency_window"`    // Window for considering recent activity
	HistoricalWeight float64       `json:"historical_weight"` // Weight for historical alignment
}

// GoalFilter represents filtering criteria for goal listing
@@ -393,55 +392,55 @@ type GoalFilter struct {

// GoalHierarchy represents the hierarchical structure of goals
type GoalHierarchy struct {
	RootGoals   []*GoalNode `json:"root_goals"`   // Root level goals
	MaxDepth    int         `json:"max_depth"`    // Maximum hierarchy depth
	TotalGoals  int         `json:"total_goals"`  // Total number of goals
	GeneratedAt time.Time   `json:"generated_at"` // When hierarchy was generated
}

||||
// GoalNode represents a node in the goal hierarchy
|
||||
type GoalNode struct {
|
||||
Goal *ProjectGoal `json:"goal"` // Goal information
|
||||
Children []*GoalNode `json:"children"` // Child goals
|
||||
Depth int `json:"depth"` // Depth in hierarchy
|
||||
Path []string `json:"path"` // Path from root
|
||||
Goal *ProjectGoal `json:"goal"` // Goal information
|
||||
Children []*GoalNode `json:"children"` // Child goals
|
||||
Depth int `json:"depth"` // Depth in hierarchy
|
||||
Path []string `json:"path"` // Path from root
|
||||
}
|
||||
|
||||
// GoalValidation represents validation results for a goal
|
||||
type GoalValidation struct {
|
||||
Valid bool `json:"valid"` // Whether goal is valid
|
||||
Issues []*ValidationIssue `json:"issues"` // Validation issues
|
||||
Warnings []*ValidationWarning `json:"warnings"` // Validation warnings
|
||||
ValidatedAt time.Time `json:"validated_at"` // When validated
|
||||
Valid bool `json:"valid"` // Whether goal is valid
|
||||
Issues []*ValidationIssue `json:"issues"` // Validation issues
|
||||
Warnings []*ValidationWarning `json:"warnings"` // Validation warnings
|
||||
ValidatedAt time.Time `json:"validated_at"` // When validated
|
||||
}
|
||||
|
||||
// ValidationIssue represents a validation issue
|
||||
type ValidationIssue struct {
|
||||
Field string `json:"field"` // Affected field
|
||||
Code string `json:"code"` // Issue code
|
||||
Message string `json:"message"` // Issue message
|
||||
Severity string `json:"severity"` // Issue severity
|
||||
Suggestion string `json:"suggestion"` // Suggested fix
|
||||
Field string `json:"field"` // Affected field
|
||||
Code string `json:"code"` // Issue code
|
||||
Message string `json:"message"` // Issue message
|
||||
Severity string `json:"severity"` // Issue severity
|
||||
Suggestion string `json:"suggestion"` // Suggested fix
|
||||
}
|
||||
|
||||
// ValidationWarning represents a validation warning
|
||||
type ValidationWarning struct {
|
||||
Field string `json:"field"` // Affected field
|
||||
Code string `json:"code"` // Warning code
|
||||
Message string `json:"message"` // Warning message
|
||||
Suggestion string `json:"suggestion"` // Suggested improvement
|
||||
Field string `json:"field"` // Affected field
|
||||
Code string `json:"code"` // Warning code
|
||||
Message string `json:"message"` // Warning message
|
||||
Suggestion string `json:"suggestion"` // Suggested improvement
|
||||
}
|
||||
|
||||
// GoalMilestone represents a milestone for goal tracking
|
||||
type GoalMilestone struct {
|
||||
ID string `json:"id"` // Milestone ID
|
||||
Name string `json:"name"` // Milestone name
|
||||
Description string `json:"description"` // Milestone description
|
||||
PlannedDate time.Time `json:"planned_date"` // Planned completion date
|
||||
Weight float64 `json:"weight"` // Milestone weight
|
||||
Criteria []string `json:"criteria"` // Completion criteria
|
||||
Dependencies []string `json:"dependencies"` // Milestone dependencies
|
||||
CreatedAt time.Time `json:"created_at"` // When created
|
||||
ID string `json:"id"` // Milestone ID
|
||||
Name string `json:"name"` // Milestone name
|
||||
Description string `json:"description"` // Milestone description
|
||||
PlannedDate time.Time `json:"planned_date"` // Planned completion date
|
||||
Weight float64 `json:"weight"` // Milestone weight
|
||||
Criteria []string `json:"criteria"` // Completion criteria
|
||||
Dependencies []string `json:"dependencies"` // Milestone dependencies
|
||||
CreatedAt time.Time `json:"created_at"` // When created
|
||||
}
|
||||
|
||||
// MilestoneStatus represents status of a milestone (duplicate removed)
|
||||
@@ -449,39 +448,39 @@ type GoalMilestone struct {
|
||||
|
||||
// ProgressUpdate represents an update to goal progress
type ProgressUpdate struct {
	UpdateType       ProgressUpdateType `json:"update_type"`       // Type of update
	CompletionDelta  float64            `json:"completion_delta"`  // Change in completion percentage
	CriteriaUpdates  []*CriterionUpdate `json:"criteria_updates"`  // Updates to criteria
	MilestoneUpdates []*MilestoneUpdate `json:"milestone_updates"` // Updates to milestones
	Notes            string             `json:"notes"`             // Update notes
	UpdatedBy        string             `json:"updated_by"`        // Who made the update
	Evidence         []string           `json:"evidence"`          // Evidence for progress
	RiskFactors      []string           `json:"risk_factors"`      // New risk factors
	Blockers         []string           `json:"blockers"`          // New blockers
}

// ProgressUpdateType represents types of progress updates
type ProgressUpdateType string

const (
	ProgressUpdateTypeIncrement ProgressUpdateType = "increment" // Incremental progress
	ProgressUpdateTypeAbsolute  ProgressUpdateType = "absolute"  // Absolute progress value
	ProgressUpdateTypeMilestone ProgressUpdateType = "milestone" // Milestone completion
	ProgressUpdateTypeCriterion ProgressUpdateType = "criterion" // Criterion achievement
)

// CriterionUpdate represents an update to a success criterion
type CriterionUpdate struct {
	CriterionID string      `json:"criterion_id"` // Criterion ID
	NewValue    interface{} `json:"new_value"`    // New current value
	Achieved    bool        `json:"achieved"`     // Whether now achieved
	Notes       string      `json:"notes"`        // Update notes
}

// MilestoneUpdate represents an update to a milestone
type MilestoneUpdate struct {
	MilestoneID   string          `json:"milestone_id"`             // Milestone ID
	NewStatus     MilestoneStatus `json:"new_status"`               // New status
	CompletedDate *time.Time      `json:"completed_date,omitempty"` // Completion date if completed
	Notes         string          `json:"notes"`                    // Update notes
}

@@ -26,12 +26,25 @@ type ContextNode struct {
	Insights []string `json:"insights"` // Analytical insights

	// Hierarchy control
	OverridesParent    bool         `json:"overrides_parent"`    // Whether this overrides parent context
	ContextSpecificity int          `json:"context_specificity"` // Specificity level (higher = more specific)
	AppliesToChildren  bool         `json:"applies_to_children"` // Whether this applies to child directories
	AppliesTo          ContextScope `json:"applies_to"`          // Scope of application within hierarchy
	Parent             *string      `json:"parent,omitempty"`    // Parent context path
	Children           []string     `json:"children,omitempty"`  // Child context paths

	// File metadata
	FileType     string     `json:"file_type"`               // File extension or type
	Language     *string    `json:"language,omitempty"`      // Programming language
	Size         *int64     `json:"size,omitempty"`          // File size in bytes
	LastModified *time.Time `json:"last_modified,omitempty"` // Last modification timestamp
	ContentHash  *string    `json:"content_hash,omitempty"`  // Content hash for change detection

	// Temporal metadata
	GeneratedAt   time.Time `json:"generated_at"`   // When context was generated
	UpdatedAt     time.Time `json:"updated_at"`     // Last update timestamp
	CreatedBy     string    `json:"created_by"`     // Who created the context
	WhoUpdated    string    `json:"who_updated"`    // Who performed the last update
	RAGConfidence float64   `json:"rag_confidence"` // RAG system confidence (0-1)

	// Access control
@@ -261,11 +261,11 @@ func (ch *ConsistentHashingImpl) GetMetrics() *ConsistentHashMetrics {
	defer ch.mu.RUnlock()

	return &ConsistentHashMetrics{
		TotalKeys:         0, // Would be maintained by usage tracking
		NodeUtilization:   ch.GetNodeDistribution(),
		RebalanceEvents:   0,   // Would be maintained by event tracking
		AverageSeekTime:   0.1, // Placeholder - would be measured
		LoadBalanceScore:  ch.calculateLoadBalance(),
		LastRebalanceTime: 0, // Would be maintained by event tracking
	}
}

@@ -364,8 +364,8 @@ func (ch *ConsistentHashingImpl) FindClosestNodes(key string, count int) ([]stri
	if hash >= keyHash {
		distance = hash - keyHash
	} else {
		// Wrap around distance without overflowing 32-bit space
		distance = uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(hash))
	}

	distances = append(distances, struct {
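The wrap-around branch above can be exercised in isolation. A minimal sketch (the `ringDistance` helper name is hypothetical, not part of the diff) showing how widening to `uint64` keeps the `1<<32` term from overflowing a 32-bit hash space:

```go
package main

import "fmt"

// ringDistance returns the clockwise distance from keyHash to nodeHash on a
// 32-bit hash ring. The wrap-around term is computed in uint64 so that
// 1<<32 does not overflow before the final narrowing conversion.
func ringDistance(keyHash, nodeHash uint32) uint32 {
	if nodeHash >= keyHash {
		return nodeHash - keyHash
	}
	// Remaining space after keyHash, plus nodeHash's offset from zero.
	return uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(nodeHash))
}

func main() {
	fmt.Println(ringDistance(10, 30))         // 20: no wrap needed
	fmt.Println(ringDistance(0xFFFFFFF0, 16)) // 32: wraps past zero
}
```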
@@ -7,38 +7,38 @@ import (
	"sync"
	"time"

	"chorus/pkg/config"
	"chorus/pkg/crypto"
	"chorus/pkg/dht"
	"chorus/pkg/election"
	slurpContext "chorus/pkg/slurp/context"
	"chorus/pkg/ucxl"
)

// DistributionCoordinator orchestrates distributed context operations across the cluster
type DistributionCoordinator struct {
	mu               sync.RWMutex
	config           *config.Config
	dht              dht.DHT
	roleCrypto       *crypto.RoleCrypto
	election         election.Election
	distributor      ContextDistributor
	replicationMgr   ReplicationManager
	conflictResolver ConflictResolver
	gossipProtocol   GossipProtocol
	networkMgr       NetworkManager

	// Coordination state
	isLeader          bool
	leaderID          string
	coordinationTasks chan *CoordinationTask
	distributionQueue chan *DistributionRequest
	roleFilters       map[string]*RoleFilter
	healthMonitors    map[string]*HealthMonitor

	// Statistics and metrics
	stats              *CoordinationStatistics
	performanceMetrics *PerformanceMetrics

	// Configuration
	maxConcurrentTasks int

@@ -49,14 +49,14 @@ type DistributionCoordinator struct {

// CoordinationTask represents a task for the coordinator
type CoordinationTask struct {
	TaskID      string               `json:"task_id"`
	TaskType    CoordinationTaskType `json:"task_type"`
	Priority    Priority             `json:"priority"`
	CreatedAt   time.Time            `json:"created_at"`
	RequestedBy string               `json:"requested_by"`
	Payload     interface{}          `json:"payload"`
	Context     context.Context      `json:"-"`
	Callback    func(error)          `json:"-"`
}

// CoordinationTaskType represents different types of coordination tasks

@@ -74,55 +74,55 @@ const (
// DistributionRequest represents a request for context distribution
type DistributionRequest struct {
	RequestID   string                           `json:"request_id"`
	ContextNode *slurpContext.ContextNode        `json:"context_node"`
	TargetRoles []string                         `json:"target_roles"`
	Priority    Priority                         `json:"priority"`
	RequesterID string                           `json:"requester_id"`
	CreatedAt   time.Time                        `json:"created_at"`
	Options     *DistributionOptions             `json:"options"`
	Callback    func(*DistributionResult, error) `json:"-"`
}

// DistributionOptions contains options for context distribution
type DistributionOptions struct {
	ReplicationFactor  int                `json:"replication_factor"`
	ConsistencyLevel   ConsistencyLevel   `json:"consistency_level"`
	EncryptionLevel    crypto.AccessLevel `json:"encryption_level"`
	TTL                *time.Duration     `json:"ttl,omitempty"`
	PreferredZones     []string           `json:"preferred_zones"`
	ExcludedNodes      []string           `json:"excluded_nodes"`
	ConflictResolution ResolutionType     `json:"conflict_resolution"`
}

// DistributionResult represents the result of a distribution operation
type DistributionResult struct {
	RequestID         string              `json:"request_id"`
	Success           bool                `json:"success"`
	DistributedNodes  []string            `json:"distributed_nodes"`
	ReplicationFactor int                 `json:"replication_factor"`
	ProcessingTime    time.Duration       `json:"processing_time"`
	Errors            []string            `json:"errors"`
	ConflictResolved  *ConflictResolution `json:"conflict_resolved,omitempty"`
	CompletedAt       time.Time           `json:"completed_at"`
}

// RoleFilter manages role-based filtering for context access
type RoleFilter struct {
	RoleID              string             `json:"role_id"`
	AccessLevel         crypto.AccessLevel `json:"access_level"`
	AllowedCompartments []string           `json:"allowed_compartments"`
	FilterRules         []*FilterRule      `json:"filter_rules"`
	LastUpdated         time.Time          `json:"last_updated"`
}

// FilterRule represents a single filtering rule
type FilterRule struct {
	RuleID   string                 `json:"rule_id"`
	RuleType FilterRuleType         `json:"rule_type"`
	Pattern  string                 `json:"pattern"`
	Action   FilterAction           `json:"action"`
	Metadata map[string]interface{} `json:"metadata"`
}

// FilterRuleType represents different types of filter rules

@@ -139,10 +139,10 @@ const (
type FilterAction string

const (
	FilterActionAllow  FilterAction = "allow"
	FilterActionDeny   FilterAction = "deny"
	FilterActionModify FilterAction = "modify"
	FilterActionAudit  FilterAction = "audit"
)

// HealthMonitor monitors the health of a specific component

@@ -160,10 +160,10 @@ type HealthMonitor struct {
type ComponentType string

const (
	ComponentTypeDHT              ComponentType = "dht"
	ComponentTypeReplication      ComponentType = "replication"
	ComponentTypeGossip           ComponentType = "gossip"
	ComponentTypeNetwork          ComponentType = "network"
	ComponentTypeConflictResolver ComponentType = "conflict_resolver"
)

@@ -190,13 +190,13 @@ type CoordinationStatistics struct {
// PerformanceMetrics tracks detailed performance metrics
type PerformanceMetrics struct {
	ThroughputPerSecond float64            `json:"throughput_per_second"`
	LatencyPercentiles  map[string]float64 `json:"latency_percentiles"`
	ErrorRateByType     map[string]float64 `json:"error_rate_by_type"`
	ResourceUtilization map[string]float64 `json:"resource_utilization"`
	NetworkMetrics      *NetworkMetrics    `json:"network_metrics"`
	StorageMetrics      *StorageMetrics    `json:"storage_metrics"`
	LastCalculated      time.Time          `json:"last_calculated"`
}

// NetworkMetrics tracks network-related performance

@@ -210,24 +210,24 @@ type NetworkMetrics struct {

// StorageMetrics tracks storage-related performance
type StorageMetrics struct {
	TotalContexts         int64   `json:"total_contexts"`
	StorageUtilization    float64 `json:"storage_utilization"`
	CompressionRatio      float64 `json:"compression_ratio"`
	ReplicationEfficiency float64 `json:"replication_efficiency"`
	CacheHitRate          float64 `json:"cache_hit_rate"`
}

// NewDistributionCoordinator creates a new distribution coordinator
func NewDistributionCoordinator(
	config *config.Config,
	dhtInstance dht.DHT,
	roleCrypto *crypto.RoleCrypto,
	election election.Election,
) (*DistributionCoordinator, error) {
	if config == nil {
		return nil, fmt.Errorf("config is required")
	}
	if dhtInstance == nil {
		return nil, fmt.Errorf("DHT instance is required")
	}
	if roleCrypto == nil {

@@ -238,14 +238,14 @@ func NewDistributionCoordinator(
	}

	// Create distributor
	distributor, err := NewDHTContextDistributor(dhtInstance, roleCrypto, election, config)
	if err != nil {
		return nil, fmt.Errorf("failed to create context distributor: %w", err)
	}

	coord := &DistributionCoordinator{
		config:      config,
		dht:         dhtInstance,
		roleCrypto:  roleCrypto,
		election:    election,
		distributor: distributor,

@@ -264,9 +264,9 @@ func NewDistributionCoordinator(
			LatencyPercentiles:  make(map[string]float64),
			ErrorRateByType:     make(map[string]float64),
			ResourceUtilization: make(map[string]float64),
			NetworkMetrics:      &NetworkMetrics{},
			StorageMetrics:      &StorageMetrics{},
			LastCalculated:      time.Now(),
		},
	}

@@ -356,7 +356,7 @@ func (dc *DistributionCoordinator) CoordinateReplication(
		CreatedAt:   time.Now(),
		RequestedBy: dc.config.Agent.ID,
		Payload: map[string]interface{}{
			"address":       address,
			"target_factor": targetFactor,
		},
		Context: ctx,

@@ -398,14 +398,14 @@ func (dc *DistributionCoordinator) GetClusterHealth() (*ClusterHealth, error) {
	defer dc.mu.RUnlock()

	health := &ClusterHealth{
		OverallStatus:   dc.calculateOverallHealth(),
		NodeCount:       len(dc.healthMonitors) + 1, // Placeholder count including current node
		HealthyNodes:    0,
		UnhealthyNodes:  0,
		ComponentHealth: make(map[string]*ComponentHealth),
		LastUpdated:     time.Now(),
		Alerts:          []string{},
		Recommendations: []string{},
	}

	// Calculate component health

@@ -598,8 +598,8 @@ func (dc *DistributionCoordinator) initializeHealthMonitors() {
	components := map[string]ComponentType{
		"dht":               ComponentTypeDHT,
		"replication":       ComponentTypeReplication,
		"gossip":            ComponentTypeGossip,
		"network":           ComponentTypeNetwork,
		"conflict_resolver": ComponentTypeConflictResolver,
	}
@@ -682,8 +682,8 @@ func (dc *DistributionCoordinator) executeDistribution(ctx context.Context, requ
		Success:          false,
		DistributedNodes: []string{},
		ProcessingTime:   0,
		Errors:           []string{},
		CompletedAt:      time.Now(),
	}

	// Execute distribution via distributor

@@ -703,14 +703,14 @@ func (dc *DistributionCoordinator) executeDistribution(ctx context.Context, requ

// ClusterHealth represents overall cluster health
type ClusterHealth struct {
	OverallStatus   HealthStatus                `json:"overall_status"`
	NodeCount       int                         `json:"node_count"`
	HealthyNodes    int                         `json:"healthy_nodes"`
	UnhealthyNodes  int                         `json:"unhealthy_nodes"`
	ComponentHealth map[string]*ComponentHealth `json:"component_health"`
	LastUpdated     time.Time                   `json:"last_updated"`
	Alerts          []string                    `json:"alerts"`
	Recommendations []string                    `json:"recommendations"`
}

// ComponentHealth represents individual component health

@@ -736,14 +736,14 @@ func (dc *DistributionCoordinator) getDefaultDistributionOptions() *Distribution
	return &DistributionOptions{
		ReplicationFactor:  3,
		ConsistencyLevel:   ConsistencyEventual,
		EncryptionLevel:    crypto.AccessLevel(slurpContext.AccessMedium),
		ConflictResolution: ResolutionMerged,
	}
}

func (dc *DistributionCoordinator) getAccessLevelForRole(role string) crypto.AccessLevel {
	// Placeholder implementation
	return crypto.AccessLevel(slurpContext.AccessMedium)
}

func (dc *DistributionCoordinator) getAllowedCompartments(role string) []string {

@@ -796,11 +796,11 @@ func (dc *DistributionCoordinator) updatePerformanceMetrics() {

func (dc *DistributionCoordinator) priorityFromSeverity(severity ConflictSeverity) Priority {
	switch severity {
	case ConflictSeverityCritical:
		return PriorityCritical
	case ConflictSeverityHigh:
		return PriorityHigh
	case ConflictSeverityMedium:
		return PriorityNormal
	default:
		return PriorityLow
@@ -9,12 +9,12 @@ import (
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"chorus/pkg/dht"
|
||||
"chorus/pkg/crypto"
|
||||
"chorus/pkg/election"
|
||||
"chorus/pkg/ucxl"
|
||||
"chorus/pkg/config"
|
||||
"chorus/pkg/crypto"
|
||||
"chorus/pkg/dht"
|
||||
"chorus/pkg/election"
|
||||
slurpContext "chorus/pkg/slurp/context"
|
||||
"chorus/pkg/ucxl"
|
||||
)
|
||||
|
||||
// ContextDistributor handles distributed context operations via DHT
|
||||
@@ -61,6 +61,12 @@ type ContextDistributor interface {
|
||||
|
||||
// SetReplicationPolicy configures replication behavior
|
||||
SetReplicationPolicy(policy *ReplicationPolicy) error
|
||||
|
||||
// Start initializes background distribution routines
|
||||
Start(ctx context.Context) error
|
||||
|
||||
// Stop releases distribution resources
|
||||
Stop(ctx context.Context) error
|
||||
}
|
||||
|
||||
// DHTStorage provides direct DHT storage operations for context data
|
||||
@@ -175,59 +181,59 @@ type NetworkManager interface {
|
||||
|
||||
// DistributionCriteria represents criteria for listing distributed contexts
|
||||
type DistributionCriteria struct {
|
||||
Tags []string `json:"tags"` // Required tags
|
||||
Technologies []string `json:"technologies"` // Required technologies
|
||||
MinReplicas int `json:"min_replicas"` // Minimum replica count
|
||||
MaxAge *time.Duration `json:"max_age"` // Maximum age
|
||||
HealthyOnly bool `json:"healthy_only"` // Only healthy replicas
|
||||
Limit int `json:"limit"` // Maximum results
|
||||
Offset int `json:"offset"` // Result offset
|
||||
Tags []string `json:"tags"` // Required tags
|
||||
Technologies []string `json:"technologies"` // Required technologies
|
||||
MinReplicas int `json:"min_replicas"` // Minimum replica count
|
||||
MaxAge *time.Duration `json:"max_age"` // Maximum age
|
||||
HealthyOnly bool `json:"healthy_only"` // Only healthy replicas
|
||||
Limit int `json:"limit"` // Maximum results
|
||||
Offset int `json:"offset"` // Result offset
|
||||
}
|
||||
|
||||
// DistributedContextInfo represents information about distributed context
|
||||
type DistributedContextInfo struct {
|
||||
Address ucxl.Address `json:"address"` // Context address
|
||||
Roles []string `json:"roles"` // Accessible roles
|
||||
ReplicaCount int `json:"replica_count"` // Number of replicas
|
||||
HealthyReplicas int `json:"healthy_replicas"` // Healthy replica count
|
||||
LastUpdated time.Time `json:"last_updated"` // Last update time
|
||||
Version int64 `json:"version"` // Version number
|
||||
Size int64 `json:"size"` // Data size
|
||||
Checksum string `json:"checksum"` // Data checksum
|
||||
Address ucxl.Address `json:"address"` // Context address
|
||||
Roles []string `json:"roles"` // Accessible roles
|
||||
ReplicaCount int `json:"replica_count"` // Number of replicas
|
||||
HealthyReplicas int `json:"healthy_replicas"` // Healthy replica count
|
||||
LastUpdated time.Time `json:"last_updated"` // Last update time
|
||||
Version int64 `json:"version"` // Version number
|
||||
Size int64 `json:"size"` // Data size
|
||||
Checksum string `json:"checksum"` // Data checksum
|
||||
}
|
||||
|
||||
// ConflictResolution represents the result of conflict resolution
|
||||
type ConflictResolution struct {
|
||||
Address ucxl.Address `json:"address"` // Context address
|
||||
ResolutionType ResolutionType `json:"resolution_type"` // How conflict was resolved
|
||||
MergedContext *slurpContext.ContextNode `json:"merged_context"` // Resulting merged context
|
||||
ConflictingSources []string `json:"conflicting_sources"` // Sources of conflict
|
||||
ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
|
||||
ResolvedAt time.Time `json:"resolved_at"` // When resolved
|
||||
Confidence float64 `json:"confidence"` // Confidence in resolution
|
||||
ManualReview bool `json:"manual_review"` // Whether manual review needed
|
||||
Address ucxl.Address `json:"address"` // Context address
|
||||
ResolutionType ResolutionType `json:"resolution_type"` // How conflict was resolved
|
||||
MergedContext *slurpContext.ContextNode `json:"merged_context"` // Resulting merged context
|
||||
ConflictingSources []string `json:"conflicting_sources"` // Sources of conflict
|
||||
ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
|
||||
ResolvedAt time.Time `json:"resolved_at"` // When resolved
|
||||
Confidence float64 `json:"confidence"` // Confidence in resolution
|
||||
ManualReview bool `json:"manual_review"` // Whether manual review needed
|
||||
}
|
||||
|
||||
// ResolutionType represents different types of conflict resolution
type ResolutionType string

const (
	ResolutionMerged         ResolutionType = "merged"          // Contexts were merged
	ResolutionLastWriter     ResolutionType = "last_writer"     // Last writer wins
	ResolutionLeaderDecision ResolutionType = "leader_decision" // Leader made decision
	ResolutionManual         ResolutionType = "manual"          // Manual resolution required
	ResolutionFailed         ResolutionType = "failed"          // Resolution failed
)

// PotentialConflict represents a detected potential conflict
type PotentialConflict struct {
	Address        ucxl.Address     `json:"address"`         // Context address
	ConflictType   ConflictType     `json:"conflict_type"`   // Type of conflict
	Description    string           `json:"description"`     // Conflict description
	Severity       ConflictSeverity `json:"severity"`        // Conflict severity
	AffectedFields []string         `json:"affected_fields"` // Fields in conflict
	Suggestions    []string         `json:"suggestions"`     // Resolution suggestions
	DetectedAt     time.Time        `json:"detected_at"`     // When detected
}

// ConflictType represents different types of conflicts
@@ -245,88 +251,88 @@ const (
type ConflictSeverity string

const (
	ConflictSeverityLow      ConflictSeverity = "low"      // Low severity - auto-resolvable
	ConflictSeverityMedium   ConflictSeverity = "medium"   // Medium severity - may need review
	ConflictSeverityHigh     ConflictSeverity = "high"     // High severity - needs attention
	ConflictSeverityCritical ConflictSeverity = "critical" // Critical - manual intervention required
)

// ResolutionStrategy represents conflict resolution strategy configuration
type ResolutionStrategy struct {
	DefaultResolution ResolutionType `json:"default_resolution"` // Default resolution method
	FieldPriorities   map[string]int `json:"field_priorities"`   // Field priority mapping
	AutoMergeEnabled  bool           `json:"auto_merge_enabled"` // Enable automatic merging
	RequireConsensus  bool           `json:"require_consensus"`  // Require node consensus
	LeaderBreaksTies  bool           `json:"leader_breaks_ties"` // Leader resolves ties
	MaxConflictAge    time.Duration  `json:"max_conflict_age"`   // Max age before escalation
	EscalationRoles   []string       `json:"escalation_roles"`   // Roles for manual escalation
}

// SyncResult represents the result of synchronization operation
type SyncResult struct {
	SyncedContexts    int           `json:"synced_contexts"`    // Contexts synchronized
	ConflictsResolved int           `json:"conflicts_resolved"` // Conflicts resolved
	Errors            []string      `json:"errors"`             // Synchronization errors
	SyncTime          time.Duration `json:"sync_time"`          // Total sync time
	PeersContacted    int           `json:"peers_contacted"`    // Number of peers contacted
	DataTransferred   int64         `json:"data_transferred"`   // Bytes transferred
	SyncedAt          time.Time     `json:"synced_at"`          // When sync completed
}

// ReplicaHealth represents health status of context replicas
type ReplicaHealth struct {
	Address         ucxl.Address   `json:"address"`          // Context address
	TotalReplicas   int            `json:"total_replicas"`   // Total replica count
	HealthyReplicas int            `json:"healthy_replicas"` // Healthy replica count
	FailedReplicas  int            `json:"failed_replicas"`  // Failed replica count
	ReplicaNodes    []*ReplicaNode `json:"replica_nodes"`    // Individual replica status
	OverallHealth   HealthStatus   `json:"overall_health"`   // Overall health status
	LastChecked     time.Time      `json:"last_checked"`     // When last checked
	RepairNeeded    bool           `json:"repair_needed"`    // Whether repair is needed
}

// ReplicaNode represents status of individual replica node
type ReplicaNode struct {
	NodeID         string        `json:"node_id"`         // Node identifier
	Status         ReplicaStatus `json:"status"`          // Replica status
	LastSeen       time.Time     `json:"last_seen"`       // When last seen
	Version        int64         `json:"version"`         // Context version
	Checksum       string        `json:"checksum"`        // Data checksum
	Latency        time.Duration `json:"latency"`         // Network latency
	NetworkAddress string        `json:"network_address"` // Network address
}

// ReplicaStatus represents status of individual replica
type ReplicaStatus string

const (
	ReplicaHealthy     ReplicaStatus = "healthy"     // Replica is healthy
	ReplicaStale       ReplicaStatus = "stale"       // Replica is stale
	ReplicaCorrupted   ReplicaStatus = "corrupted"   // Replica is corrupted
	ReplicaUnreachable ReplicaStatus = "unreachable" // Replica is unreachable
	ReplicaSyncing     ReplicaStatus = "syncing"     // Replica is syncing
)

// HealthStatus represents overall health status
type HealthStatus string

const (
	HealthHealthy  HealthStatus = "healthy"  // All replicas healthy
	HealthDegraded HealthStatus = "degraded" // Some replicas unhealthy
	HealthCritical HealthStatus = "critical" // Most replicas unhealthy
	HealthFailed   HealthStatus = "failed"   // All replicas failed
)

// ReplicationPolicy represents replication behavior configuration
type ReplicationPolicy struct {
	DefaultFactor     int              `json:"default_factor"`     // Default replication factor
	MinFactor         int              `json:"min_factor"`         // Minimum replication factor
	MaxFactor         int              `json:"max_factor"`         // Maximum replication factor
	PreferredZones    []string         `json:"preferred_zones"`    // Preferred availability zones
	AvoidSameNode     bool             `json:"avoid_same_node"`    // Avoid same physical node
	ConsistencyLevel  ConsistencyLevel `json:"consistency_level"`  // Consistency requirements
	RepairThreshold   float64          `json:"repair_threshold"`   // Health threshold for repair
	RebalanceInterval time.Duration    `json:"rebalance_interval"` // Rebalancing frequency
}

// ConsistencyLevel represents consistency requirements
@@ -340,12 +346,12 @@ const (
// DHTStoreOptions represents options for DHT storage operations
type DHTStoreOptions struct {
	ReplicationFactor int                    `json:"replication_factor"` // Number of replicas
	TTL               *time.Duration         `json:"ttl,omitempty"`      // Time to live
	Priority          Priority               `json:"priority"`           // Storage priority
	Compress          bool                   `json:"compress"`           // Whether to compress
	Checksum          bool                   `json:"checksum"`           // Whether to checksum
	Metadata          map[string]interface{} `json:"metadata"`           // Additional metadata
}

// Priority represents storage operation priority
@@ -360,12 +366,12 @@ const (
// DHTMetadata represents metadata for DHT stored data
type DHTMetadata struct {
	StoredAt          time.Time              `json:"stored_at"`          // When stored
	UpdatedAt         time.Time              `json:"updated_at"`         // When last updated
	Version           int64                  `json:"version"`            // Version number
	Size              int64                  `json:"size"`               // Data size
	Checksum          string                 `json:"checksum"`           // Data checksum
	ReplicationFactor int                    `json:"replication_factor"` // Number of replicas
	TTL               *time.Time             `json:"ttl,omitempty"`      // Time to live
	Metadata          map[string]interface{} `json:"metadata"`           // Additional metadata
}

@@ -10,18 +10,18 @@ import (
	"sync"
	"time"

	"chorus/pkg/config"
	"chorus/pkg/crypto"
	"chorus/pkg/dht"
	"chorus/pkg/election"
	slurpContext "chorus/pkg/slurp/context"
	"chorus/pkg/ucxl"
)

// DHTContextDistributor implements ContextDistributor using CHORUS DHT infrastructure
type DHTContextDistributor struct {
	mu         sync.RWMutex
	dht        dht.DHT
	roleCrypto *crypto.RoleCrypto
	election   election.Election
	config     *config.Config
@@ -37,7 +37,7 @@ type DHTContextDistributor struct {
// NewDHTContextDistributor creates a new DHT-based context distributor
func NewDHTContextDistributor(
	dht dht.DHT,
	roleCrypto *crypto.RoleCrypto,
	election election.Election,
	config *config.Config,
@@ -147,36 +147,43 @@ func (d *DHTContextDistributor) DistributeContext(ctx context.Context, node *slu
		return d.recordError(fmt.Sprintf("failed to get vector clock: %v", err))
	}

	// Prepare context payload for role encryption
	rawContext, err := json.Marshal(node)
	if err != nil {
		return d.recordError(fmt.Sprintf("failed to marshal context: %v", err))
	}

	// Create distribution metadata (checksum calculated per-role below)
	metadata := &DistributionMetadata{
		Address:           node.UCXLAddress,
		Roles:             roles,
		Version:           1,
		VectorClock:       clock,
		DistributedBy:     d.config.Agent.ID,
		DistributedAt:     time.Now(),
		ReplicationFactor: d.getReplicationFactor(),
	}

	// Store encrypted data in DHT for each role
	for _, role := range roles {
		key := d.keyGenerator.GenerateContextKey(node.UCXLAddress.String(), role)

		cipher, fingerprint, err := d.roleCrypto.EncryptForRole(rawContext, role)
		if err != nil {
			return d.recordError(fmt.Sprintf("failed to encrypt context for role %s: %v", role, err))
		}

		// Create role-specific storage package
		storagePackage := &ContextStoragePackage{
			EncryptedData:  cipher,
			KeyFingerprint: fingerprint,
			Metadata:       metadata,
			Role:           role,
			StoredAt:       time.Now(),
		}

		metadata.Checksum = d.calculateChecksum(cipher)

		// Serialize for storage
		storageBytes, err := json.Marshal(storagePackage)
		if err != nil {
@@ -252,25 +259,30 @@ func (d *DHTContextDistributor) RetrieveContext(ctx context.Context, address ucx
	}

	// Decrypt context for role
	plain, err := d.roleCrypto.DecryptForRole(storagePackage.EncryptedData, role, storagePackage.KeyFingerprint)
	if err != nil {
		return nil, d.recordRetrievalError(fmt.Sprintf("failed to decrypt context: %v", err))
	}

	var contextNode slurpContext.ContextNode
	if err := json.Unmarshal(plain, &contextNode); err != nil {
		return nil, d.recordRetrievalError(fmt.Sprintf("failed to decode context: %v", err))
	}

	// Convert to resolved context
	resolvedContext := &slurpContext.ResolvedContext{
		UCXLAddress:           contextNode.UCXLAddress,
		Summary:               contextNode.Summary,
		Purpose:               contextNode.Purpose,
		Technologies:          contextNode.Technologies,
		Tags:                  contextNode.Tags,
		Insights:              contextNode.Insights,
		ContextSourcePath:     contextNode.Path,
		InheritanceChain:      []string{contextNode.Path},
		ResolutionConfidence:  contextNode.RAGConfidence,
		BoundedDepth:          1,
		GlobalContextsApplied: false,
		ResolvedAt:            time.Now(),
	}

	// Update statistics
@@ -304,15 +316,15 @@ func (d *DHTContextDistributor) UpdateContext(ctx context.Context, node *slurpCo
|
||||
// Convert existing resolved context back to context node for comparison
|
||||
existingNode := &slurpContext.ContextNode{
|
||||
Path: existingContext.ContextSourcePath,
|
||||
UCXLAddress: existingContext.UCXLAddress,
|
||||
Summary: existingContext.Summary,
|
||||
Purpose: existingContext.Purpose,
|
||||
Technologies: existingContext.Technologies,
|
||||
Tags: existingContext.Tags,
|
||||
Insights: existingContext.Insights,
|
||||
RAGConfidence: existingContext.ResolutionConfidence,
|
||||
GeneratedAt: existingContext.ResolvedAt,
|
||||
Path: existingContext.ContextSourcePath,
|
||||
UCXLAddress: existingContext.UCXLAddress,
|
||||
Summary: existingContext.Summary,
|
||||
Purpose: existingContext.Purpose,
|
||||
Technologies: existingContext.Technologies,
|
||||
Tags: existingContext.Tags,
|
||||
Insights: existingContext.Insights,
|
||||
RAGConfidence: existingContext.ResolutionConfidence,
|
||||
GeneratedAt: existingContext.ResolvedAt,
|
||||
}
|
||||
|
||||
// Use conflict resolver to handle the update
|
||||
@@ -380,13 +392,13 @@ func (d *DHTContextDistributor) Sync(ctx context.Context) (*SyncResult, error) {
|
||||
	}

	result := &SyncResult{
		SyncedContexts:    0, // Would be populated in real implementation
		ConflictsResolved: 0,
		Errors:            []string{},
		SyncTime:          time.Since(start),
		PeersContacted:    len(d.dht.GetConnectedPeers()),
		DataTransferred:   0,
		SyncedAt:          time.Now(),
	}

	return result, nil
@@ -453,28 +465,13 @@ func (d *DHTContextDistributor) calculateChecksum(data interface{}) string {
	return hex.EncodeToString(hash[:])
}

// Start starts the distribution service
func (d *DHTContextDistributor) Start(ctx context.Context) error {
	if d.gossipProtocol != nil {
		if err := d.gossipProtocol.StartGossip(ctx); err != nil {
			return fmt.Errorf("failed to start gossip protocol: %w", err)
		}
	}

	return nil
}

@@ -488,22 +485,23 @@ func (d *DHTContextDistributor) Stop(ctx context.Context) error {
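The checksum helper above hex-encodes a SHA-256 digest of the serialized payload. A minimal standalone sketch of that scheme (the `checksumOf` name and the JSON serialization step are assumptions for illustration, not the file's exact implementation):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// checksumOf sketches the implied scheme: serialize the value, hash the
// bytes with SHA-256, and hex-encode the digest.
func checksumOf(data interface{}) (string, error) {
	raw, err := json.Marshal(data)
	if err != nil {
		return "", err
	}
	hash := sha256.Sum256(raw)
	return hex.EncodeToString(hash[:]), nil
}

func main() {
	sum, err := checksumOf(map[string]string{"role": "admin"})
	if err != nil {
		panic(err)
	}
	// A 256-bit digest always hex-encodes to 64 characters.
	fmt.Println(len(sum))
}
```

Because `json.Marshal` sorts map keys, the checksum is deterministic for the same logical value, which is what makes it usable for change detection.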
// ContextStoragePackage represents a complete package for DHT storage
type ContextStoragePackage struct {
	EncryptedData  []byte                `json:"encrypted_data"`
	KeyFingerprint string                `json:"key_fingerprint,omitempty"`
	Metadata       *DistributionMetadata `json:"metadata"`
	Role           string                `json:"role"`
	StoredAt       time.Time             `json:"stored_at"`
}

// DistributionMetadata contains metadata for distributed context
type DistributionMetadata struct {
	Address           ucxl.Address `json:"address"`
	Roles             []string     `json:"roles"`
	Version           int64        `json:"version"`
	VectorClock       *VectorClock `json:"vector_clock"`
	DistributedBy     string       `json:"distributed_by"`
	DistributedAt     time.Time    `json:"distributed_at"`
	ReplicationFactor int          `json:"replication_factor"`
	Checksum          string       `json:"checksum"`
}

// DHTKeyGenerator implements KeyGenerator interface
@@ -532,65 +530,124 @@ func (kg *DHTKeyGenerator) GenerateReplicationKey(address string) string {
// Component constructors - these would be implemented in separate files

// NewReplicationManager creates a new replication manager
func NewReplicationManager(dht dht.DHT, config *config.Config) (ReplicationManager, error) {
	impl, err := NewReplicationManagerImpl(dht, config)
	if err != nil {
		return nil, err
	}
	return impl, nil
}

// NewConflictResolver creates a new conflict resolver
func NewConflictResolver(dht dht.DHT, config *config.Config) (ConflictResolver, error) {
	// Placeholder implementation until full resolver is wired
	return &ConflictResolverImpl{}, nil
}

// NewGossipProtocol creates a new gossip protocol
func NewGossipProtocol(dht dht.DHT, config *config.Config) (GossipProtocol, error) {
	impl, err := NewGossipProtocolImpl(dht, config)
	if err != nil {
		return nil, err
	}
	return impl, nil
}

// NewNetworkManager creates a new network manager
func NewNetworkManager(dht dht.DHT, config *config.Config) (NetworkManager, error) {
	impl, err := NewNetworkManagerImpl(dht, config)
	if err != nil {
		return nil, err
	}
	return impl, nil
}

// NewVectorClockManager creates a new vector clock manager
func NewVectorClockManager(dht dht.DHT, nodeID string) (VectorClockManager, error) {
	return &defaultVectorClockManager{
		clocks: make(map[string]*VectorClock),
	}, nil
}

// ConflictResolverImpl is a temporary stub until the full resolver is implemented
type ConflictResolverImpl struct{}

func (cr *ConflictResolverImpl) ResolveConflict(ctx context.Context, local, remote *slurpContext.ContextNode) (*ConflictResolution, error) {
	return &ConflictResolution{
		Address:        local.UCXLAddress,
		ResolutionType: ResolutionMerged,
		MergedContext:  local,
		ResolutionTime: time.Millisecond,
		ResolvedAt:     time.Now(),
		Confidence:     0.95,
	}, nil
}

// defaultVectorClockManager provides a minimal vector clock store for SEC-SLURP scaffolding.
type defaultVectorClockManager struct {
	mu     sync.Mutex
	clocks map[string]*VectorClock
}

func (vcm *defaultVectorClockManager) GetClock(nodeID string) (*VectorClock, error) {
	vcm.mu.Lock()
	defer vcm.mu.Unlock()

	if clock, ok := vcm.clocks[nodeID]; ok {
		return clock, nil
	}
	clock := &VectorClock{
		Clock:     map[string]int64{nodeID: time.Now().Unix()},
		UpdatedAt: time.Now(),
	}
	vcm.clocks[nodeID] = clock
	return clock, nil
}

func (vcm *defaultVectorClockManager) UpdateClock(nodeID string, clock *VectorClock) error {
	vcm.mu.Lock()
	defer vcm.mu.Unlock()

	vcm.clocks[nodeID] = clock
	return nil
}

func (vcm *defaultVectorClockManager) CompareClock(clock1, clock2 *VectorClock) ClockRelation {
	if clock1 == nil || clock2 == nil {
		return ClockConcurrent
	}
	if clock1.UpdatedAt.Before(clock2.UpdatedAt) {
		return ClockBefore
	}
	if clock1.UpdatedAt.After(clock2.UpdatedAt) {
		return ClockAfter
	}
	return ClockEqual
}

func (vcm *defaultVectorClockManager) MergeClock(clocks []*VectorClock) *VectorClock {
	if len(clocks) == 0 {
		return &VectorClock{
			Clock:     map[string]int64{},
			UpdatedAt: time.Now(),
		}
	}
	merged := &VectorClock{
		Clock:     make(map[string]int64),
		UpdatedAt: clocks[0].UpdatedAt,
	}
	for _, clock := range clocks {
		if clock == nil {
			continue
		}
		if clock.UpdatedAt.After(merged.UpdatedAt) {
			merged.UpdatedAt = clock.UpdatedAt
		}
		for node, value := range clock.Clock {
			if existing, ok := merged.Clock[node]; !ok || value > existing {
				merged.Clock[node] = value
			}
		}
	}
	return merged
}
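The MergeClock loop above takes the element-wise maximum per node and carries forward the latest UpdatedAt. A standalone sketch of that merge semantics over plain counter maps (the `mergeCounters` helper name is an assumption for illustration):

```go
package main

import "fmt"

// mergeCounters mirrors the element-wise-maximum merge used for vector
// clocks: for each node, keep the highest counter seen across all inputs.
func mergeCounters(clocks []map[string]int64) map[string]int64 {
	merged := make(map[string]int64)
	for _, clock := range clocks {
		for node, value := range clock {
			if existing, ok := merged[node]; !ok || value > existing {
				merged[node] = value
			}
		}
	}
	return merged
}

func main() {
	a := map[string]int64{"node-a": 3, "node-b": 1}
	b := map[string]int64{"node-a": 2, "node-b": 5}
	merged := mergeCounters([]map[string]int64{a, b})
	fmt.Println(merged["node-a"], merged["node-b"]) // 3 5
}
```

Taking the per-node maximum is what makes the merge commutative and idempotent, so replicas can gossip clocks in any order and converge.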
@@ -15,48 +15,48 @@ import (

// MonitoringSystem provides comprehensive monitoring for the distributed context system
type MonitoringSystem struct {
	mu           sync.RWMutex
	config       *config.Config
	metrics      *MetricsCollector
	healthChecks *HealthCheckManager
	alertManager *AlertManager
	dashboard    *DashboardServer
	logManager   *LogManager
	traceManager *TraceManager

	// State
	running         bool
	monitoringPort  int
	updateInterval  time.Duration
	retentionPeriod time.Duration
}

// MetricsCollector collects and aggregates system metrics
type MetricsCollector struct {
	mu              sync.RWMutex
	timeSeries      map[string]*TimeSeries
	counters        map[string]*Counter
	gauges          map[string]*Gauge
	histograms      map[string]*Histogram
	customMetrics   map[string]*CustomMetric
	aggregatedStats *AggregatedStatistics
	exporters       []MetricsExporter
	lastCollection  time.Time
}

// TimeSeries represents a time-series metric
type TimeSeries struct {
	Name         string             `json:"name"`
	Labels       map[string]string  `json:"labels"`
	DataPoints   []*TimeSeriesPoint `json:"data_points"`
	RetentionTTL time.Duration      `json:"retention_ttl"`
	LastUpdated  time.Time          `json:"last_updated"`
}

// TimeSeriesPoint represents a single data point in a time series
type TimeSeriesPoint struct {
	Timestamp time.Time         `json:"timestamp"`
	Value     float64           `json:"value"`
	Labels    map[string]string `json:"labels,omitempty"`
}

@@ -64,7 +64,7 @@ type TimeSeriesPoint struct {
type Counter struct {
	Name        string            `json:"name"`
	Value       int64             `json:"value"`
	Rate        float64           `json:"rate"` // per second
	Labels      map[string]string `json:"labels"`
	LastUpdated time.Time         `json:"last_updated"`
}
@@ -82,13 +82,13 @@ type Gauge struct {

// Histogram represents distribution of values
type Histogram struct {
	Name        string              `json:"name"`
	Buckets     map[float64]int64   `json:"buckets"`
	Count       int64               `json:"count"`
	Sum         float64             `json:"sum"`
	Labels      map[string]string   `json:"labels"`
	Percentiles map[float64]float64 `json:"percentiles"`
	LastUpdated time.Time           `json:"last_updated"`
}

// CustomMetric represents application-specific metrics
@@ -114,81 +114,81 @@ const (

// AggregatedStatistics provides high-level system statistics
type AggregatedStatistics struct {
	SystemOverview     *SystemOverview      `json:"system_overview"`
	PerformanceMetrics *PerformanceOverview `json:"performance_metrics"`
	HealthMetrics      *HealthOverview      `json:"health_metrics"`
	ErrorMetrics       *ErrorOverview       `json:"error_metrics"`
	ResourceMetrics    *ResourceOverview    `json:"resource_metrics"`
	NetworkMetrics     *NetworkOverview     `json:"network_metrics"`
	LastUpdated        time.Time            `json:"last_updated"`
}
// SystemOverview provides system-wide overview metrics
type SystemOverview struct {
	TotalNodes          int           `json:"total_nodes"`
	HealthyNodes        int           `json:"healthy_nodes"`
	TotalContexts       int64         `json:"total_contexts"`
	DistributedContexts int64         `json:"distributed_contexts"`
	ReplicationFactor   float64       `json:"average_replication_factor"`
	SystemUptime        time.Duration `json:"system_uptime"`
	ClusterVersion      string        `json:"cluster_version"`
	LastRestart         time.Time     `json:"last_restart"`
}
// PerformanceOverview provides performance metrics
type PerformanceOverview struct {
	RequestsPerSecond   float64       `json:"requests_per_second"`
	AverageResponseTime time.Duration `json:"average_response_time"`
	P95ResponseTime     time.Duration `json:"p95_response_time"`
	P99ResponseTime     time.Duration `json:"p99_response_time"`
	Throughput          float64       `json:"throughput_mbps"`
	CacheHitRate        float64       `json:"cache_hit_rate"`
	QueueDepth          int           `json:"queue_depth"`
	ActiveConnections   int           `json:"active_connections"`
}
// HealthOverview provides health-related metrics
type HealthOverview struct {
	OverallHealthScore float64            `json:"overall_health_score"`
	ComponentHealth    map[string]float64 `json:"component_health"`
	FailedHealthChecks int                `json:"failed_health_checks"`
	LastHealthCheck    time.Time          `json:"last_health_check"`
	HealthTrend        string             `json:"health_trend"` // improving, stable, degrading
	CriticalAlerts     int                `json:"critical_alerts"`
	WarningAlerts      int                `json:"warning_alerts"`
}
// ErrorOverview provides error-related metrics
type ErrorOverview struct {
	TotalErrors       int64            `json:"total_errors"`
	ErrorRate         float64          `json:"error_rate"`
	ErrorsByType      map[string]int64 `json:"errors_by_type"`
	ErrorsByComponent map[string]int64 `json:"errors_by_component"`
	LastError         *ErrorEvent      `json:"last_error"`
	ErrorTrend        string           `json:"error_trend"` // increasing, stable, decreasing
}
// ResourceOverview provides resource utilization metrics
type ResourceOverview struct {
	CPUUtilization     float64 `json:"cpu_utilization"`
	MemoryUtilization  float64 `json:"memory_utilization"`
	DiskUtilization    float64 `json:"disk_utilization"`
	NetworkUtilization float64 `json:"network_utilization"`
	StorageUsed        int64   `json:"storage_used_bytes"`
	StorageAvailable   int64   `json:"storage_available_bytes"`
	FileDescriptors    int     `json:"open_file_descriptors"`
	Goroutines         int     `json:"goroutines"`
}
// NetworkOverview provides network-related metrics
type NetworkOverview struct {
	TotalConnections     int           `json:"total_connections"`
	ActiveConnections    int           `json:"active_connections"`
	BandwidthUtilization float64       `json:"bandwidth_utilization"`
	PacketLossRate       float64       `json:"packet_loss_rate"`
	AverageLatency       time.Duration `json:"average_latency"`
	NetworkPartitions    int           `json:"network_partitions"`
	DataTransferred      int64         `json:"data_transferred_bytes"`
}

// MetricsExporter exports metrics to external systems
@@ -200,49 +200,49 @@ type MetricsExporter interface {

// HealthCheckManager manages system health checks
type HealthCheckManager struct {
	mu           sync.RWMutex
	healthChecks map[string]*HealthCheck
	checkResults map[string]*HealthCheckResult
	schedules    map[string]*HealthCheckSchedule
	running      bool
}
// HealthCheck represents a single health check
type HealthCheck struct {
	Name          string                                            `json:"name"`
	Description   string                                            `json:"description"`
	CheckType     HealthCheckType                                   `json:"check_type"`
	Target        string                                            `json:"target"`
	Timeout       time.Duration                                     `json:"timeout"`
	Interval      time.Duration                                     `json:"interval"`
	Retries       int                                               `json:"retries"`
	Metadata      map[string]interface{}                            `json:"metadata"`
	Enabled       bool                                              `json:"enabled"`
	CheckFunction func(context.Context) (*HealthCheckResult, error) `json:"-"`
}
// HealthCheckType represents different types of health checks
type HealthCheckType string

const (
	HealthCheckTypeHTTP      HealthCheckType = "http"
	HealthCheckTypeTCP       HealthCheckType = "tcp"
	HealthCheckTypeCustom    HealthCheckType = "custom"
	HealthCheckTypeComponent HealthCheckType = "component"
	HealthCheckTypeDatabase  HealthCheckType = "database"
	HealthCheckTypeService   HealthCheckType = "service"
)
// HealthCheckResult represents the result of a health check
type HealthCheckResult struct {
	CheckName    string                 `json:"check_name"`
	Status       HealthCheckStatus      `json:"status"`
	ResponseTime time.Duration          `json:"response_time"`
	Message      string                 `json:"message"`
	Details      map[string]interface{} `json:"details"`
	Error        string                 `json:"error,omitempty"`
	Timestamp    time.Time              `json:"timestamp"`
	Attempt      int                    `json:"attempt"`
}

// HealthCheckStatus represents the status of a health check
@@ -258,45 +258,45 @@ const (

// HealthCheckSchedule defines when health checks should run
type HealthCheckSchedule struct {
	CheckName    string        `json:"check_name"`
	Interval     time.Duration `json:"interval"`
	NextRun      time.Time     `json:"next_run"`
	LastRun      time.Time     `json:"last_run"`
	Enabled      bool          `json:"enabled"`
	FailureCount int           `json:"failure_count"`
}
// AlertManager manages system alerts and notifications
type AlertManager struct {
	mu           sync.RWMutex
	alertRules   map[string]*AlertRule
	activeAlerts map[string]*Alert
	alertHistory []*Alert
	notifiers    []AlertNotifier
	silences     map[string]*AlertSilence
	running      bool
}
// AlertRule defines conditions for triggering alerts
type AlertRule struct {
	Name          string            `json:"name"`
	Description   string            `json:"description"`
	Severity      AlertSeverity     `json:"severity"`
	Conditions    []*AlertCondition `json:"conditions"`
	Duration      time.Duration     `json:"duration"` // How long condition must persist
	Cooldown      time.Duration     `json:"cooldown"` // Minimum time between alerts
	Labels        map[string]string `json:"labels"`
	Annotations   map[string]string `json:"annotations"`
	Enabled       bool              `json:"enabled"`
	LastTriggered *time.Time        `json:"last_triggered,omitempty"`
}
// AlertCondition defines a single condition for an alert
type AlertCondition struct {
	MetricName string            `json:"metric_name"`
	Operator   ConditionOperator `json:"operator"`
	Threshold  float64           `json:"threshold"`
	Duration   time.Duration     `json:"duration"`
}

// ConditionOperator represents comparison operators for alert conditions
@@ -313,39 +313,39 @@ const (

// Alert represents an active alert
type Alert struct {
	ID          string                 `json:"id"`
	RuleName    string                 `json:"rule_name"`
	Severity    AlertSeverity          `json:"severity"`
	Status      AlertStatus            `json:"status"`
	Message     string                 `json:"message"`
	Details     map[string]interface{} `json:"details"`
	Labels      map[string]string      `json:"labels"`
	Annotations map[string]string      `json:"annotations"`
	StartsAt    time.Time              `json:"starts_at"`
	EndsAt      *time.Time             `json:"ends_at,omitempty"`
	LastUpdated time.Time              `json:"last_updated"`
	AckBy       string                 `json:"acknowledged_by,omitempty"`
	AckAt       *time.Time             `json:"acknowledged_at,omitempty"`
}
// AlertSeverity represents the severity level of an alert
type AlertSeverity string

const (
	AlertSeverityInfo     AlertSeverity = "info"
	AlertSeverityWarning  AlertSeverity = "warning"
	AlertSeverityError    AlertSeverity = "error"
	AlertSeverityCritical AlertSeverity = "critical"
)
// AlertStatus represents the current status of an alert
type AlertStatus string

const (
	AlertStatusFiring       AlertStatus = "firing"
	AlertStatusResolved     AlertStatus = "resolved"
	AlertStatusAcknowledged AlertStatus = "acknowledged"
	AlertStatusSilenced     AlertStatus = "silenced"
)

// AlertNotifier sends alert notifications
@@ -357,64 +357,64 @@ type AlertNotifier interface {

// AlertSilence represents a silenced alert
type AlertSilence struct {
	ID        string            `json:"id"`
	Matchers  map[string]string `json:"matchers"`
	StartTime time.Time         `json:"start_time"`
	EndTime   time.Time         `json:"end_time"`
	CreatedBy string            `json:"created_by"`
	Comment   string            `json:"comment"`
	Active    bool              `json:"active"`
}
// DashboardServer provides web-based monitoring dashboard
type DashboardServer struct {
	mu          sync.RWMutex
	server      *http.Server
	dashboards  map[string]*Dashboard
	widgets     map[string]*Widget
	customPages map[string]*CustomPage
	running     bool
	port        int
}
// Dashboard represents a monitoring dashboard
type Dashboard struct {
	ID          string             `json:"id"`
	Name        string             `json:"name"`
	Description string             `json:"description"`
	Widgets     []*Widget          `json:"widgets"`
	Layout      *DashboardLayout   `json:"layout"`
	Settings    *DashboardSettings `json:"settings"`
	CreatedBy   string             `json:"created_by"`
	CreatedAt   time.Time          `json:"created_at"`
	UpdatedAt   time.Time          `json:"updated_at"`
}
// Widget represents a dashboard widget
type Widget struct {
	ID          string                 `json:"id"`
	Type        WidgetType             `json:"type"`
	Title       string                 `json:"title"`
	DataSource  string                 `json:"data_source"`
	Query       string                 `json:"query"`
	Settings    map[string]interface{} `json:"settings"`
	Position    *WidgetPosition        `json:"position"`
	RefreshRate time.Duration          `json:"refresh_rate"`
	LastUpdated time.Time              `json:"last_updated"`
}
// WidgetType represents different types of dashboard widgets
type WidgetType string

const (
	WidgetTypeMetric   WidgetType = "metric"
	WidgetTypeChart    WidgetType = "chart"
	WidgetTypeTable    WidgetType = "table"
	WidgetTypeAlert    WidgetType = "alert"
	WidgetTypeHealth   WidgetType = "health"
	WidgetTypeTopology WidgetType = "topology"
	WidgetTypeLog      WidgetType = "log"
	WidgetTypeCustom   WidgetType = "custom"
)

// WidgetPosition defines widget position and size
@@ -427,11 +427,11 @@ type WidgetPosition struct {

// DashboardLayout defines dashboard layout settings
type DashboardLayout struct {
	Columns     int            `json:"columns"`
	RowHeight   int            `json:"row_height"`
	Margins     [2]int         `json:"margins"` // [x, y]
	Spacing     [2]int         `json:"spacing"` // [x, y]
	Breakpoints map[string]int `json:"breakpoints"`
}

// DashboardSettings contains dashboard configuration
@@ -446,43 +446,43 @@ type DashboardSettings struct {

// CustomPage represents a custom monitoring page
type CustomPage struct {
	Path        string           `json:"path"`
	Title       string           `json:"title"`
	Content     string           `json:"content"`
	ContentType string           `json:"content_type"`
	Handler     http.HandlerFunc `json:"-"`
}
// LogManager manages system logs and log analysis
type LogManager struct {
	mu              sync.RWMutex
	logSources      map[string]*LogSource
	logEntries      []*LogEntry
	logAnalyzers    []LogAnalyzer
	retentionPolicy *LogRetentionPolicy
	running         bool
}
// LogSource represents a source of log data
type LogSource struct {
	Name     string            `json:"name"`
	Type     LogSourceType     `json:"type"`
	Location string            `json:"location"`
	Format   LogFormat         `json:"format"`
	Labels   map[string]string `json:"labels"`
	Enabled  bool              `json:"enabled"`
	LastRead time.Time         `json:"last_read"`
}
// LogSourceType represents different types of log sources
type LogSourceType string

const (
	LogSourceTypeFile     LogSourceType = "file"
	LogSourceTypeHTTP     LogSourceType = "http"
	LogSourceTypeStream   LogSourceType = "stream"
	LogSourceTypeDatabase LogSourceType = "database"
	LogSourceTypeCustom   LogSourceType = "custom"
)

// LogFormat represents log entry format
@@ -497,14 +497,14 @@ const (

// LogEntry represents a single log entry
type LogEntry struct {
	Timestamp time.Time              `json:"timestamp"`
	Level     LogLevel               `json:"level"`
	Source    string                 `json:"source"`
	Message   string                 `json:"message"`
	Fields    map[string]interface{} `json:"fields"`
	Labels    map[string]string      `json:"labels"`
	TraceID   string                 `json:"trace_id,omitempty"`
	SpanID    string                 `json:"span_id,omitempty"`
}

// LogLevel represents log entry severity
@@ -527,22 +527,22 @@ type LogAnalyzer interface {

// LogAnalysisResult represents the result of log analysis
type LogAnalysisResult struct {
	AnalyzerName    string         `json:"analyzer_name"`
	Anomalies       []*LogAnomaly  `json:"anomalies"`
	Patterns        []*LogPattern  `json:"patterns"`
	Statistics      *LogStatistics `json:"statistics"`
	Recommendations []string       `json:"recommendations"`
	AnalyzedAt      time.Time      `json:"analyzed_at"`
}
// LogAnomaly represents detected log anomaly
type LogAnomaly struct {
	Type        AnomalyType   `json:"type"`
	Severity    AlertSeverity `json:"severity"`
	Description string        `json:"description"`
	Entries     []*LogEntry   `json:"entries"`
	Confidence  float64       `json:"confidence"`
	DetectedAt  time.Time     `json:"detected_at"`
}

// AnomalyType represents different types of log anomalies
@@ -558,38 +558,38 @@ const (

// LogPattern represents detected log pattern
type LogPattern struct {
	Pattern    string    `json:"pattern"`
	Frequency  int       `json:"frequency"`
	LastSeen   time.Time `json:"last_seen"`
	Sources    []string  `json:"sources"`
	Confidence float64   `json:"confidence"`
}
// LogStatistics provides log statistics
type LogStatistics struct {
	TotalEntries    int64              `json:"total_entries"`
	EntriesByLevel  map[LogLevel]int64 `json:"entries_by_level"`
	EntriesBySource map[string]int64   `json:"entries_by_source"`
	ErrorRate       float64            `json:"error_rate"`
	AverageRate     float64            `json:"average_rate"`
	TimeRange       [2]time.Time       `json:"time_range"`
}
// LogRetentionPolicy defines log retention rules
type LogRetentionPolicy struct {
	RetentionPeriod time.Duration    `json:"retention_period"`
	MaxEntries      int64            `json:"max_entries"`
	CompressionAge  time.Duration    `json:"compression_age"`
	ArchiveAge      time.Duration    `json:"archive_age"`
	Rules           []*RetentionRule `json:"rules"`
}
// RetentionRule defines specific retention rules
type RetentionRule struct {
	Name      string          `json:"name"`
	Condition string          `json:"condition"` // Query expression
	Retention time.Duration   `json:"retention"`
	Action    RetentionAction `json:"action"`
}

// RetentionAction represents retention actions
@@ -603,47 +603,47 @@ const (

// TraceManager manages distributed tracing
type TraceManager struct {
	mu        sync.RWMutex
	traces    map[string]*Trace
	spans     map[string]*Span
	samplers  []TraceSampler
	exporters []TraceExporter
	running   bool
}
// Trace represents a distributed trace
type Trace struct {
	TraceID    string            `json:"trace_id"`
	Spans      []*Span           `json:"spans"`
	Duration   time.Duration     `json:"duration"`
	StartTime  time.Time         `json:"start_time"`
	EndTime    time.Time         `json:"end_time"`
	Status     TraceStatus       `json:"status"`
	Tags       map[string]string `json:"tags"`
	Operations []string          `json:"operations"`
}
// Span represents a single span in a trace
type Span struct {
	SpanID    string            `json:"span_id"`
	TraceID   string            `json:"trace_id"`
	ParentID  string            `json:"parent_id,omitempty"`
	Operation string            `json:"operation"`
	Service   string            `json:"service"`
	StartTime time.Time         `json:"start_time"`
	EndTime   time.Time         `json:"end_time"`
	Duration  time.Duration     `json:"duration"`
	Status    SpanStatus        `json:"status"`
	Tags      map[string]string `json:"tags"`
	Logs      []*SpanLog        `json:"logs"`
}
// TraceStatus represents the status of a trace
type TraceStatus string

const (
	TraceStatusOK      TraceStatus = "ok"
	TraceStatusError   TraceStatus = "error"
	TraceStatusTimeout TraceStatus = "timeout"
)
@@ -675,18 +675,18 @@ type TraceExporter interface {

// ErrorEvent represents a system error event
type ErrorEvent struct {
	ID        string                 `json:"id"`
	Timestamp time.Time              `json:"timestamp"`
	Level     LogLevel               `json:"level"`
	Component string                 `json:"component"`
	Message   string                 `json:"message"`
	Error     string                 `json:"error"`
	Context   map[string]interface{} `json:"context"`
	TraceID   string                 `json:"trace_id,omitempty"`
	SpanID    string                 `json:"span_id,omitempty"`
	Count     int                    `json:"count"`
	FirstSeen time.Time              `json:"first_seen"`
	LastSeen  time.Time              `json:"last_seen"`
}

// NewMonitoringSystem creates a comprehensive monitoring system
@@ -722,7 +722,7 @@ func (ms *MonitoringSystem) initializeComponents() error {
	aggregatedStats: &AggregatedStatistics{
		LastUpdated: time.Now(),
	},
	exporters:      []MetricsExporter{},
	lastCollection: time.Now(),
}
@@ -1134,13 +1134,13 @@ func (ms *MonitoringSystem) createDefaultDashboards() {

func (ms *MonitoringSystem) severityWeight(severity AlertSeverity) int {
	switch severity {
	case AlertSeverityCritical:
		return 4
	case AlertSeverityError:
		return 3
	case AlertSeverityWarning:
		return 2
	case AlertSeverityInfo:
		return 1
	default:
		return 0

@@ -9,74 +9,74 @@ import (
	"sync"
	"time"

	"chorus/pkg/config"
	"chorus/pkg/dht"
	"github.com/libp2p/go-libp2p/core/peer"
)

// NetworkManagerImpl implements NetworkManager interface for network topology and partition management
type NetworkManagerImpl struct {
	mu                sync.RWMutex
	dht               *dht.DHT
	config            *config.Config
	topology          *NetworkTopology
	partitionInfo     *PartitionInfo
	connectivity      *ConnectivityMatrix
	stats             *NetworkStatistics
	healthChecker     *NetworkHealthChecker
	partitionDetector *PartitionDetector
	recoveryManager   *RecoveryManager

	// Configuration
	healthCheckInterval    time.Duration
	partitionCheckInterval time.Duration
	connectivityTimeout    time.Duration
	maxPartitionDuration   time.Duration

	// State
	lastTopologyUpdate time.Time
	lastPartitionCheck time.Time
	running            bool
	recoveryInProgress bool
}

// ConnectivityMatrix tracks connectivity between all nodes
type ConnectivityMatrix struct {
	Matrix      map[string]map[string]*ConnectionInfo `json:"matrix"`
	LastUpdated time.Time                             `json:"last_updated"`
	mu          sync.RWMutex
}

// ConnectionInfo represents connectivity information between two nodes
type ConnectionInfo struct {
	Connected   bool          `json:"connected"`
	Latency     time.Duration `json:"latency"`
	PacketLoss  float64       `json:"packet_loss"`
	Bandwidth   int64         `json:"bandwidth"`
	LastChecked time.Time     `json:"last_checked"`
	ErrorCount  int           `json:"error_count"`
	LastError   string        `json:"last_error,omitempty"`
}

// NetworkHealthChecker performs network health checks
type NetworkHealthChecker struct {
	mu              sync.RWMutex
	nodeHealth      map[string]*NodeHealth
	healthHistory   map[string][]*NetworkHealthCheckResult
	alertThresholds *NetworkAlertThresholds
}

// NodeHealth represents health status of a network node
type NodeHealth struct {
	NodeID         string        `json:"node_id"`
	Status         NodeStatus    `json:"status"`
	HealthScore    float64       `json:"health_score"`
	LastSeen       time.Time     `json:"last_seen"`
	ResponseTime   time.Duration `json:"response_time"`
	PacketLossRate float64       `json:"packet_loss_rate"`
	BandwidthUtil  float64       `json:"bandwidth_utilization"`
	Uptime         time.Duration `json:"uptime"`
	ErrorRate      float64       `json:"error_rate"`
}

// NodeStatus represents the status of a network node
@@ -91,23 +91,23 @@ const (
)

// NetworkHealthCheckResult represents the result of a network health check
type NetworkHealthCheckResult struct {
	NodeID         string          `json:"node_id"`
	Timestamp      time.Time       `json:"timestamp"`
	Success        bool            `json:"success"`
	ResponseTime   time.Duration   `json:"response_time"`
	ErrorMessage   string          `json:"error_message,omitempty"`
	NetworkMetrics *NetworkMetrics `json:"network_metrics"`
}

// NetworkAlertThresholds defines thresholds for network alerts
type NetworkAlertThresholds struct {
	LatencyWarning      time.Duration `json:"latency_warning"`
	LatencyCritical     time.Duration `json:"latency_critical"`
	PacketLossWarning   float64       `json:"packet_loss_warning"`
	PacketLossCritical  float64       `json:"packet_loss_critical"`
	HealthScoreWarning  float64       `json:"health_score_warning"`
	HealthScoreCritical float64       `json:"health_score_critical"`
}

// PartitionDetector detects network partitions
@@ -131,14 +131,14 @@ const (

// PartitionEvent represents a partition detection event
type PartitionEvent struct {
	EventID          string                      `json:"event_id"`
	DetectedAt       time.Time                   `json:"detected_at"`
	Algorithm        PartitionDetectionAlgorithm `json:"algorithm"`
	PartitionedNodes []string                    `json:"partitioned_nodes"`
	Confidence       float64                     `json:"confidence"`
	Duration         time.Duration               `json:"duration"`
	Resolved         bool                        `json:"resolved"`
	ResolvedAt       *time.Time                  `json:"resolved_at,omitempty"`
}

// FalsePositiveFilter helps reduce false partition detections
@@ -159,10 +159,10 @@ type PartitionDetectorConfig struct {

// RecoveryManager manages network partition recovery
type RecoveryManager struct {
	mu                 sync.RWMutex
	recoveryStrategies map[RecoveryStrategy]*RecoveryStrategyConfig
	activeRecoveries   map[string]*RecoveryOperation
	recoveryHistory    []*RecoveryResult
}

// RecoveryStrategy represents different recovery strategies
@@ -177,25 +177,25 @@ const (

// RecoveryStrategyConfig configures a recovery strategy
type RecoveryStrategyConfig struct {
	Strategy         RecoveryStrategy `json:"strategy"`
	Timeout          time.Duration    `json:"timeout"`
	RetryAttempts    int              `json:"retry_attempts"`
	RetryInterval    time.Duration    `json:"retry_interval"`
	RequireConsensus bool             `json:"require_consensus"`
	ForcedThreshold  time.Duration    `json:"forced_threshold"`
}

// RecoveryOperation represents an active recovery operation
type RecoveryOperation struct {
	OperationID  string           `json:"operation_id"`
	Strategy     RecoveryStrategy `json:"strategy"`
	StartedAt    time.Time        `json:"started_at"`
	TargetNodes  []string         `json:"target_nodes"`
	Status       RecoveryStatus   `json:"status"`
	Progress     float64          `json:"progress"`
	CurrentPhase RecoveryPhase    `json:"current_phase"`
	Errors       []string         `json:"errors"`
	LastUpdate   time.Time        `json:"last_update"`
}

// RecoveryStatus represents the status of a recovery operation
@@ -213,12 +213,12 @@ const (
type RecoveryPhase string

const (
	RecoveryPhaseAssessment      RecoveryPhase = "assessment"
	RecoveryPhasePreparation     RecoveryPhase = "preparation"
	RecoveryPhaseReconnection    RecoveryPhase = "reconnection"
	RecoveryPhaseSynchronization RecoveryPhase = "synchronization"
	RecoveryPhaseValidation      RecoveryPhase = "validation"
	RecoveryPhaseCompletion      RecoveryPhase = "completion"
)

// NewNetworkManagerImpl creates a new network manager implementation
@@ -231,13 +231,13 @@ func NewNetworkManagerImpl(dht *dht.DHT, config *config.Config) (*NetworkManager
	}

	nm := &NetworkManagerImpl{
		dht:                    dht,
		config:                 config,
		healthCheckInterval:    30 * time.Second,
		partitionCheckInterval: 60 * time.Second,
		connectivityTimeout:    10 * time.Second,
		maxPartitionDuration:   10 * time.Minute,
		connectivity:           &ConnectivityMatrix{Matrix: make(map[string]map[string]*ConnectionInfo)},
		stats: &NetworkStatistics{
			LastUpdated: time.Now(),
		},
@@ -255,33 +255,33 @@ func NewNetworkManagerImpl(dht *dht.DHT, config *config.Config) (*NetworkManager
func (nm *NetworkManagerImpl) initializeComponents() error {
	// Initialize topology
	nm.topology = &NetworkTopology{
		TotalNodes:        0,
		Connections:       make(map[string][]string),
		Regions:           make(map[string][]string),
		AvailabilityZones: make(map[string][]string),
		UpdatedAt:         time.Now(),
	}

	// Initialize partition info
	nm.partitionInfo = &PartitionInfo{
		PartitionDetected:  false,
		PartitionCount:     1,
		IsolatedNodes:      []string{},
		ConnectivityMatrix: make(map[string]map[string]bool),
		DetectedAt:         time.Now(),
	}

	// Initialize health checker
	nm.healthChecker = &NetworkHealthChecker{
		nodeHealth:    make(map[string]*NodeHealth),
		healthHistory: make(map[string][]*NetworkHealthCheckResult),
		alertThresholds: &NetworkAlertThresholds{
			LatencyWarning:      500 * time.Millisecond,
			LatencyCritical:     2 * time.Second,
			PacketLossWarning:   0.05, // 5%
			PacketLossCritical:  0.15, // 15%
			HealthScoreWarning:  0.7,
			HealthScoreCritical: 0.4,
		},
	}

@@ -307,20 +307,20 @@ func (nm *NetworkManagerImpl) initializeComponents() error {
	nm.recoveryManager = &RecoveryManager{
		recoveryStrategies: map[RecoveryStrategy]*RecoveryStrategyConfig{
			RecoveryStrategyAutomatic: {
				Strategy:         RecoveryStrategyAutomatic,
				Timeout:          5 * time.Minute,
				RetryAttempts:    3,
				RetryInterval:    30 * time.Second,
				RequireConsensus: false,
				ForcedThreshold:  10 * time.Minute,
			},
			RecoveryStrategyGraceful: {
				Strategy:         RecoveryStrategyGraceful,
				Timeout:          10 * time.Minute,
				RetryAttempts:    5,
				RetryInterval:    60 * time.Second,
				RequireConsensus: true,
				ForcedThreshold:  20 * time.Minute,
			},
		},
		activeRecoveries: make(map[string]*RecoveryOperation),
@@ -677,7 +677,7 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {

	// Store health check history
	if _, exists := nm.healthChecker.healthHistory[peer.String()]; !exists {
		nm.healthChecker.healthHistory[peer.String()] = []*NetworkHealthCheckResult{}
	}
	nm.healthChecker.healthHistory[peer.String()] = append(
		nm.healthChecker.healthHistory[peer.String()],
@@ -907,7 +907,7 @@ func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID s
	}
}

func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *NetworkHealthCheckResult {
	start := time.Now()

	// In a real implementation, this would perform actual health checks
@@ -950,12 +950,12 @@ func (nm *NetworkManagerImpl) testConnection(ctx context.Context, peerID string)
	}

	return &ConnectionInfo{
		Connected:   connected,
		Latency:     latency,
		PacketLoss:  0.0,
		Bandwidth:   1000000, // 1 Mbps placeholder
		LastChecked: time.Now(),
		ErrorCount:  0,
	}
}

@@ -1024,14 +1024,14 @@ func (nm *NetworkManagerImpl) calculateOverallNetworkHealth() float64 {
	return float64(nm.stats.ConnectedNodes) / float64(nm.stats.TotalNodes)
}

func (nm *NetworkManagerImpl) determineNodeStatus(result *NetworkHealthCheckResult) NodeStatus {
	if result.Success {
		return NodeStatusHealthy
	}
	return NodeStatusUnreachable
}

func (nm *NetworkManagerImpl) calculateHealthScore(result *NetworkHealthCheckResult) float64 {
	if result.Success {
		return 1.0
	}

@@ -7,39 +7,39 @@ import (
	"sync"
	"time"

	"chorus/pkg/config"
	"chorus/pkg/dht"
	"chorus/pkg/ucxl"
	"github.com/libp2p/go-libp2p/core/peer"
)

// ReplicationManagerImpl implements ReplicationManager interface
type ReplicationManagerImpl struct {
	mu             sync.RWMutex
	dht            *dht.DHT
	config         *config.Config
	replicationMap map[string]*ReplicationStatus
	repairQueue    chan *RepairRequest
	rebalanceQueue chan *RebalanceRequest
	consistentHash ConsistentHashing
	policy         *ReplicationPolicy
	stats          *ReplicationStatistics
	running        bool
}

// RepairRequest represents a repair request
type RepairRequest struct {
	Address     ucxl.Address
	RequestedBy string
	Priority    Priority
	RequestTime time.Time
}

// RebalanceRequest represents a rebalance request
type RebalanceRequest struct {
	Reason      string
	RequestedBy string
	RequestTime time.Time
}

// NewReplicationManagerImpl creates a new replication manager implementation
@@ -220,10 +220,10 @@ func (rm *ReplicationManagerImpl) BalanceReplicas(ctx context.Context) (*Rebalan
	start := time.Now()

	result := &RebalanceResult{
		RebalanceTime:       0,
		RebalanceSuccessful: false,
		Errors:              []string{},
		RebalancedAt:        time.Now(),
	}

	// Get current cluster topology
@@ -462,7 +462,7 @@ func (rm *ReplicationManagerImpl) discoverReplicas(ctx context.Context, address
	// For now, we'll simulate some replicas
	peers := rm.dht.GetConnectedPeers()
	if len(peers) > 0 {
		status.CurrentReplicas = minInt(len(peers), rm.policy.DefaultFactor)
		status.HealthyReplicas = status.CurrentReplicas

		for i, peer := range peers {
@@ -630,15 +630,15 @@ func (rm *ReplicationManagerImpl) isNodeOverloaded(nodeID string) bool {

// RebalanceMove represents a replica move operation
type RebalanceMove struct {
	Address  ucxl.Address `json:"address"`
	FromNode string       `json:"from_node"`
	ToNode   string       `json:"to_node"`
	Priority Priority     `json:"priority"`
	Reason   string       `json:"reason"`
}

// Utility functions
func minInt(a, b int) int {
	if a < b {
		return a
	}

@@ -20,21 +20,21 @@ import (

// SecurityManager handles all security aspects of the distributed system
type SecurityManager struct {
	mu              sync.RWMutex
	config          *config.Config
	tlsConfig       *TLSConfig
	authManager     *AuthenticationManager
	authzManager    *AuthorizationManager
	auditLogger     *SecurityAuditLogger
	nodeAuth        *NodeAuthentication
	encryption      *DistributionEncryption
	certificateAuth *CertificateAuthority

	// Security state
	trustedNodes     map[string]*TrustedNode
	activeSessions   map[string]*SecuritySession
	securityPolicies map[string]*SecurityPolicy
	threatDetector   *ThreatDetector

	// Configuration
	tlsEnabled bool
@@ -45,28 +45,28 @@ type SecurityManager struct {

// TLSConfig manages TLS configuration for secure communications
type TLSConfig struct {
	ServerConfig     *tls.Config
	ClientConfig     *tls.Config
	CertificatePath  string
	PrivateKeyPath   string
	CAPath           string
	MinTLSVersion    uint16
	CipherSuites     []uint16
	CurvePreferences []tls.CurveID
	ClientAuth       tls.ClientAuthType
	VerifyConnection func(tls.ConnectionState) error
}

// AuthenticationManager handles node and user authentication
type AuthenticationManager struct {
	mu              sync.RWMutex
	providers       map[string]AuthProvider
	tokenValidator  TokenValidator
	sessionManager  *SessionManager
	multiFactorAuth *MultiFactorAuth
	credentialStore *CredentialStore
	loginAttempts   map[string]*LoginAttempts
	authPolicies    map[string]*AuthPolicy
}

// AuthProvider interface for different authentication methods
@@ -80,14 +80,14 @@ type AuthProvider interface {

// Credentials represents authentication credentials
type Credentials struct {
	Type        CredentialType         `json:"type"`
	Username    string                 `json:"username,omitempty"`
	Password    string                 `json:"password,omitempty"`
	Token       string                 `json:"token,omitempty"`
	Certificate *x509.Certificate      `json:"certificate,omitempty"`
	Signature   []byte                 `json:"signature,omitempty"`
	Challenge   string                 `json:"challenge,omitempty"`
	Metadata    map[string]interface{} `json:"metadata,omitempty"`
}

// CredentialType represents different types of credentials
@@ -104,15 +104,15 @@ const (

// AuthResult represents the result of authentication
type AuthResult struct {
	Success       bool                   `json:"success"`
	UserID        string                 `json:"user_id"`
	Roles         []string               `json:"roles"`
	Permissions   []string               `json:"permissions"`
	TokenPair     *TokenPair             `json:"token_pair"`
	SessionID     string                 `json:"session_id"`
	ExpiresAt     time.Time              `json:"expires_at"`
	Metadata      map[string]interface{} `json:"metadata"`
	FailureReason string                 `json:"failure_reason,omitempty"`
}

// TokenPair represents access and refresh tokens
@@ -140,13 +140,13 @@ type TokenClaims struct {

// AuthorizationManager handles authorization and access control
type AuthorizationManager struct {
	mu              sync.RWMutex
	policyEngine    PolicyEngine
	rbacManager     *RBACManager
	aclManager      *ACLManager
	resourceManager *ResourceManager
	permissionCache *PermissionCache
	authzPolicies   map[string]*AuthorizationPolicy
}

// PolicyEngine interface for policy evaluation
@@ -168,13 +168,13 @@ type AuthorizationRequest struct {

// AuthorizationResult represents the result of authorization
type AuthorizationResult struct {
	Decision       AuthorizationDecision  `json:"decision"`
	Reason         string                 `json:"reason"`
	Policies       []string               `json:"applied_policies"`
	Conditions     []string               `json:"conditions"`
	TTL            time.Duration          `json:"ttl"`
	Metadata       map[string]interface{} `json:"metadata"`
	EvaluationTime time.Duration          `json:"evaluation_time"`
}

// AuthorizationDecision represents authorization decisions
@@ -188,13 +188,13 @@ const (

// SecurityAuditLogger handles security event logging
type SecurityAuditLogger struct {
	mu           sync.RWMutex
	loggers      []SecurityLogger
	eventBuffer  []*SecurityEvent
	alertManager *SecurityAlertManager
	compliance   *ComplianceManager
	retention    *AuditRetentionPolicy
	enabled      bool
}

// SecurityLogger interface for security event logging
@@ -206,22 +206,22 @@ type SecurityLogger interface {

// SecurityEvent represents a security event
type SecurityEvent struct {
	EventID     string                 `json:"event_id"`
	EventType   SecurityEventType      `json:"event_type"`
	Severity    SecuritySeverity       `json:"severity"`
	Timestamp   time.Time              `json:"timestamp"`
	UserID      string                 `json:"user_id,omitempty"`
	NodeID      string                 `json:"node_id,omitempty"`
	Resource    string                 `json:"resource,omitempty"`
	Action      string                 `json:"action,omitempty"`
	Result      string                 `json:"result"`
	Message     string                 `json:"message"`
	Details     map[string]interface{} `json:"details"`
	IPAddress   string                 `json:"ip_address,omitempty"`
	UserAgent   string                 `json:"user_agent,omitempty"`
	SessionID   string                 `json:"session_id,omitempty"`
	RequestID   string                 `json:"request_id,omitempty"`
	Fingerprint string                 `json:"fingerprint"`
}

// SecurityEventType represents different types of security events
@@ -242,12 +242,12 @@ const (
type SecuritySeverity string

const (
	SecuritySeverityDebug    SecuritySeverity = "debug"
	SecuritySeverityInfo     SecuritySeverity = "info"
	SecuritySeverityWarning  SecuritySeverity = "warning"
	SecuritySeverityError    SecuritySeverity = "error"
	SecuritySeverityCritical SecuritySeverity = "critical"
	SecuritySeverityAlert    SecuritySeverity = "alert"
)

// NodeAuthentication handles node-to-node authentication
@@ -262,16 +262,16 @@ type NodeAuthentication struct {

// TrustedNode represents a trusted node in the network
type TrustedNode struct {
	NodeID       string                 `json:"node_id"`
	PublicKey    []byte                 `json:"public_key"`
	Certificate  *x509.Certificate      `json:"certificate"`
	Roles        []string               `json:"roles"`
	Capabilities []string               `json:"capabilities"`
	TrustLevel   TrustLevel             `json:"trust_level"`
	LastSeen     time.Time              `json:"last_seen"`
	VerifiedAt   time.Time              `json:"verified_at"`
	Metadata     map[string]interface{} `json:"metadata"`
	Status       NodeStatus             `json:"status"`
}

// TrustLevel represents the trust level of a node
@@ -287,18 +287,18 @@ const (

// SecuritySession represents an active security session
type SecuritySession struct {
	SessionID    string                 `json:"session_id"`
	UserID       string                 `json:"user_id"`
	NodeID       string                 `json:"node_id"`
	Roles        []string               `json:"roles"`
	Permissions  []string               `json:"permissions"`
	CreatedAt    time.Time              `json:"created_at"`
	ExpiresAt    time.Time              `json:"expires_at"`
	LastActivity time.Time              `json:"last_activity"`
	IPAddress    string                 `json:"ip_address"`
	UserAgent    string                 `json:"user_agent"`
	Metadata     map[string]interface{} `json:"metadata"`
	Status       SessionStatus          `json:"status"`
}

// SessionStatus represents session status
|
||||
@@ -313,61 +313,61 @@ const (
 
 // ThreatDetector detects security threats and anomalies
 type ThreatDetector struct {
 	mu                   sync.RWMutex
 	detectionRules       []*ThreatDetectionRule
 	behaviorAnalyzer     *BehaviorAnalyzer
 	anomalyDetector      *AnomalyDetector
 	threatIntelligence   *ThreatIntelligence
 	activeThreats        map[string]*ThreatEvent
 	mitigationStrategies map[ThreatType]*MitigationStrategy
 }
 
 // ThreatDetectionRule represents a threat detection rule
 type ThreatDetectionRule struct {
 	RuleID      string                 `json:"rule_id"`
 	Name        string                 `json:"name"`
 	Description string                 `json:"description"`
 	ThreatType  ThreatType             `json:"threat_type"`
 	Severity    SecuritySeverity       `json:"severity"`
 	Conditions  []*ThreatCondition     `json:"conditions"`
 	Actions     []*ThreatAction        `json:"actions"`
 	Enabled     bool                   `json:"enabled"`
 	CreatedAt   time.Time              `json:"created_at"`
 	UpdatedAt   time.Time              `json:"updated_at"`
 	Metadata    map[string]interface{} `json:"metadata"`
 }
 
 // ThreatType represents different types of threats
 type ThreatType string
 
 const (
 	ThreatTypeBruteForce          ThreatType = "brute_force"
 	ThreatTypeUnauthorized        ThreatType = "unauthorized_access"
 	ThreatTypeDataExfiltration    ThreatType = "data_exfiltration"
 	ThreatTypeDoS                 ThreatType = "denial_of_service"
 	ThreatTypePrivilegeEscalation ThreatType = "privilege_escalation"
 	ThreatTypeAnomalous           ThreatType = "anomalous_behavior"
 	ThreatTypeMaliciousCode       ThreatType = "malicious_code"
 	ThreatTypeInsiderThreat       ThreatType = "insider_threat"
 )
 
 // CertificateAuthority manages certificate generation and validation
 type CertificateAuthority struct {
 	mu              sync.RWMutex
 	rootCA          *x509.Certificate
 	rootKey         interface{}
 	intermediateCA  *x509.Certificate
 	intermediateKey interface{}
 	certStore       *CertificateStore
 	crlManager      *CRLManager
 	ocspResponder   *OCSPResponder
 }
 
 // DistributionEncryption handles encryption for distributed communications
 type DistributionEncryption struct {
 	mu                sync.RWMutex
 	keyManager        *DistributionKeyManager
 	encryptionSuite   *EncryptionSuite
 	keyRotationPolicy *KeyRotationPolicy
 	encryptionMetrics *EncryptionMetrics
 }
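The `mitigationStrategies map[ThreatType]*MitigationStrategy` field on `ThreatDetector` implies per-threat-type dispatch at detection time. A reduced sketch of that lookup, under the assumption that `MitigationStrategy` carries at least a name (its real fields are not shown in this hunk):

```go
package main

import "fmt"

// ThreatType mirrors the typed string enum from the diff (subset shown).
type ThreatType string

const (
	ThreatTypeBruteForce ThreatType = "brute_force"
	ThreatTypeDoS        ThreatType = "denial_of_service"
)

// MitigationStrategy is reduced to a name here; the real struct is not in this hunk.
type MitigationStrategy struct{ Name string }

// mitigationFor dispatches on threat type, falling back to a default strategy.
func mitigationFor(strategies map[ThreatType]*MitigationStrategy, t ThreatType) string {
	if s, ok := strategies[t]; ok {
		return s.Name
	}
	return "default"
}

func main() {
	m := map[ThreatType]*MitigationStrategy{
		ThreatTypeBruteForce: {Name: "lockout"},
	}
	fmt.Println(mitigationFor(m, ThreatTypeBruteForce)) // lockout
	fmt.Println(mitigationFor(m, ThreatTypeDoS))        // default
}
```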
@@ -379,13 +379,13 @@ func NewSecurityManager(config *config.Config) (*SecurityManager, error) {
 	}
 
 	sm := &SecurityManager{
 		config:            config,
 		trustedNodes:      make(map[string]*TrustedNode),
 		activeSessions:    make(map[string]*SecuritySession),
 		securityPolicies:  make(map[string]*SecurityPolicy),
 		tlsEnabled:        true,
 		mutualTLSEnabled:  true,
 		auditingEnabled:   true,
 		encryptionEnabled: true,
 	}
 
@@ -508,12 +508,12 @@ func (sm *SecurityManager) Authenticate(ctx context.Context, credentials *Creden
 	// Log authentication attempt
 	sm.logSecurityEvent(ctx, &SecurityEvent{
 		EventType: EventTypeAuthentication,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
 		Action:    "authenticate",
 		Message:   "Authentication attempt",
 		Details: map[string]interface{}{
 			"credential_type": credentials.Type,
 			"username":        credentials.Username,
 		},
 	})
 
@@ -525,7 +525,7 @@ func (sm *SecurityManager) Authorize(ctx context.Context, request *Authorization
 	// Log authorization attempt
 	sm.logSecurityEvent(ctx, &SecurityEvent{
 		EventType: EventTypeAuthorization,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
 		UserID:    request.UserID,
 		Resource:  request.Resource,
 		Action:    request.Action,
@@ -554,7 +554,7 @@ func (sm *SecurityManager) ValidateNodeIdentity(ctx context.Context, nodeID stri
 	// Log successful validation
 	sm.logSecurityEvent(ctx, &SecurityEvent{
 		EventType: EventTypeAuthentication,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
 		NodeID:    nodeID,
 		Action:    "validate_node_identity",
 		Result:    "success",
@@ -609,7 +609,7 @@ func (sm *SecurityManager) AddTrustedNode(ctx context.Context, node *TrustedNode
 	// Log node addition
 	sm.logSecurityEvent(ctx, &SecurityEvent{
 		EventType: EventTypeConfiguration,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
 		NodeID:    node.NodeID,
 		Action:    "add_trusted_node",
 		Result:    "success",
@@ -660,11 +660,11 @@ func (sm *SecurityManager) generateSelfSignedCertificate() ([]byte, []byte, erro
 			StreetAddress: []string{""},
 			PostalCode:    []string{""},
 		},
 		NotBefore:   time.Now(),
 		NotAfter:    time.Now().Add(365 * 24 * time.Hour),
 		KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
 		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
 		IPAddresses: []net.IP{net.IPv4(127, 0, 0, 1), net.IPv6loopback},
 	}
 
 	// This is a simplified implementation
@@ -765,8 +765,8 @@ func NewDistributionEncryption(config *config.Config) (*DistributionEncryption,
 
 func NewThreatDetector(config *config.Config) (*ThreatDetector, error) {
 	return &ThreatDetector{
 		detectionRules:       []*ThreatDetectionRule{},
 		activeThreats:        make(map[string]*ThreatEvent),
 		mitigationStrategies: make(map[ThreatType]*MitigationStrategy),
 	}, nil
 }
 
@@ -11,8 +11,8 @@ import (
 	"strings"
 	"time"
 
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )
 
 // DefaultDirectoryAnalyzer provides comprehensive directory structure analysis
@@ -268,11 +268,11 @@ func NewRelationshipAnalyzer() *RelationshipAnalyzer {
 // AnalyzeStructure analyzes directory organization patterns
 func (da *DefaultDirectoryAnalyzer) AnalyzeStructure(ctx context.Context, dirPath string) (*DirectoryStructure, error) {
 	structure := &DirectoryStructure{
 		Path:         dirPath,
 		FileTypes:    make(map[string]int),
 		Languages:    make(map[string]int),
 		Dependencies: []string{},
 		AnalyzedAt:   time.Now(),
 	}
 
 	// Walk the directory tree
@@ -340,9 +340,9 @@ func (da *DefaultDirectoryAnalyzer) DetectConventions(ctx context.Context, dirPa
 		OrganizationalPatterns: []*OrganizationalPattern{},
 		Consistency:            0.0,
 		Violations:             []*Violation{},
-		Recommendations:        []*Recommendation{},
+		Recommendations:        []*BasicRecommendation{},
 		AppliedStandards:       []string{},
 		AnalyzedAt:             time.Now(),
 	}
 
 	// Collect all files and directories
@@ -385,39 +385,39 @@ func (da *DefaultDirectoryAnalyzer) IdentifyPurpose(ctx context.Context, structu
 		purpose    string
 		confidence float64
 	}{
 		"src":          {"Source code repository", 0.9},
 		"source":       {"Source code repository", 0.9},
 		"lib":          {"Library code", 0.8},
 		"libs":         {"Library code", 0.8},
 		"vendor":       {"Third-party dependencies", 0.9},
 		"node_modules": {"Node.js dependencies", 0.95},
 		"build":        {"Build artifacts", 0.9},
 		"dist":         {"Distribution files", 0.9},
 		"bin":          {"Binary executables", 0.9},
 		"test":         {"Test code", 0.9},
 		"tests":        {"Test code", 0.9},
 		"docs":         {"Documentation", 0.9},
 		"doc":          {"Documentation", 0.9},
 		"config":       {"Configuration files", 0.9},
 		"configs":      {"Configuration files", 0.9},
 		"scripts":      {"Utility scripts", 0.8},
 		"tools":        {"Development tools", 0.8},
 		"assets":       {"Static assets", 0.8},
 		"public":       {"Public web assets", 0.8},
 		"static":       {"Static files", 0.8},
 		"templates":    {"Template files", 0.8},
 		"migrations":   {"Database migrations", 0.9},
 		"models":       {"Data models", 0.8},
 		"views":        {"View layer", 0.8},
 		"controllers":  {"Controller layer", 0.8},
 		"services":     {"Service layer", 0.8},
 		"components":   {"Reusable components", 0.8},
 		"modules":      {"Modular components", 0.8},
 		"packages":     {"Package organization", 0.7},
 		"internal":     {"Internal implementation", 0.8},
 		"cmd":          {"Command-line applications", 0.9},
 		"api":          {"API implementation", 0.8},
 		"pkg":          {"Go package directory", 0.8},
 	}
 
 	if p, exists := purposes[dirName]; exists {
@@ -459,12 +459,12 @@ func (da *DefaultDirectoryAnalyzer) IdentifyPurpose(ctx context.Context, structu
 // AnalyzeRelationships analyzes relationships between subdirectories
 func (da *DefaultDirectoryAnalyzer) AnalyzeRelationships(ctx context.Context, dirPath string) (*RelationshipAnalysis, error) {
 	analysis := &RelationshipAnalysis{
 		Dependencies:       []*DirectoryDependency{},
 		Relationships:      []*DirectoryRelation{},
 		CouplingMetrics:    &CouplingMetrics{},
 		ModularityScore:    0.0,
 		ArchitecturalStyle: "unknown",
 		AnalyzedAt:         time.Now(),
 	}
 
 	// Find subdirectories
@@ -568,20 +568,20 @@ func (da *DefaultDirectoryAnalyzer) GenerateHierarchy(ctx context.Context, rootP
 
 func (da *DefaultDirectoryAnalyzer) mapExtensionToLanguage(ext string) string {
 	langMap := map[string]string{
 		".go":    "go",
 		".py":    "python",
 		".js":    "javascript",
 		".jsx":   "javascript",
 		".ts":    "typescript",
 		".tsx":   "typescript",
 		".java":  "java",
 		".c":     "c",
 		".cpp":   "cpp",
 		".cs":    "csharp",
 		".php":   "php",
 		".rb":    "ruby",
 		".rs":    "rust",
 		".kt":    "kotlin",
 		".swift": "swift",
 	}
 
@@ -996,7 +996,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeNamingPattern(paths []string, scope s
 			Type:        "naming",
 			Description: fmt.Sprintf("Naming convention for %ss", scope),
 			Confidence:  da.calculateNamingConsistency(names, convention),
-			Examples:    names[:min(5, len(names))],
+			Examples:    names[:minInt(5, len(names))],
 		},
 		Convention: convention,
 		Scope:      scope,
@@ -1100,12 +1100,12 @@ func (da *DefaultDirectoryAnalyzer) detectNamingStyle(name string) string {
 	return "unknown"
 }
 
-func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*Recommendation {
-	recommendations := []*Recommendation{}
+func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*BasicRecommendation {
+	recommendations := []*BasicRecommendation{}
 
 	// Recommend consistency improvements
 	if analysis.Consistency < 0.8 {
-		recommendations = append(recommendations, &Recommendation{
+		recommendations = append(recommendations, &BasicRecommendation{
 			Type:        "consistency",
 			Title:       "Improve naming consistency",
 			Description: "Consider standardizing naming conventions across the project",
@@ -1118,7 +1118,7 @@ func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *
 
 	// Recommend architectural improvements
 	if len(analysis.OrganizationalPatterns) == 0 {
-		recommendations = append(recommendations, &Recommendation{
+		recommendations = append(recommendations, &BasicRecommendation{
 			Type:        "architecture",
 			Title:       "Consider architectural patterns",
 			Description: "Project structure could benefit from established architectural patterns",
@@ -1225,12 +1225,11 @@ func (da *DefaultDirectoryAnalyzer) extractImports(content string, patterns []*r
 
 func (da *DefaultDirectoryAnalyzer) isLocalDependency(importPath, fromDir, toDir string) bool {
 	// Simple heuristic: check if import path references the target directory
-	fromBase := filepath.Base(fromDir)
 	toBase := filepath.Base(toDir)
 
 	return strings.Contains(importPath, toBase) ||
 		strings.Contains(importPath, "../"+toBase) ||
 		strings.Contains(importPath, "./"+toBase)
 }
 
 func (da *DefaultDirectoryAnalyzer) analyzeDirectoryRelationships(subdirs []string, dependencies []*DirectoryDependency) []*DirectoryRelation {
@@ -1399,7 +1398,7 @@ func (da *DefaultDirectoryAnalyzer) walkDirectoryHierarchy(rootPath string, curr
 
 func (da *DefaultDirectoryAnalyzer) generateUCXLAddress(path string) (*ucxl.Address, error) {
 	cleanPath := filepath.Clean(path)
-	addr, err := ucxl.ParseAddress(fmt.Sprintf("dir://%s", cleanPath))
+	addr, err := ucxl.Parse(fmt.Sprintf("dir://%s", cleanPath))
 	if err != nil {
 		return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
 	}
@@ -1417,7 +1416,7 @@ func (da *DefaultDirectoryAnalyzer) generateDirectorySummary(structure *Director
 		langs = append(langs, fmt.Sprintf("%s (%d)", lang, count))
 	}
 	sort.Strings(langs)
-	summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:min(3, len(langs))], ", "))
+	summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:minInt(3, len(langs))], ", "))
 	}
 
 	return summary
@@ -1497,7 +1496,7 @@ func (da *DefaultDirectoryAnalyzer) calculateDirectorySpecificity(structure *Dir
 	return specificity
 }
 
-func min(a, b int) int {
+func minInt(a, b int) int {
 	if a < b {
 		return a
 	}
 
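The `min` → `minInt` rename above avoids shadowing the `min` builtin introduced in Go 1.21 (and the ambiguity it causes in older-toolchain builds). A standalone sketch of the renamed helper together with the clamped-slice idiom the diff uses it for:

```go
package main

import "fmt"

// minInt returns the smaller of two ints, matching the renamed helper in the diff.
func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	names := []string{"alpha", "beta", "gamma"}
	// The diff clamps example slices the same way: names[:minInt(5, len(names))]
	// never slices past the end even when fewer than 5 names exist.
	fmt.Println(names[:minInt(5, len(names))]) // [alpha beta gamma]
}
```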
@@ -2,9 +2,9 @@ package intelligence
 
 import (
 	"context"
 	"sync"
 	"time"
 
-	"chorus/pkg/ucxl"
+	slurpContext "chorus/pkg/slurp/context"
 )
 
@@ -138,26 +138,26 @@ type RAGIntegration interface {
 
 // ProjectGoal represents a high-level project objective
 type ProjectGoal struct {
 	ID          string     `json:"id"`                 // Unique identifier
 	Name        string     `json:"name"`               // Goal name
 	Description string     `json:"description"`        // Detailed description
 	Keywords    []string   `json:"keywords"`           // Associated keywords
 	Priority    int        `json:"priority"`           // Priority level (1=highest)
 	Phase       string     `json:"phase"`              // Project phase
 	Metrics     []string   `json:"metrics"`            // Success metrics
 	Owner       string     `json:"owner"`              // Goal owner
 	Deadline    *time.Time `json:"deadline,omitempty"` // Target deadline
 }
 
 // RoleProfile defines context requirements for different roles
 type RoleProfile struct {
 	Role             string                       `json:"role"`              // Role identifier
 	AccessLevel      slurpContext.RoleAccessLevel `json:"access_level"`      // Required access level
 	RelevantTags     []string                     `json:"relevant_tags"`     // Relevant context tags
 	ContextScope     []string                     `json:"context_scope"`     // Scope of interest
 	InsightTypes     []string                     `json:"insight_types"`     // Types of insights needed
 	QualityThreshold float64                      `json:"quality_threshold"` // Minimum quality threshold
 	Preferences      map[string]interface{}       `json:"preferences"`       // Role-specific preferences
 }
 
 // EngineConfig represents configuration for the intelligence engine
@@ -168,59 +168,64 @@ type EngineConfig struct {
 	MaxFileSize int64 `json:"max_file_size"` // Maximum file size to analyze
 
 	// RAG integration settings
 	RAGEndpoint string        `json:"rag_endpoint"` // RAG system endpoint
 	RAGTimeout  time.Duration `json:"rag_timeout"`  // RAG query timeout
 	RAGEnabled  bool          `json:"rag_enabled"`  // Whether RAG is enabled
+	EnableRAG   bool          `json:"enable_rag"`   // Legacy toggle for RAG enablement
 
 	// Feature toggles
 	EnableGoalAlignment    bool `json:"enable_goal_alignment"`
 	EnablePatternDetection bool `json:"enable_pattern_detection"`
 	EnableRoleAware        bool `json:"enable_role_aware"`
 
 	// Quality settings
 	MinConfidenceThreshold float64 `json:"min_confidence_threshold"` // Minimum confidence for results
 	RequireValidation      bool    `json:"require_validation"`       // Whether validation is required
 
 	// Performance settings
 	CacheEnabled bool          `json:"cache_enabled"` // Whether caching is enabled
 	CacheTTL     time.Duration `json:"cache_ttl"`     // Cache TTL
 
 	// Role profiles
 	RoleProfiles map[string]*RoleProfile `json:"role_profiles"` // Role-specific profiles
 
 	// Project goals
 	ProjectGoals []*ProjectGoal `json:"project_goals"` // Active project goals
 }
 
 // EngineStatistics represents performance statistics for the engine
 type EngineStatistics struct {
 	TotalAnalyses       int64         `json:"total_analyses"`        // Total analyses performed
 	SuccessfulAnalyses  int64         `json:"successful_analyses"`   // Successful analyses
 	FailedAnalyses      int64         `json:"failed_analyses"`       // Failed analyses
 	AverageAnalysisTime time.Duration `json:"average_analysis_time"` // Average analysis time
 	CacheHitRate        float64       `json:"cache_hit_rate"`        // Cache hit rate
 	RAGQueriesPerformed int64         `json:"rag_queries_performed"` // RAG queries made
 	AverageConfidence   float64       `json:"average_confidence"`    // Average confidence score
 	FilesAnalyzed       int64         `json:"files_analyzed"`        // Total files analyzed
 	DirectoriesAnalyzed int64         `json:"directories_analyzed"`  // Total directories analyzed
 	PatternsDetected    int64         `json:"patterns_detected"`     // Patterns detected
 	LastResetAt         time.Time     `json:"last_reset_at"`         // When stats were last reset
 }
 
 // FileAnalysis represents the result of file analysis
 type FileAnalysis struct {
 	FilePath     string                 `json:"file_path"`     // Path to analyzed file
 	Language     string                 `json:"language"`      // Detected language
 	LanguageConf float64                `json:"language_conf"` // Language detection confidence
 	FileType     string                 `json:"file_type"`     // File type classification
 	Size         int64                  `json:"size"`          // File size in bytes
 	LineCount    int                    `json:"line_count"`    // Number of lines
 	Complexity   float64                `json:"complexity"`    // Code complexity score
 	Dependencies []string               `json:"dependencies"`  // Identified dependencies
 	Exports      []string               `json:"exports"`       // Exported symbols/functions
 	Imports      []string               `json:"imports"`       // Import statements
 	Functions    []string               `json:"functions"`     // Function/method names
 	Classes      []string               `json:"classes"`       // Class names
 	Variables    []string               `json:"variables"`     // Variable names
 	Comments     []string               `json:"comments"`      // Extracted comments
 	TODOs        []string               `json:"todos"`         // TODO comments
 	Metadata     map[string]interface{} `json:"metadata"`      // Additional metadata
 	AnalyzedAt   time.Time              `json:"analyzed_at"`   // When analysis was performed
 }
 
 // DefaultIntelligenceEngine provides a complete implementation of the IntelligenceEngine interface
@@ -250,6 +255,10 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
 		config = DefaultEngineConfig()
 	}
 
+	if config.EnableRAG {
+		config.RAGEnabled = true
+	}
+
 	// Initialize file analyzer
 	fileAnalyzer := NewDefaultFileAnalyzer(config)
 
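The `EnableRAG` → `RAGEnabled` bridge above is a one-way normalization: the legacy toggle can switch RAG on but never force it off. A reduced sketch of that pattern, with the struct trimmed to the two toggles from the diff:

```go
package main

import "fmt"

// EngineConfig is trimmed to the two RAG toggles from the diff.
type EngineConfig struct {
	RAGEnabled bool // canonical flag
	EnableRAG  bool // legacy toggle kept for old callers
}

// normalize maps the legacy toggle onto the canonical flag, mirroring the
// check at the top of the constructor.
func normalize(c *EngineConfig) {
	if c.EnableRAG {
		c.RAGEnabled = true
	}
}

func main() {
	legacy := &EngineConfig{EnableRAG: true}
	normalize(legacy)
	fmt.Println(legacy.RAGEnabled) // true

	// A false legacy toggle never clears an already-set canonical flag.
	both := &EngineConfig{RAGEnabled: true, EnableRAG: false}
	normalize(both)
	fmt.Println(both.RAGEnabled) // true
}
```

Keeping the normalization one-directional means old callers that only set `EnableRAG` keep working, while new callers that set `RAGEnabled` directly are never overridden.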
@@ -273,13 +282,22 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
 		directoryAnalyzer: dirAnalyzer,
 		patternDetector:   patternDetector,
 		ragIntegration:    ragIntegration,
 		stats: &EngineStatistics{
 			LastResetAt: time.Now(),
 		},
 		cache:        &sync.Map{},
 		projectGoals: config.ProjectGoals,
 		roleProfiles: config.RoleProfiles,
 	}
 
 	return engine, nil
 }
+
+// NewIntelligenceEngine is a convenience wrapper expected by legacy callers.
+func NewIntelligenceEngine(config *EngineConfig) *DefaultIntelligenceEngine {
+	engine, err := NewDefaultIntelligenceEngine(config)
+	if err != nil {
+		panic(err)
+	}
+	return engine
+}
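`NewIntelligenceEngine` above converts the constructor error into a panic so legacy call sites can stay expression-shaped. The same trade-off can be written once as a generic helper; this is a sketch of the pattern, not part of the diff, and `newEngine` is a hypothetical stand-in for the real constructor:

```go
package main

import (
	"errors"
	"fmt"
)

// must panics on error, collapsing (T, error) into T for call sites that
// treat construction failure as fatal.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

// newEngine is a hypothetical constructor standing in for NewDefaultIntelligenceEngine.
func newEngine(ok bool) (string, error) {
	if !ok {
		return "", errors.New("bad config")
	}
	return "engine", nil
}

func main() {
	e := must(newEngine(true))
	fmt.Println(e) // engine
}
```

The panic is only appropriate at program startup; callers that can recover from a bad config should keep using the error-returning constructor directly.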
@@ -4,14 +4,13 @@ import (
 	"context"
 	"fmt"
-	"io/ioutil"
 	"os"
 	"path/filepath"
 	"strings"
 	"sync"
 	"time"
 
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )
 
 // AnalyzeFile analyzes a single file and generates contextual understanding
@@ -136,8 +135,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeDirectory(ctx context.Context, dirPat
 	}()
 
 	// Analyze directory structure
-	structure, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath)
-	if err != nil {
+	if _, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath); err != nil {
 		e.updateStats("directory_analysis", time.Since(start), false)
 		return nil, fmt.Errorf("failed to analyze directory structure: %w", err)
 	}
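The rewrite above drops the unused `structure` variable by discarding the result with the blank identifier and scoping `err` to the `if` statement. A minimal sketch of that shape, with a hypothetical `analyze` standing in for the analyzer call:

```go
package main

import (
	"errors"
	"fmt"
)

// analyze is a hypothetical stand-in for directoryAnalyzer.AnalyzeStructure.
func analyze(ok bool) (string, error) {
	if !ok {
		return "", errors.New("analysis failed")
	}
	return "structure", nil
}

func main() {
	// The result is discarded with `_` and err lives only inside the if
	// statement, so no unused variable survives to trip the compiler.
	if _, err := analyze(false); err != nil {
		fmt.Println("error:", err) // error: analysis failed
	}
}
```

This matters in Go because an assigned-but-unused local variable is a compile error, not a warning; the `if _, err := ...; err != nil` form is the idiomatic fix when only the error is needed.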
@@ -232,7 +230,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeBatch(ctx context.Context, filePaths
 		wg.Add(1)
 		go func(path string) {
 			defer wg.Done()
 			semaphore <- struct{}{}        // Acquire semaphore
 			defer func() { <-semaphore }() // Release semaphore
 
 			ctxNode, err := e.AnalyzeFile(ctx, path, role)
@@ -430,7 +428,7 @@ func (e *DefaultIntelligenceEngine) readFileContent(filePath string) ([]byte, er
 func (e *DefaultIntelligenceEngine) generateUCXLAddress(filePath string) (*ucxl.Address, error) {
 	// Simple implementation - in reality this would be more sophisticated
 	cleanPath := filepath.Clean(filePath)
-	addr, err := ucxl.ParseAddress(fmt.Sprintf("file://%s", cleanPath))
+	addr, err := ucxl.Parse(fmt.Sprintf("file://%s", cleanPath))
 	if err != nil {
 		return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
 	}
@@ -640,6 +638,10 @@ func DefaultEngineConfig() *EngineConfig {
 		RAGEndpoint:            "",
 		RAGTimeout:             10 * time.Second,
 		RAGEnabled:             false,
+		EnableRAG:              false,
 		EnableGoalAlignment:    false,
 		EnablePatternDetection: false,
 		EnableRoleAware:        false,
 		MinConfidenceThreshold: 0.6,
 		RequireValidation:      true,
 		CacheEnabled:           true,
@@ -1,3 +1,6 @@
+//go:build integration
+// +build integration
+
 package intelligence
 
 import (
@@ -13,12 +16,12 @@ import (
 func TestIntelligenceEngine_Integration(t *testing.T) {
 	// Create test configuration
 	config := &EngineConfig{
 		EnableRAG:              false, // Disable RAG for testing
 		EnableGoalAlignment:    true,
 		EnablePatternDetection: true,
 		EnableRoleAware:        true,
 		MaxConcurrentAnalysis:  2,
 		AnalysisTimeout:        30 * time.Second,
 		CacheTTL:               5 * time.Minute,
 		MinConfidenceThreshold: 0.5,
 	}
@@ -29,13 +32,13 @@ func TestIntelligenceEngine_Integration(t *testing.T) {

 	// Create test context node
 	testNode := &slurpContext.ContextNode{
 		Path:         "/test/example.go",
 		Summary:      "A Go service implementing user authentication",
 		Purpose:      "Handles user login and authentication for the web application",
 		Technologies: []string{"go", "jwt", "bcrypt"},
 		Tags:         []string{"authentication", "security", "web"},
-		CreatedAt:    time.Now(),
+		GeneratedAt:  time.Now(),
 		UpdatedAt:    time.Now(),
 	}

 	// Create test project goal
@@ -47,7 +50,7 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
 		Priority:    1,
 		Phase:       "development",
 		Deadline:    nil,
-		CreatedAt:   time.Now(),
+		GeneratedAt: time.Now(),
 	}

 	t.Run("AnalyzeFile", func(t *testing.T) {
@@ -220,9 +223,9 @@ func TestPatternDetector_DetectDesignPatterns(t *testing.T) {
 	ctx := context.Background()

 	tests := []struct {
 		name            string
 		filename        string
 		content         []byte
 		expectedPattern string
 	}{
 		{
@@ -652,7 +655,7 @@ func createTestContextNode(path, summary, purpose string, technologies, tags []s
 		Purpose:      purpose,
 		Technologies: technologies,
 		Tags:         tags,
-		CreatedAt:    time.Now(),
+		GeneratedAt:  time.Now(),
 		UpdatedAt:    time.Now(),
 	}
 }
@@ -665,7 +668,7 @@ func createTestProjectGoal(id, name, description string, keywords []string, prio
 		Keywords:    keywords,
 		Priority:    priority,
 		Phase:       phase,
-		CreatedAt:   time.Now(),
+		GeneratedAt: time.Now(),
 	}
 }
@@ -1,7 +1,6 @@
 package intelligence

 import (
-	"bufio"
 	"bytes"
 	"context"
 	"fmt"
@@ -33,12 +32,12 @@ type CodeStructureAnalyzer struct {

 // LanguagePatterns contains regex patterns for different language constructs
 type LanguagePatterns struct {
 	Functions []*regexp.Regexp
 	Classes   []*regexp.Regexp
 	Variables []*regexp.Regexp
 	Imports   []*regexp.Regexp
 	Comments  []*regexp.Regexp
 	TODOs     []*regexp.Regexp
 }

 // MetadataExtractor extracts file system metadata
@@ -65,66 +64,66 @@ func NewLanguageDetector() *LanguageDetector {

 	// Map file extensions to languages
 	extensions := map[string]string{
 		".go":         "go",
 		".py":         "python",
 		".js":         "javascript",
 		".jsx":        "javascript",
 		".ts":         "typescript",
 		".tsx":        "typescript",
 		".java":       "java",
 		".c":          "c",
 		".cpp":        "cpp",
 		".cc":         "cpp",
 		".cxx":        "cpp",
 		".h":          "c",
 		".hpp":        "cpp",
 		".cs":         "csharp",
 		".php":        "php",
 		".rb":         "ruby",
 		".rs":         "rust",
 		".kt":         "kotlin",
 		".swift":      "swift",
 		".m":          "objective-c",
 		".mm":         "objective-c",
 		".scala":      "scala",
 		".clj":        "clojure",
 		".hs":         "haskell",
 		".ex":         "elixir",
 		".exs":        "elixir",
 		".erl":        "erlang",
 		".lua":        "lua",
 		".pl":         "perl",
 		".r":          "r",
 		".sh":         "shell",
 		".bash":       "shell",
 		".zsh":        "shell",
 		".fish":       "shell",
 		".sql":        "sql",
 		".html":       "html",
 		".htm":        "html",
 		".css":        "css",
 		".scss":       "scss",
 		".sass":       "sass",
 		".less":       "less",
 		".xml":        "xml",
 		".json":       "json",
 		".yaml":       "yaml",
 		".yml":        "yaml",
 		".toml":       "toml",
 		".ini":        "ini",
 		".cfg":        "ini",
 		".conf":       "config",
 		".md":         "markdown",
 		".rst":        "rst",
 		".tex":        "latex",
 		".proto":      "protobuf",
 		".tf":         "terraform",
 		".hcl":        "hcl",
 		".dockerfile": "dockerfile",
-		".dockerignore": "dockerignore",
 		".gitignore":  "gitignore",
 		".vim":        "vim",
 		".emacs":      "emacs",
 	}

 	for ext, lang := range extensions {
@@ -500,8 +499,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi

 	// Configuration files
 	if strings.Contains(filenameUpper, "CONFIG") ||
 		strings.Contains(filenameUpper, "CONF") ||
 		analysis.FileType == ".ini" || analysis.FileType == ".toml" {
 		purpose = "Configuration management"
 		confidence = 0.9
 		return purpose, confidence, nil
@@ -509,9 +508,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi

 	// Test files
 	if strings.Contains(filenameUpper, "TEST") ||
 		strings.Contains(filenameUpper, "SPEC") ||
 		strings.HasSuffix(filenameUpper, "_TEST.GO") ||
 		strings.HasSuffix(filenameUpper, "_TEST.PY") {
 		purpose = "Testing and quality assurance"
 		confidence = 0.9
 		return purpose, confidence, nil
@@ -519,8 +518,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi

 	// Documentation files
 	if analysis.FileType == ".md" || analysis.FileType == ".rst" ||
 		strings.Contains(filenameUpper, "README") ||
 		strings.Contains(filenameUpper, "DOC") {
 		purpose = "Documentation and guidance"
 		confidence = 0.9
 		return purpose, confidence, nil
@@ -528,8 +527,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi

 	// API files
 	if strings.Contains(filenameUpper, "API") ||
 		strings.Contains(filenameUpper, "ROUTER") ||
 		strings.Contains(filenameUpper, "HANDLER") {
 		purpose = "API endpoint management"
 		confidence = 0.8
 		return purpose, confidence, nil
@@ -537,9 +536,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi

 	// Database files
 	if strings.Contains(filenameUpper, "DB") ||
 		strings.Contains(filenameUpper, "DATABASE") ||
 		strings.Contains(filenameUpper, "MODEL") ||
 		strings.Contains(filenameUpper, "SCHEMA") {
 		purpose = "Data storage and management"
 		confidence = 0.8
 		return purpose, confidence, nil
@@ -547,9 +546,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi

 	// UI/Frontend files
 	if analysis.Language == "javascript" || analysis.Language == "typescript" ||
 		strings.Contains(filenameUpper, "COMPONENT") ||
 		strings.Contains(filenameUpper, "VIEW") ||
 		strings.Contains(filenameUpper, "UI") {
 		purpose = "User interface component"
 		confidence = 0.7
 		return purpose, confidence, nil
@@ -557,8 +556,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi

 	// Service/Business logic
 	if strings.Contains(filenameUpper, "SERVICE") ||
 		strings.Contains(filenameUpper, "BUSINESS") ||
 		strings.Contains(filenameUpper, "LOGIC") {
 		purpose = "Business logic implementation"
 		confidence = 0.7
 		return purpose, confidence, nil
@@ -566,8 +565,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi

 	// Utility files
 	if strings.Contains(filenameUpper, "UTIL") ||
 		strings.Contains(filenameUpper, "HELPER") ||
 		strings.Contains(filenameUpper, "COMMON") {
 		purpose = "Utility and helper functions"
 		confidence = 0.7
 		return purpose, confidence, nil
@@ -646,20 +645,20 @@ func (fa *DefaultFileAnalyzer) ExtractTechnologies(ctx context.Context, analysis

 	// Framework detection
 	frameworks := map[string]string{
 		"react":     "React",
 		"vue":       "Vue.js",
 		"angular":   "Angular",
 		"express":   "Express.js",
 		"django":    "Django",
 		"flask":     "Flask",
 		"spring":    "Spring",
 		"gin":       "Gin",
 		"echo":      "Echo",
 		"fastapi":   "FastAPI",
 		"bootstrap": "Bootstrap",
 		"tailwind":  "Tailwind CSS",
 		"material":  "Material UI",
 		"antd":      "Ant Design",
 	}

 	for pattern, tech := range frameworks {
@@ -832,12 +831,12 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
 	// Technology mapping based on common imports
 	techMap := map[string]string{
 		// Go
 		"gin-gonic/gin":    "Gin",
 		"labstack/echo":    "Echo",
 		"gorilla/mux":      "Gorilla Mux",
 		"gorm.io/gorm":     "GORM",
 		"github.com/redis": "Redis",
 		"go.mongodb.org":   "MongoDB",

 		// Python
 		"django": "Django",
@@ -851,13 +850,13 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
 		"torch": "PyTorch",

 		// JavaScript/TypeScript
 		"react":     "React",
 		"vue":       "Vue.js",
 		"angular":   "Angular",
 		"express":   "Express.js",
 		"axios":     "Axios",
 		"lodash":    "Lodash",
 		"moment":    "Moment.js",
 		"socket.io": "Socket.IO",
 	}
@@ -8,80 +8,79 @@ import (
 	"sync"
 	"time"

 	"chorus/pkg/crypto"
 	slurpContext "chorus/pkg/slurp/context"
 )

 // RoleAwareProcessor provides role-based context processing and insight generation
 type RoleAwareProcessor struct {
 	mu               sync.RWMutex
 	config           *EngineConfig
 	roleManager      *RoleManager
 	securityFilter   *SecurityFilter
 	insightGenerator *InsightGenerator
 	accessController *AccessController
 	auditLogger      *AuditLogger
 	permissions      *PermissionMatrix
-	roleProfiles     map[string]*RoleProfile
+	roleProfiles     map[string]*RoleBlueprint
 }

 // RoleManager manages role definitions and hierarchies
 type RoleManager struct {
 	roles        map[string]*Role
 	hierarchies  map[string]*RoleHierarchy
 	capabilities map[string]*RoleCapabilities
 	restrictions map[string]*RoleRestrictions
 }

 // Role represents an AI agent role with specific permissions and capabilities
 type Role struct {
 	ID             string                 `json:"id"`
 	Name           string                 `json:"name"`
 	Description    string                 `json:"description"`
 	SecurityLevel  int                    `json:"security_level"`
 	Capabilities   []string               `json:"capabilities"`
 	Restrictions   []string               `json:"restrictions"`
 	AccessPatterns []string               `json:"access_patterns"`
 	ContextFilters []string               `json:"context_filters"`
 	Priority       int                    `json:"priority"`
 	ParentRoles    []string               `json:"parent_roles"`
 	ChildRoles     []string               `json:"child_roles"`
 	Metadata       map[string]interface{} `json:"metadata"`
 	CreatedAt      time.Time              `json:"created_at"`
 	UpdatedAt      time.Time              `json:"updated_at"`
 	IsActive       bool                   `json:"is_active"`
 }

 // RoleHierarchy defines role inheritance and relationships
 type RoleHierarchy struct {
 	ParentRole    string   `json:"parent_role"`
 	ChildRoles    []string `json:"child_roles"`
 	InheritLevel  int      `json:"inherit_level"`
 	OverrideRules []string `json:"override_rules"`
 }

 // RoleCapabilities defines what a role can do
 type RoleCapabilities struct {
 	RoleID              string   `json:"role_id"`
 	ReadAccess          []string `json:"read_access"`
 	WriteAccess         []string `json:"write_access"`
 	ExecuteAccess       []string `json:"execute_access"`
 	AnalysisTypes       []string `json:"analysis_types"`
 	InsightLevels       []string `json:"insight_levels"`
 	SecurityScopes      []string `json:"security_scopes"`
 	DataClassifications []string `json:"data_classifications"`
 }

 // RoleRestrictions defines what a role cannot do or access
 type RoleRestrictions struct {
 	RoleID            string     `json:"role_id"`
 	ForbiddenPaths    []string   `json:"forbidden_paths"`
 	ForbiddenTypes    []string   `json:"forbidden_types"`
 	ForbiddenKeywords []string   `json:"forbidden_keywords"`
 	TimeRestrictions  []string   `json:"time_restrictions"`
 	RateLimit         *RateLimit `json:"rate_limit"`
 	MaxContextSize    int        `json:"max_context_size"`
 	MaxInsights       int        `json:"max_insights"`
 }

 // RateLimit defines rate limiting for role operations
@@ -111,9 +110,9 @@ type ContentFilter struct {

 // AccessMatrix defines access control rules
 type AccessMatrix struct {
 	Rules       map[string]*AccessRule `json:"rules"`
 	DefaultDeny bool                   `json:"default_deny"`
 	LastUpdated time.Time              `json:"last_updated"`
 }

 // AccessRule defines a specific access control rule
@@ -144,14 +143,14 @@ type RoleInsightGenerator interface {

 // InsightTemplate defines templates for generating insights
 type InsightTemplate struct {
 	TemplateID string                 `json:"template_id"`
 	Name       string                 `json:"name"`
 	Template   string                 `json:"template"`
 	Variables  []string               `json:"variables"`
 	Roles      []string               `json:"roles"`
 	Category   string                 `json:"category"`
 	Priority   int                    `json:"priority"`
 	Metadata   map[string]interface{} `json:"metadata"`
 }

 // InsightFilter filters insights based on role permissions
@@ -179,39 +178,39 @@ type PermissionMatrix struct {

 // RolePermissions defines permissions for a specific role
 type RolePermissions struct {
 	RoleID         string                 `json:"role_id"`
 	ContextAccess  *ContextAccessRights   `json:"context_access"`
 	AnalysisAccess *AnalysisAccessRights  `json:"analysis_access"`
 	InsightAccess  *InsightAccessRights   `json:"insight_access"`
 	SystemAccess   *SystemAccessRights    `json:"system_access"`
 	CustomAccess   map[string]interface{} `json:"custom_access"`
 }

 // ContextAccessRights defines context-related access rights
 type ContextAccessRights struct {
 	ReadLevel        int      `json:"read_level"`
 	WriteLevel       int      `json:"write_level"`
 	AllowedTypes     []string `json:"allowed_types"`
 	ForbiddenTypes   []string `json:"forbidden_types"`
 	PathRestrictions []string `json:"path_restrictions"`
 	SizeLimit        int      `json:"size_limit"`
 }

 // AnalysisAccessRights defines analysis-related access rights
 type AnalysisAccessRights struct {
 	AllowedAnalysisTypes []string      `json:"allowed_analysis_types"`
 	MaxComplexity        int           `json:"max_complexity"`
 	TimeoutLimit         time.Duration `json:"timeout_limit"`
 	ResourceLimit        int           `json:"resource_limit"`
 }

 // InsightAccessRights defines insight-related access rights
 type InsightAccessRights struct {
 	GenerationLevel     int      `json:"generation_level"`
 	AccessLevel         int      `json:"access_level"`
 	CategoryFilters     []string `json:"category_filters"`
 	ConfidenceThreshold float64  `json:"confidence_threshold"`
 	MaxInsights         int      `json:"max_insights"`
 }

 // SystemAccessRights defines system-level access rights
@@ -254,15 +253,15 @@ type AuditLogger struct {

 // AuditEntry represents an audit log entry
 type AuditEntry struct {
 	ID            string                 `json:"id"`
 	Timestamp     time.Time              `json:"timestamp"`
 	RoleID        string                 `json:"role_id"`
 	Action        string                 `json:"action"`
 	Resource      string                 `json:"resource"`
 	Result        string                 `json:"result"` // success, denied, error
 	Details       string                 `json:"details"`
 	Context       map[string]interface{} `json:"context"`
 	SecurityLevel int                    `json:"security_level"`
 }

 // AuditConfig defines audit logging configuration
@@ -276,49 +275,49 @@ type AuditConfig struct {
 }

 // RoleProfile contains comprehensive role configuration
-type RoleProfile struct {
-	Role           *Role               `json:"role"`
-	Capabilities   *RoleCapabilities   `json:"capabilities"`
-	Restrictions   *RoleRestrictions   `json:"restrictions"`
-	Permissions    *RolePermissions    `json:"permissions"`
-	InsightConfig  *RoleInsightConfig  `json:"insight_config"`
-	SecurityConfig *RoleSecurityConfig `json:"security_config"`
+type RoleBlueprint struct {
+	Role           *Role               `json:"role"`
+	Capabilities   *RoleCapabilities   `json:"capabilities"`
+	Restrictions   *RoleRestrictions   `json:"restrictions"`
+	Permissions    *RolePermissions    `json:"permissions"`
+	InsightConfig  *RoleInsightConfig  `json:"insight_config"`
+	SecurityConfig *RoleSecurityConfig `json:"security_config"`
 }

 // RoleInsightConfig defines insight generation configuration for a role
 type RoleInsightConfig struct {
 	EnabledGenerators   []string           `json:"enabled_generators"`
 	MaxInsights         int                `json:"max_insights"`
 	ConfidenceThreshold float64            `json:"confidence_threshold"`
 	CategoryWeights     map[string]float64 `json:"category_weights"`
 	CustomFilters       []string           `json:"custom_filters"`
 }

 // RoleSecurityConfig defines security configuration for a role
 type RoleSecurityConfig struct {
 	EncryptionRequired bool       `json:"encryption_required"`
 	AccessLogging      bool       `json:"access_logging"`
 	RateLimit          *RateLimit `json:"rate_limit"`
 	IPWhitelist        []string   `json:"ip_whitelist"`
 	RequiredClaims     []string   `json:"required_claims"`
 }

 // RoleSpecificInsight represents an insight tailored to a specific role
 type RoleSpecificInsight struct {
 	ID            string                 `json:"id"`
 	RoleID        string                 `json:"role_id"`
 	Category      string                 `json:"category"`
 	Title         string                 `json:"title"`
 	Content       string                 `json:"content"`
 	Confidence    float64                `json:"confidence"`
 	Priority      int                    `json:"priority"`
 	SecurityLevel int                    `json:"security_level"`
 	Tags          []string               `json:"tags"`
 	ActionItems   []string               `json:"action_items"`
 	References    []string               `json:"references"`
 	Metadata      map[string]interface{} `json:"metadata"`
 	GeneratedAt   time.Time              `json:"generated_at"`
 	ExpiresAt     *time.Time             `json:"expires_at,omitempty"`
 }

 // NewRoleAwareProcessor creates a new role-aware processor
@@ -331,7 +330,7 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
 		accessController: NewAccessController(),
 		auditLogger:      NewAuditLogger(),
 		permissions:      NewPermissionMatrix(),
-		roleProfiles:     make(map[string]*RoleProfile),
+		roleProfiles:     make(map[string]*RoleBlueprint),
 	}

 	// Initialize default roles
@@ -342,10 +341,10 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
 // NewRoleManager creates a role manager with default roles
 func NewRoleManager() *RoleManager {
 	rm := &RoleManager{
 		roles:        make(map[string]*Role),
 		hierarchies:  make(map[string]*RoleHierarchy),
 		capabilities: make(map[string]*RoleCapabilities),
 		restrictions: make(map[string]*RoleRestrictions),
 	}

 	// Initialize with default roles
@@ -383,8 +382,11 @@ func (rap *RoleAwareProcessor) ProcessContextForRole(ctx context.Context, node *

 	// Apply insights to node
 	if len(insights) > 0 {
-		filteredNode.RoleSpecificInsights = insights
-		filteredNode.ProcessedForRole = roleID
+		if filteredNode.Metadata == nil {
+			filteredNode.Metadata = make(map[string]interface{})
+		}
+		filteredNode.Metadata["role_specific_insights"] = insights
+		filteredNode.Metadata["processed_for_role"] = roleID
 	}

 	// Log successful processing
@@ -448,69 +450,69 @@ func (rap *RoleAwareProcessor) GetRoleCapabilities(roleID string) (*RoleCapabili
 func (rap *RoleAwareProcessor) initializeDefaultRoles() {
 	defaultRoles := []*Role{
 		{
 			ID:             "architect",
 			Name:           "System Architect",
 			Description:    "High-level system design and architecture decisions",
 			SecurityLevel:  8,
 			Capabilities:   []string{"architecture_design", "high_level_analysis", "strategic_planning"},
 			Restrictions:   []string{"no_implementation_details", "no_low_level_code"},
 			AccessPatterns: []string{"architecture/**", "design/**", "docs/**"},
 			Priority:       1,
 			IsActive:       true,
 			CreatedAt:      time.Now(),
 		},
 		{
 			ID:             "developer",
 			Name:           "Software Developer",
 			Description:    "Code implementation and development tasks",
 			SecurityLevel:  6,
 			Capabilities:   []string{"code_analysis", "implementation", "debugging", "testing"},
 			Restrictions:   []string{"no_architecture_changes", "no_security_config"},
 			AccessPatterns: []string{"src/**", "lib/**", "test/**"},
 			Priority:       2,
 			IsActive:       true,
 			CreatedAt:      time.Now(),
 		},
 		{
 			ID:             "security_analyst",
 			Name:           "Security Analyst",
 			Description:    "Security analysis and vulnerability assessment",
 			SecurityLevel:  9,
 			Capabilities:   []string{"security_analysis", "vulnerability_assessment", "compliance_check"},
 			Restrictions:   []string{"no_code_modification"},
 			AccessPatterns: []string{"**/*"},
 			Priority:       1,
 			IsActive:       true,
 			CreatedAt:      time.Now(),
 		},
 		{
 			ID:             "devops_engineer",
 			Name:           "DevOps Engineer",
 			Description:    "Infrastructure and deployment operations",
 			SecurityLevel:  7,
 			Capabilities:   []string{"infrastructure_analysis", "deployment", "monitoring", "ci_cd"},
 			Restrictions:   []string{"no_business_logic"},
 			AccessPatterns: []string{"infra/**", "deploy/**", "config/**", "docker/**"},
 			Priority:       2,
 			IsActive:       true,
 			CreatedAt:      time.Now(),
 		},
 		{
 			ID:             "qa_engineer",
 			Name:           "Quality Assurance Engineer",
 			Description:    "Quality assurance and testing",
 			SecurityLevel:  5,
 			Capabilities:   []string{"quality_analysis", "testing", "test_planning"},
 			Restrictions:   []string{"no_production_access", "no_code_modification"},
 			AccessPatterns: []string{"test/**", "spec/**", "qa/**"},
 			Priority:       3,
 			IsActive:       true,
 			CreatedAt:      time.Now(),
 		},
 	}

 	for _, role := range defaultRoles {
-		rap.roleProfiles[role.ID] = &RoleProfile{
+		rap.roleProfiles[role.ID] = &RoleBlueprint{
|
||||
Role: role,
|
||||
Capabilities: rap.createDefaultCapabilities(role),
|
||||
Restrictions: rap.createDefaultRestrictions(role),
|
||||
@@ -615,10 +617,10 @@ func (rap *RoleAwareProcessor) createDefaultPermissions(role *Role) *RolePermiss
	return &RolePermissions{
		RoleID: role.ID,
		ContextAccess: &ContextAccessRights{
			ReadLevel:    role.SecurityLevel,
			WriteLevel:   role.SecurityLevel - 2,
			AllowedTypes: []string{"code", "documentation", "configuration"},
			SizeLimit:    1000000,
		},
		AnalysisAccess: &AnalysisAccessRights{
			AllowedAnalysisTypes: role.Capabilities,
@@ -627,10 +629,10 @@ func (rap *RoleAwareProcessor) createDefaultPermissions(role *Role) *RolePermiss
			ResourceLimit: 100,
		},
		InsightAccess: &InsightAccessRights{
			GenerationLevel:     role.SecurityLevel,
			AccessLevel:         role.SecurityLevel,
			ConfidenceThreshold: 0.5,
			MaxInsights:         50,
		},
		SystemAccess: &SystemAccessRights{
			AdminAccess: role.SecurityLevel >= 8,
@@ -664,19 +666,19 @@ func (rap *RoleAwareProcessor) createDefaultInsightConfig(role *Role) *RoleInsig
	case "developer":
		config.EnabledGenerators = []string{"code_insights", "implementation_suggestions", "bug_detection"}
		config.CategoryWeights = map[string]float64{
			"code_quality":   1.0,
			"implementation": 0.9,
			"bugs":           0.8,
			"performance":    0.6,
		}

	case "security_analyst":
		config.EnabledGenerators = []string{"security_insights", "vulnerability_analysis", "compliance_check"}
		config.CategoryWeights = map[string]float64{
			"security":        1.0,
			"vulnerabilities": 1.0,
			"compliance":      0.9,
			"privacy":         0.8,
		}
		config.MaxInsights = 200

@@ -751,7 +753,7 @@ func NewSecurityFilter() *SecurityFilter {
			"top_secret": 10,
		},
		contentFilters: make(map[string]*ContentFilter),
		accessMatrix: &AccessMatrix{
			Rules:       make(map[string]*AccessRule),
			DefaultDeny: true,
			LastUpdated: time.Now(),
@@ -1174,6 +1176,7 @@ func (al *AuditLogger) GetAuditLog(limit int) []*AuditEntry {
// These would be fully implemented with sophisticated logic in production

type ArchitectInsightGenerator struct{}

func NewArchitectInsightGenerator() *ArchitectInsightGenerator { return &ArchitectInsightGenerator{} }
func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
	return []*RoleSpecificInsight{
@@ -1191,10 +1194,15 @@ func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node
	}, nil
}
func (aig *ArchitectInsightGenerator) GetSupportedRoles() []string { return []string{"architect"} }
func (aig *ArchitectInsightGenerator) GetInsightTypes() []string {
	return []string{"architecture", "design", "patterns"}
}
func (aig *ArchitectInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
	return nil
}

type DeveloperInsightGenerator struct{}

func NewDeveloperInsightGenerator() *DeveloperInsightGenerator { return &DeveloperInsightGenerator{} }
func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
	return []*RoleSpecificInsight{
@@ -1212,10 +1220,15 @@ func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node
	}, nil
}
func (dig *DeveloperInsightGenerator) GetSupportedRoles() []string { return []string{"developer"} }
func (dig *DeveloperInsightGenerator) GetInsightTypes() []string {
	return []string{"code_quality", "implementation", "bugs"}
}
func (dig *DeveloperInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
	return nil
}

type SecurityInsightGenerator struct{}

func NewSecurityInsightGenerator() *SecurityInsightGenerator { return &SecurityInsightGenerator{} }
func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
	return []*RoleSpecificInsight{
@@ -1232,11 +1245,18 @@ func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node
		},
	}, nil
}
func (sig *SecurityInsightGenerator) GetSupportedRoles() []string {
	return []string{"security_analyst"}
}
func (sig *SecurityInsightGenerator) GetInsightTypes() []string {
	return []string{"security", "vulnerability", "compliance"}
}
func (sig *SecurityInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
	return nil
}

type DevOpsInsightGenerator struct{}

func NewDevOpsInsightGenerator() *DevOpsInsightGenerator { return &DevOpsInsightGenerator{} }
func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
	return []*RoleSpecificInsight{
@@ -1254,10 +1274,15 @@ func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *
	}, nil
}
func (doig *DevOpsInsightGenerator) GetSupportedRoles() []string { return []string{"devops_engineer"} }
func (doig *DevOpsInsightGenerator) GetInsightTypes() []string {
	return []string{"infrastructure", "deployment", "monitoring"}
}
func (doig *DevOpsInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
	return nil
}

type QAInsightGenerator struct{}

func NewQAInsightGenerator() *QAInsightGenerator { return &QAInsightGenerator{} }
func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
	return []*RoleSpecificInsight{
@@ -1275,5 +1300,9 @@ func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slur
	}, nil
}
func (qaig *QAInsightGenerator) GetSupportedRoles() []string { return []string{"qa_engineer"} }
func (qaig *QAInsightGenerator) GetInsightTypes() []string {
	return []string{"quality", "testing", "validation"}
}
func (qaig *QAInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
	return nil
}
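Each generator above declares the roles it supports via `GetSupportedRoles`, which lets a processor dispatch by role ID. A trimmed-down sketch of that dispatch pattern — `InsightGenerator` here is a stand-in for the real interface (which also takes a context node and returns rich insight structs), and `generatorsForRole` is an illustrative helper:

```go
package main

import "fmt"

// InsightGenerator is a stand-in for the interface the generators
// implement; only the role/type accessors are needed for dispatch.
type InsightGenerator interface {
	GetSupportedRoles() []string
	GetInsightTypes() []string
}

type qaGenerator struct{}

func (qaGenerator) GetSupportedRoles() []string { return []string{"qa_engineer"} }
func (qaGenerator) GetInsightTypes() []string   { return []string{"quality", "testing", "validation"} }

// generatorsForRole filters a registry down to the generators that
// declare support for the given role ID.
func generatorsForRole(registry []InsightGenerator, roleID string) []InsightGenerator {
	var out []InsightGenerator
	for _, g := range registry {
		for _, r := range g.GetSupportedRoles() {
			if r == roleID {
				out = append(out, g)
				break
			}
		}
	}
	return out
}

func main() {
	registry := []InsightGenerator{qaGenerator{}}
	fmt.Println(len(generatorsForRole(registry, "qa_engineer"))) // 1
	fmt.Println(len(generatorsForRole(registry, "architect")))   // 0
}
```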
@@ -6,236 +6,236 @@ import (

// FileMetadata represents metadata extracted from file system
type FileMetadata struct {
	Path        string    `json:"path"`        // File path
	Size        int64     `json:"size"`        // File size in bytes
	ModTime     time.Time `json:"mod_time"`    // Last modification time
	Mode        uint32    `json:"mode"`        // File mode
	IsDir       bool      `json:"is_dir"`      // Whether it's a directory
	Extension   string    `json:"extension"`   // File extension
	MimeType    string    `json:"mime_type"`   // MIME type
	Hash        string    `json:"hash"`        // Content hash
	Permissions string    `json:"permissions"` // File permissions
}

// StructureAnalysis represents analysis of code structure
type StructureAnalysis struct {
	Architecture   string             `json:"architecture"`    // Architectural pattern
	Patterns       []string           `json:"patterns"`        // Design patterns used
	Components     []*Component       `json:"components"`      // Code components
	Relationships  []*Relationship    `json:"relationships"`   // Component relationships
	Complexity     *ComplexityMetrics `json:"complexity"`      // Complexity metrics
	QualityMetrics *QualityMetrics    `json:"quality_metrics"` // Code quality metrics
	TestCoverage   float64            `json:"test_coverage"`   // Test coverage percentage
	Documentation  *DocMetrics        `json:"documentation"`   // Documentation metrics
	AnalyzedAt     time.Time          `json:"analyzed_at"`     // When analysis was performed
}

// Component represents a code component
type Component struct {
	Name         string                 `json:"name"`         // Component name
	Type         string                 `json:"type"`         // Component type (class, function, etc.)
	Purpose      string                 `json:"purpose"`      // Component purpose
	Visibility   string                 `json:"visibility"`   // Visibility (public, private, etc.)
	Lines        int                    `json:"lines"`        // Lines of code
	Complexity   int                    `json:"complexity"`   // Cyclomatic complexity
	Dependencies []string               `json:"dependencies"` // Dependencies
	Metadata     map[string]interface{} `json:"metadata"`     // Additional metadata
}

// Relationship represents a relationship between components
type Relationship struct {
	From        string  `json:"from"`        // Source component
	To          string  `json:"to"`          // Target component
	Type        string  `json:"type"`        // Relationship type
	Strength    float64 `json:"strength"`    // Relationship strength (0-1)
	Direction   string  `json:"direction"`   // Direction (unidirectional, bidirectional)
	Description string  `json:"description"` // Relationship description
}

// ComplexityMetrics represents code complexity metrics
type ComplexityMetrics struct {
	Cyclomatic      float64 `json:"cyclomatic"`      // Cyclomatic complexity
	Cognitive       float64 `json:"cognitive"`       // Cognitive complexity
	Halstead        float64 `json:"halstead"`        // Halstead complexity
	Maintainability float64 `json:"maintainability"` // Maintainability index
	TechnicalDebt   float64 `json:"technical_debt"`  // Technical debt estimate
}
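The `Maintainability` field of `ComplexityMetrics` sits alongside Halstead volume, cyclomatic complexity, and line counts, which are exactly the inputs of the classic maintainability-index formula. Whether SLURP computes the field this way is not shown in this diff; the sketch below is only an illustration of how these metrics typically relate, using the published formula rescaled to 0-100:

```go
package main

import (
	"fmt"
	"math"
)

// maintainabilityIndex computes the classic maintainability index from
// Halstead volume, cyclomatic complexity, and lines of code, rescaled
// to the 0-100 range and clamped at zero. Illustrative only; not taken
// from the SLURP codebase.
func maintainabilityIndex(halsteadVolume, cyclomatic, loc float64) float64 {
	mi := 171 - 5.2*math.Log(halsteadVolume) - 0.23*cyclomatic - 16.2*math.Log(loc)
	return math.Max(0, mi*100/171)
}

func main() {
	// A mid-sized file: volume 1000, complexity 10, 200 lines.
	fmt.Printf("%.1f\n", maintainabilityIndex(1000, 10, 200))
}
```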
// QualityMetrics represents code quality metrics
type QualityMetrics struct {
	Readability float64 `json:"readability"` // Readability score
	Testability float64 `json:"testability"` // Testability score
	Reusability float64 `json:"reusability"` // Reusability score
	Reliability float64 `json:"reliability"` // Reliability score
	Security    float64 `json:"security"`    // Security score
	Performance float64 `json:"performance"` // Performance score
	Duplication float64 `json:"duplication"` // Code duplication percentage
	Consistency float64 `json:"consistency"` // Code consistency score
}

// DocMetrics represents documentation metrics
type DocMetrics struct {
	Coverage       float64 `json:"coverage"`         // Documentation coverage
	Quality        float64 `json:"quality"`          // Documentation quality
	CommentRatio   float64 `json:"comment_ratio"`    // Comment to code ratio
	APIDocCoverage float64 `json:"api_doc_coverage"` // API documentation coverage
	ExampleCount   int     `json:"example_count"`    // Number of examples
	TODOCount      int     `json:"todo_count"`       // Number of TODO comments
	FIXMECount     int     `json:"fixme_count"`      // Number of FIXME comments
}

// DirectoryStructure represents analysis of directory organization
type DirectoryStructure struct {
	Path           string            `json:"path"`            // Directory path
	FileCount      int               `json:"file_count"`      // Number of files
	DirectoryCount int               `json:"directory_count"` // Number of subdirectories
	TotalSize      int64             `json:"total_size"`      // Total size in bytes
	FileTypes      map[string]int    `json:"file_types"`      // File type distribution
	Languages      map[string]int    `json:"languages"`       // Language distribution
	Organization   *OrganizationInfo `json:"organization"`    // Organization information
	Conventions    *ConventionInfo   `json:"conventions"`     // Convention information
	Dependencies   []string          `json:"dependencies"`    // Directory dependencies
	Purpose        string            `json:"purpose"`         // Directory purpose
	Architecture   string            `json:"architecture"`    // Architectural pattern
	AnalyzedAt     time.Time         `json:"analyzed_at"`     // When analysis was performed
}

// OrganizationInfo represents directory organization information
type OrganizationInfo struct {
	Pattern     string                 `json:"pattern"`     // Organization pattern
	Consistency float64                `json:"consistency"` // Organization consistency
	Depth       int                    `json:"depth"`       // Directory depth
	FanOut      int                    `json:"fan_out"`     // Average fan-out
	Modularity  float64                `json:"modularity"`  // Modularity score
	Cohesion    float64                `json:"cohesion"`    // Cohesion score
	Coupling    float64                `json:"coupling"`    // Coupling score
	Metadata    map[string]interface{} `json:"metadata"`    // Additional metadata
}

// ConventionInfo represents naming and organizational conventions
type ConventionInfo struct {
	NamingStyle     string       `json:"naming_style"`     // Naming convention style
	FileNaming      string       `json:"file_naming"`      // File naming pattern
	DirectoryNaming string       `json:"directory_naming"` // Directory naming pattern
	Consistency     float64      `json:"consistency"`      // Convention consistency
	Violations      []*Violation `json:"violations"`       // Convention violations
	Standards       []string     `json:"standards"`        // Applied standards
}

// Violation represents a convention violation
type Violation struct {
	Type       string `json:"type"`       // Violation type
	Path       string `json:"path"`       // Violating path
	Expected   string `json:"expected"`   // Expected format
	Actual     string `json:"actual"`     // Actual format
	Severity   string `json:"severity"`   // Violation severity
	Suggestion string `json:"suggestion"` // Suggested fix
}
// ConventionAnalysis represents analysis of naming and organizational conventions
type ConventionAnalysis struct {
	NamingPatterns         []*NamingPattern         `json:"naming_patterns"`         // Detected naming patterns
	OrganizationalPatterns []*OrganizationalPattern `json:"organizational_patterns"` // Organizational patterns
	Consistency            float64                  `json:"consistency"`             // Overall consistency score
	Violations             []*Violation             `json:"violations"`              // Convention violations
	Recommendations        []*BasicRecommendation   `json:"recommendations"`         // Improvement recommendations
	AppliedStandards       []string                 `json:"applied_standards"`       // Applied coding standards
	AnalyzedAt             time.Time                `json:"analyzed_at"`             // When analysis was performed
}

// RelationshipAnalysis represents analysis of directory relationships
type RelationshipAnalysis struct {
	Dependencies       []*DirectoryDependency `json:"dependencies"`        // Directory dependencies
	Relationships      []*DirectoryRelation   `json:"relationships"`       // Directory relationships
	CouplingMetrics    *CouplingMetrics       `json:"coupling_metrics"`    // Coupling metrics
	ModularityScore    float64                `json:"modularity_score"`    // Modularity score
	ArchitecturalStyle string                 `json:"architectural_style"` // Architectural style
	AnalyzedAt         time.Time              `json:"analyzed_at"`         // When analysis was performed
}

// DirectoryDependency represents a dependency between directories
type DirectoryDependency struct {
	From      string  `json:"from"`       // Source directory
	To        string  `json:"to"`         // Target directory
	Type      string  `json:"type"`       // Dependency type
	Strength  float64 `json:"strength"`   // Dependency strength
	Reason    string  `json:"reason"`     // Reason for dependency
	FileCount int     `json:"file_count"` // Number of files involved
}

// DirectoryRelation represents a relationship between directories
type DirectoryRelation struct {
	Directory1    string  `json:"directory1"`    // First directory
	Directory2    string  `json:"directory2"`    // Second directory
	Type          string  `json:"type"`          // Relation type
	Strength      float64 `json:"strength"`      // Relation strength
	Description   string  `json:"description"`   // Relation description
	Bidirectional bool    `json:"bidirectional"` // Whether relation is bidirectional
}

// CouplingMetrics represents coupling metrics between directories
type CouplingMetrics struct {
	AfferentCoupling float64 `json:"afferent_coupling"`  // Afferent coupling
	EfferentCoupling float64 `json:"efferent_coupling"`  // Efferent coupling
	Instability      float64 `json:"instability"`        // Instability metric
	Abstractness     float64 `json:"abstractness"`       // Abstractness metric
	DistanceFromMain float64 `json:"distance_from_main"` // Distance from main sequence
}
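The `Instability`, `Abstractness`, and `DistanceFromMain` fields of `CouplingMetrics` correspond to Robert C. Martin's package-coupling metrics: instability I = Ce/(Ca+Ce) and distance from the main sequence D = |A + I - 1|. A small sketch of how those two derived fields are conventionally computed from the afferent/efferent counts (function names here are illustrative, not from the SLURP codebase):

```go
package main

import (
	"fmt"
	"math"
)

// instability computes I = Ce / (Ca + Ce), guarding against the case
// where a directory has no dependencies in either direction.
func instability(afferent, efferent float64) float64 {
	if afferent+efferent == 0 {
		return 0
	}
	return efferent / (afferent + efferent)
}

// distanceFromMain computes D = |A + I - 1|, the distance from the
// "main sequence" balancing abstractness against instability.
func distanceFromMain(abstractness, instab float64) float64 {
	return math.Abs(abstractness + instab - 1)
}

func main() {
	i := instability(3, 1) // 1 / (3 + 1) = 0.25
	fmt.Println(i)
	fmt.Println(distanceFromMain(0.75, i)) // |0.75 + 0.25 - 1| = 0
}
```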
|
||||
// Pattern represents a detected pattern in code or organization
|
||||
type Pattern struct {
|
||||
ID string `json:"id"` // Pattern identifier
|
||||
Name string `json:"name"` // Pattern name
|
||||
Type string `json:"type"` // Pattern type
|
||||
Description string `json:"description"` // Pattern description
|
||||
Confidence float64 `json:"confidence"` // Detection confidence
|
||||
Frequency int `json:"frequency"` // Pattern frequency
|
||||
Examples []string `json:"examples"` // Example instances
|
||||
Criteria map[string]interface{} `json:"criteria"` // Pattern criteria
|
||||
Benefits []string `json:"benefits"` // Pattern benefits
|
||||
Drawbacks []string `json:"drawbacks"` // Pattern drawbacks
|
||||
ApplicableRoles []string `json:"applicable_roles"` // Roles that benefit from this pattern
|
||||
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
|
||||
ID string `json:"id"` // Pattern identifier
|
||||
Name string `json:"name"` // Pattern name
|
||||
Type string `json:"type"` // Pattern type
|
||||
Description string `json:"description"` // Pattern description
|
||||
Confidence float64 `json:"confidence"` // Detection confidence
|
||||
Frequency int `json:"frequency"` // Pattern frequency
|
||||
Examples []string `json:"examples"` // Example instances
|
||||
Criteria map[string]interface{} `json:"criteria"` // Pattern criteria
|
||||
Benefits []string `json:"benefits"` // Pattern benefits
|
||||
Drawbacks []string `json:"drawbacks"` // Pattern drawbacks
|
||||
ApplicableRoles []string `json:"applicable_roles"` // Roles that benefit from this pattern
|
||||
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
|
||||
}
|
||||
|
||||
// CodePattern represents a code-specific pattern
|
||||
type CodePattern struct {
|
||||
Pattern // Embedded base pattern
|
||||
Language string `json:"language"` // Programming language
|
||||
Framework string `json:"framework"` // Framework context
|
||||
Complexity float64 `json:"complexity"` // Pattern complexity
|
||||
Usage *UsagePattern `json:"usage"` // Usage pattern
|
||||
Performance *PerformanceInfo `json:"performance"` // Performance characteristics
|
||||
Pattern // Embedded base pattern
|
||||
Language string `json:"language"` // Programming language
|
||||
Framework string `json:"framework"` // Framework context
|
||||
Complexity float64 `json:"complexity"` // Pattern complexity
|
||||
Usage *UsagePattern `json:"usage"` // Usage pattern
|
||||
Performance *PerformanceInfo `json:"performance"` // Performance characteristics
|
||||
}
|
||||
|
||||
// NamingPattern represents a naming convention pattern
type NamingPattern struct {
	Pattern           // Embedded base pattern
	Convention string `json:"convention"` // Naming convention
	Scope      string `json:"scope"`      // Pattern scope
	Regex      string `json:"regex"`      // Regex pattern
	CaseStyle  string `json:"case_style"` // Case style (camelCase, snake_case, etc.)
	Prefix     string `json:"prefix"`     // Common prefix
	Suffix     string `json:"suffix"`     // Common suffix
}

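A NamingPattern stores its convention as a compilable regular expression in the Regex field. A minimal standalone sketch (the `matches` helper and lowercase mirror type are hypothetical, not part of this package) of how that field might be evaluated against an identifier:

```go
package main

import (
	"fmt"
	"regexp"
)

// namingPattern mirrors only the fields of NamingPattern used here.
type namingPattern struct {
	Convention string
	Regex      string
}

// matches reports whether an identifier satisfies the stored regex.
func (np namingPattern) matches(identifier string) bool {
	re, err := regexp.Compile(np.Regex)
	if err != nil {
		return false // an invalid stored regex simply never matches
	}
	return re.MatchString(identifier)
}

func main() {
	snake := namingPattern{Convention: "snake_case", Regex: `^[a-z]+(_[a-z0-9]+)*$`}
	fmt.Println(snake.matches("user_id")) // true
	fmt.Println(snake.matches("UserID"))  // false
}
```

Compiling on every call is wasteful for hot paths; a real implementation would likely cache the compiled `*regexp.Regexp` alongside the pattern.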
// OrganizationalPattern represents an organizational pattern
type OrganizationalPattern struct {
	Pattern             // Embedded base pattern
	Structure   string  `json:"structure"`   // Organizational structure
	Depth       int     `json:"depth"`       // Typical depth
	FanOut      int     `json:"fan_out"`     // Typical fan-out
	Modularity  float64 `json:"modularity"`  // Modularity characteristics
	Scalability string  `json:"scalability"` // Scalability characteristics
}

// UsagePattern represents how a pattern is typically used
type UsagePattern struct {
	Frequency     string            `json:"frequency"`     // Usage frequency
	Context       []string          `json:"context"`       // Usage contexts
	Prerequisites []string          `json:"prerequisites"` // Prerequisites
	Alternatives  []string          `json:"alternatives"`  // Alternative patterns
	Compatibility map[string]string `json:"compatibility"` // Compatibility with other patterns
}

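The Compatibility map keys other pattern identifiers to a compatibility descriptor. A small sketch of a lookup helper (the `compatibleWith` method and the "discouraged" value are illustrative assumptions, not defined by this package):

```go
package main

import "fmt"

// usagePattern mirrors only the Compatibility field of UsagePattern.
type usagePattern struct {
	Compatibility map[string]string
}

// compatibleWith returns the recorded compatibility level for another
// pattern ID, and whether any relationship is recorded at all.
func (up usagePattern) compatibleWith(patternID string) (string, bool) {
	level, ok := up.Compatibility[patternID]
	return level, ok
}

func main() {
	up := usagePattern{Compatibility: map[string]string{"singleton": "discouraged"}}
	if level, ok := up.compatibleWith("singleton"); ok {
		fmt.Println(level) // discouraged
	}
	_, ok := up.compatibleWith("factory")
	fmt.Println(ok) // false
}
```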
// PerformanceInfo represents performance characteristics of a pattern

@@ -249,12 +249,12 @@ type PerformanceInfo struct {

// PatternMatch represents a match between context and a pattern
type PatternMatch struct {
	PatternID     string   `json:"pattern_id"`     // Pattern identifier
	MatchScore    float64  `json:"match_score"`    // Match score (0-1)
	Confidence    float64  `json:"confidence"`     // Match confidence
	MatchedFields []string `json:"matched_fields"` // Fields that matched
	Explanation   string   `json:"explanation"`    // Match explanation
	Suggestions   []string `json:"suggestions"`    // Improvement suggestions
}

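Since PatternMatch carries both a raw match score and a confidence, callers typically need an ordering over candidate matches. One plausible ranking (an assumption for illustration; the package does not prescribe a scoring rule) is the product of the two values:

```go
package main

import (
	"fmt"
	"sort"
)

// patternMatch mirrors the scoring fields of PatternMatch.
type patternMatch struct {
	PatternID  string
	MatchScore float64
	Confidence float64
}

// rankMatches returns a copy of matches ordered best-first by
// MatchScore weighted by Confidence.
func rankMatches(matches []patternMatch) []patternMatch {
	ranked := append([]patternMatch(nil), matches...)
	sort.SliceStable(ranked, func(i, j int) bool {
		return ranked[i].MatchScore*ranked[i].Confidence >
			ranked[j].MatchScore*ranked[j].Confidence
	})
	return ranked
}

func main() {
	ranked := rankMatches([]patternMatch{
		{PatternID: "mvc", MatchScore: 0.9, Confidence: 0.5},  // weighted 0.45
		{PatternID: "repo", MatchScore: 0.7, Confidence: 0.9}, // weighted 0.63
	})
	fmt.Println(ranked[0].PatternID) // repo
}
```

`sort.SliceStable` keeps the input order for ties, which makes the ranking deterministic when two candidates score identically.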
// ValidationResult represents context validation results

@@ -269,12 +269,12 @@ type ValidationResult struct {

// ValidationIssue represents a validation issue
type ValidationIssue struct {
	Type       string  `json:"type"`       // Issue type
	Severity   string  `json:"severity"`   // Issue severity
	Message    string  `json:"message"`    // Issue message
	Field      string  `json:"field"`      // Affected field
	Suggestion string  `json:"suggestion"` // Suggested fix
	Impact     float64 `json:"impact"`     // Impact score
}

// Suggestion represents an improvement suggestion

@@ -289,61 +289,61 @@ type Suggestion struct {
}

// BasicRecommendation represents an improvement recommendation
type BasicRecommendation struct {
	Type        string                 `json:"type"`        // Recommendation type
	Title       string                 `json:"title"`       // Recommendation title
	Description string                 `json:"description"` // Detailed description
	Priority    int                    `json:"priority"`    // Priority level
	Effort      string                 `json:"effort"`      // Effort required
	Impact      string                 `json:"impact"`      // Expected impact
	Steps       []string               `json:"steps"`       // Implementation steps
	Resources   []string               `json:"resources"`   // Required resources
	Metadata    map[string]interface{} `json:"metadata"`    // Additional metadata
}

// RAGResponse represents a response from the RAG system
type RAGResponse struct {
	Query       string                 `json:"query"`        // Original query
	Answer      string                 `json:"answer"`       // Generated answer
	Sources     []*RAGSource           `json:"sources"`      // Source documents
	Confidence  float64                `json:"confidence"`   // Response confidence
	Context     map[string]interface{} `json:"context"`      // Additional context
	ProcessedAt time.Time              `json:"processed_at"` // When processed
}

// RAGSource represents a source document from RAG system
type RAGSource struct {
	ID       string                 `json:"id"`       // Source identifier
	Title    string                 `json:"title"`    // Source title
	Content  string                 `json:"content"`  // Source content excerpt
	Score    float64                `json:"score"`    // Relevance score
	Metadata map[string]interface{} `json:"metadata"` // Source metadata
	URL      string                 `json:"url"`      // Source URL if available
}

// RAGResult represents a result from RAG similarity search
type RAGResult struct {
	ID         string                 `json:"id"`         // Result identifier
	Content    string                 `json:"content"`    // Content
	Score      float64                `json:"score"`      // Similarity score
	Metadata   map[string]interface{} `json:"metadata"`   // Result metadata
	Highlights []string               `json:"highlights"` // Content highlights
}

// RAGUpdate represents an update to the RAG index
type RAGUpdate struct {
	ID        string                 `json:"id"`        // Document identifier
	Content   string                 `json:"content"`   // Document content
	Metadata  map[string]interface{} `json:"metadata"`  // Document metadata
	Operation string                 `json:"operation"` // Operation type (add, update, delete)
}

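The Operation field names three mutations: add, update, delete. A sketch of dispatching a batch of updates against an in-memory index (the map-backed index and `applyUpdates` helper are assumptions for illustration, not the real RAG backend):

```go
package main

import "fmt"

// ragUpdate mirrors the fields of RAGUpdate used here.
type ragUpdate struct {
	ID        string
	Content   string
	Operation string // "add", "update", or "delete"
}

// applyUpdates mutates the index in order; unknown operations abort the batch.
func applyUpdates(index map[string]string, updates []ragUpdate) error {
	for _, u := range updates {
		switch u.Operation {
		case "add", "update":
			index[u.ID] = u.Content
		case "delete":
			delete(index, u.ID)
		default:
			return fmt.Errorf("unknown operation %q for document %s", u.Operation, u.ID)
		}
	}
	return nil
}

func main() {
	index := map[string]string{}
	err := applyUpdates(index, []ragUpdate{
		{ID: "doc1", Content: "hello", Operation: "add"},
		{ID: "doc1", Operation: "delete"},
	})
	fmt.Println(err, len(index)) // <nil> 0
}
```

Treating add and update identically is a common upsert simplification; a real index would also need to re-embed the document content on update.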
// RAGStatistics represents RAG system statistics
type RAGStatistics struct {
	TotalDocuments   int64         `json:"total_documents"`    // Total indexed documents
	TotalQueries     int64         `json:"total_queries"`      // Total queries processed
	AverageQueryTime time.Duration `json:"average_query_time"` // Average query time
	IndexSize        int64         `json:"index_size"`         // Index size in bytes
	LastIndexUpdate  time.Time     `json:"last_index_update"`  // When index was last updated
	ErrorRate        float64       `json:"error_rate"`         // Error rate
}

@@ -282,25 +282,25 @@ func (cau *ContentAnalysisUtils) DetectTechnologies(content, filename string) []

	// Language detection
	languageMap := map[string][]string{
		".go":    {"go", "golang"},
		".py":    {"python"},
		".js":    {"javascript", "node.js"},
		".jsx":   {"javascript", "react", "jsx"},
		".ts":    {"typescript"},
		".tsx":   {"typescript", "react", "jsx"},
		".java":  {"java"},
		".kt":    {"kotlin"},
		".rs":    {"rust"},
		".cpp":   {"c++"},
		".c":     {"c"},
		".cs":    {"c#", ".net"},
		".php":   {"php"},
		".rb":    {"ruby"},
		".swift": {"swift"},
		".scala": {"scala"},
		".clj":   {"clojure"},
		".hs":    {"haskell"},
		".ml":    {"ocaml"},
	}

	if langs, exists := languageMap[ext]; exists {

@@ -309,34 +309,34 @@ func (cau *ContentAnalysisUtils) DetectTechnologies(content, filename string) []

	// Framework and library detection
	frameworkPatterns := map[string][]string{
		"react":         {"import.*react", "from [\"']react[\"']", "<.*/>", "jsx"},
		"vue":           {"import.*vue", "from [\"']vue[\"']", "<template>", "vue"},
		"angular":       {"import.*@angular", "from [\"']@angular", "ngmodule", "component"},
		"express":       {"import.*express", "require.*express", "app.get", "app.post"},
		"django":        {"from django", "import django", "django.db", "models.model"},
		"flask":         {"from flask", "import flask", "@app.route", "flask.request"},
		"spring":        {"@springboot", "@controller", "@service", "@repository"},
		"hibernate":     {"@entity", "@table", "@column", "hibernate"},
		"jquery":        {"$\\(", "jquery"},
		"bootstrap":     {"bootstrap", "btn-", "col-", "row"},
		"docker":        {"dockerfile", "docker-compose", "from.*:", "run.*"},
		"kubernetes":    {"apiversion:", "kind:", "metadata:", "spec:"},
		"terraform":     {"\\.tf$", "resource \"", "provider \"", "terraform"},
		"ansible":       {"\\.yml$", "hosts:", "tasks:", "playbook"},
		"jenkins":       {"jenkinsfile", "pipeline", "stage", "steps"},
		"git":           {"\\.git", "git add", "git commit", "git push"},
		"mysql":         {"mysql", "select.*from", "insert into", "create table"},
		"postgresql":    {"postgresql", "postgres", "psql"},
		"mongodb":       {"mongodb", "mongo", "find\\(", "insert\\("},
		"redis":         {"redis", "set.*", "get.*", "rpush"},
		"elasticsearch": {"elasticsearch", "elastic", "query.*", "search.*"},
		"graphql":       {"graphql", "query.*{", "mutation.*{", "subscription.*{"},
		"grpc":          {"grpc", "proto", "service.*rpc", "\\.proto$"},
		"websocket":     {"websocket", "ws://", "wss://", "socket.io"},
		"jwt":           {"jwt", "jsonwebtoken", "bearer.*token"},
		"oauth":         {"oauth", "oauth2", "client_id", "client_secret"},
		"ssl":           {"ssl", "tls", "https", "certificate"},
		"encryption":    {"encrypt", "decrypt", "bcrypt", "sha256"},
	}

	for tech, patterns := range frameworkPatterns {

@@ -741,30 +741,58 @@ func CloneContextNode(node *slurpContext.ContextNode) *slurpContext.ContextNode
	}

	clone := &slurpContext.ContextNode{
		Path:               node.Path,
		UCXLAddress:        node.UCXLAddress,
		Summary:            node.Summary,
		Purpose:            node.Purpose,
		Technologies:       make([]string, len(node.Technologies)),
		Tags:               make([]string, len(node.Tags)),
		Insights:           make([]string, len(node.Insights)),
		OverridesParent:    node.OverridesParent,
		ContextSpecificity: node.ContextSpecificity,
		AppliesToChildren:  node.AppliesToChildren,
		AppliesTo:          node.AppliesTo,
		GeneratedAt:        node.GeneratedAt,
		UpdatedAt:          node.UpdatedAt,
		CreatedBy:          node.CreatedBy,
		WhoUpdated:         node.WhoUpdated,
		RAGConfidence:      node.RAGConfidence,
		EncryptedFor:       make([]string, len(node.EncryptedFor)),
		AccessLevel:        node.AccessLevel,
	}

	copy(clone.Technologies, node.Technologies)
	copy(clone.Tags, node.Tags)
	copy(clone.Insights, node.Insights)
	copy(clone.EncryptedFor, node.EncryptedFor)

	if node.Parent != nil {
		parent := *node.Parent
		clone.Parent = &parent
	}
	if len(node.Children) > 0 {
		clone.Children = make([]string, len(node.Children))
		copy(clone.Children, node.Children)
	}
	if node.Language != nil {
		language := *node.Language
		clone.Language = &language
	}
	if node.Size != nil {
		sz := *node.Size
		clone.Size = &sz
	}
	if node.LastModified != nil {
		lm := *node.LastModified
		clone.LastModified = &lm
	}
	if node.ContentHash != nil {
		hash := *node.ContentHash
		clone.ContentHash = &hash
	}

	if node.Metadata != nil {
		clone.Metadata = make(map[string]interface{}, len(node.Metadata))
		for k, v := range node.Metadata {
			clone.Metadata[k] = v
		}

@@ -799,9 +827,11 @@ func MergeContextNodes(nodes ...*slurpContext.ContextNode) *slurpContext.Context
		// Merge insights
		merged.Insights = mergeStringSlices(merged.Insights, node.Insights)

		// Use most relevant timestamps
		if merged.GeneratedAt.IsZero() {
			merged.GeneratedAt = node.GeneratedAt
		} else if !node.GeneratedAt.IsZero() && node.GeneratedAt.Before(merged.GeneratedAt) {
			merged.GeneratedAt = node.GeneratedAt
		}
		if node.UpdatedAt.After(merged.UpdatedAt) {
			merged.UpdatedAt = node.UpdatedAt

@@ -2,6 +2,9 @@ package slurp

import (
	"context"
	"time"

	"chorus/pkg/crypto"
)

// Core interfaces for the SLURP contextual intelligence system.

@@ -144,13 +147,13 @@ type TemporalGraph interface {
	// CreateInitialContext creates the first version of context.
	// Establishes the starting point for temporal evolution tracking.
	CreateInitialContext(ctx context.Context, ucxlAddress string,
		contextData *ContextNode, creator string) (*TemporalNode, error)

	// EvolveContext creates a new temporal version due to a decision.
	// Records the decision that caused the change and updates the graph.
	EvolveContext(ctx context.Context, ucxlAddress string,
		newContext *ContextNode, reason ChangeReason,
		decision *DecisionMetadata) (*TemporalNode, error)

	// GetLatestVersion gets the most recent temporal node.
	GetLatestVersion(ctx context.Context, ucxlAddress string) (*TemporalNode, error)

@@ -158,7 +161,7 @@ type TemporalGraph interface {
	// GetVersionAtDecision gets context as it was at a specific decision point.
	// Navigation based on decision hops, not chronological time.
	GetVersionAtDecision(ctx context.Context, ucxlAddress string,
		decisionHop int) (*TemporalNode, error)

	// GetEvolutionHistory gets complete evolution history.
	// Returns all temporal versions ordered by decision sequence.

@@ -177,7 +180,7 @@ type TemporalGraph interface {
	// FindRelatedDecisions finds decisions within N decision hops.
	// Explores the decision graph by conceptual distance, not time.
	FindRelatedDecisions(ctx context.Context, ucxlAddress string,
		maxHops int) ([]*DecisionPath, error)

	// FindDecisionPath finds shortest decision path between addresses.
	// Returns the path of decisions connecting two contexts.

@@ -205,12 +208,12 @@ type DecisionNavigator interface {
	// NavigateDecisionHops navigates by decision distance, not time.
	// Moves through the decision graph by the specified number of hops.
	NavigateDecisionHops(ctx context.Context, ucxlAddress string,
		hops int, direction NavigationDirection) (*TemporalNode, error)

	// GetDecisionTimeline gets timeline ordered by decision sequence.
	// Returns decisions in the order they were made, not chronological order.
	GetDecisionTimeline(ctx context.Context, ucxlAddress string,
		includeRelated bool, maxHops int) (*DecisionTimeline, error)

	// FindStaleContexts finds contexts that may be outdated.
	// Identifies contexts that haven't been updated despite related changes.

@@ -235,7 +238,7 @@ type DistributedStorage interface {
	// Store stores context data in the DHT with encryption.
	// Data is encrypted based on access level and role requirements.
	Store(ctx context.Context, key string, data interface{},
		accessLevel crypto.AccessLevel) error

	// Retrieve retrieves and decrypts context data.
	// Automatically handles decryption based on current role permissions.

@@ -281,7 +284,7 @@ type EncryptedStorage interface {
	// StoreEncrypted stores data encrypted for specific roles.
	// Supports multi-role encryption for shared access.
	StoreEncrypted(ctx context.Context, key string, data interface{},
		roles []string) error

	// RetrieveDecrypted retrieves and decrypts data using current role.
	// Automatically selects appropriate decryption key.

@@ -318,12 +321,12 @@ type ContextGenerator interface {
	// GenerateContext generates context for a path (requires admin role).
	// Analyzes content, structure, and patterns to create comprehensive context.
	GenerateContext(ctx context.Context, path string,
		options *GenerationOptions) (*ContextNode, error)

	// RegenerateHierarchy regenerates entire hierarchy (admin-only).
	// Rebuilds context hierarchy from scratch with improved analysis.
	RegenerateHierarchy(ctx context.Context, rootPath string,
		options *GenerationOptions) (*HierarchyStats, error)

	// ValidateGeneration validates generated context quality.
	// Ensures generated context meets quality and consistency standards.

@@ -336,12 +339,12 @@ type ContextGenerator interface {
	// GenerateBatch generates context for multiple paths efficiently.
	// Optimized for bulk generation operations.
	GenerateBatch(ctx context.Context, paths []string,
		options *GenerationOptions) (map[string]*ContextNode, error)

	// ScheduleGeneration schedules background context generation.
	// Queues generation tasks for processing during low-activity periods.
	ScheduleGeneration(ctx context.Context, paths []string,
		options *GenerationOptions, priority int) error

	// GetGenerationStatus gets status of background generation tasks.
	GetGenerationStatus(ctx context.Context) (*GenerationStatus, error)

@@ -447,7 +450,7 @@ type QueryEngine interface {
	// TemporalQuery performs temporal-aware queries.
	// Queries context as it existed at specific decision points.
	TemporalQuery(ctx context.Context, query *SearchQuery,
		temporal *TemporalFilter) ([]*SearchResult, error)

	// FuzzySearch performs fuzzy text search.
	// Handles typos and approximate matching.

@@ -497,83 +500,81 @@ type HealthChecker interface {

// Additional types needed by interfaces

type StorageStats struct {
	TotalKeys         int64     `json:"total_keys"`
	TotalSize         int64     `json:"total_size"`
	IndexSize         int64     `json:"index_size"`
	CacheSize         int64     `json:"cache_size"`
	ReplicationStatus string    `json:"replication_status"`
	LastSync          time.Time `json:"last_sync"`
	SyncErrors        int64     `json:"sync_errors"`
	AvailableSpace    int64     `json:"available_space"`
}

type GenerationStatus struct {
	ActiveTasks         int             `json:"active_tasks"`
	QueuedTasks         int             `json:"queued_tasks"`
	CompletedTasks      int             `json:"completed_tasks"`
	FailedTasks         int             `json:"failed_tasks"`
	EstimatedCompletion time.Time       `json:"estimated_completion"`
	CurrentTask         *GenerationTask `json:"current_task,omitempty"`
}

type GenerationTask struct {
	ID                  string    `json:"id"`
	Path                string    `json:"path"`
	Status              string    `json:"status"`
	Progress            float64   `json:"progress"`
	StartedAt           time.Time `json:"started_at"`
	EstimatedCompletion time.Time `json:"estimated_completion"`
	Error               string    `json:"error,omitempty"`
}

type TrendAnalysis struct {
	TimeRange        time.Duration  `json:"time_range"`
	TotalChanges     int            `json:"total_changes"`
	ChangeVelocity   float64        `json:"change_velocity"`
	DominantReasons  []ChangeReason `json:"dominant_reasons"`
	QualityTrend     string         `json:"quality_trend"`
	ConfidenceTrend  string         `json:"confidence_trend"`
	MostActiveAreas  []string       `json:"most_active_areas"`
	EmergingPatterns []*Pattern     `json:"emerging_patterns"`
	AnalyzedAt       time.Time      `json:"analyzed_at"`
}

type ComparisonResult struct {
	SimilarityScore float64       `json:"similarity_score"`
	Differences     []*Difference `json:"differences"`
	CommonElements  []string      `json:"common_elements"`
	Recommendations []*Suggestion `json:"recommendations"`
	ComparedAt      time.Time     `json:"compared_at"`
}

type Difference struct {
	Field          string      `json:"field"`
	Value1         interface{} `json:"value1"`
	Value2         interface{} `json:"value2"`
	DifferenceType string      `json:"difference_type"`
	Significance   float64     `json:"significance"`
}

type ConsistencyIssue struct {
	Type          string    `json:"type"`
	Description   string    `json:"description"`
	AffectedNodes []string  `json:"affected_nodes"`
	Severity      string    `json:"severity"`
	Suggestion    string    `json:"suggestion"`
	DetectedAt    time.Time `json:"detected_at"`
}

type QueryStats struct {
	TotalQueries     int64            `json:"total_queries"`
	AverageQueryTime time.Duration    `json:"average_query_time"`
	CacheHitRate     float64          `json:"cache_hit_rate"`
	IndexUsage       map[string]int64 `json:"index_usage"`
	PopularQueries   []string         `json:"popular_queries"`
	SlowQueries      []string         `json:"slow_queries"`
	ErrorRate        float64          `json:"error_rate"`
}

type CacheStats struct {

@@ -588,17 +589,17 @@ type CacheStats struct {
}

type HealthStatus struct {
	Overall    string                      `json:"overall"`
	Components map[string]*ComponentHealth `json:"components"`
	CheckedAt  time.Time                   `json:"checked_at"`
	Version    string                      `json:"version"`
	Uptime     time.Duration               `json:"uptime"`
}

type ComponentHealth struct {
	Status       string                 `json:"status"`
	Message      string                 `json:"message,omitempty"`
	LastCheck    time.Time              `json:"last_check"`
	ResponseTime time.Duration          `json:"response_time"`
	Metadata     map[string]interface{} `json:"metadata,omitempty"`
}

@@ -631,7 +631,7 @@ func (s *SLURP) GetTemporalEvolution(ctx context.Context, ucxlAddress string) ([
		return nil, fmt.Errorf("invalid UCXL address: %w", err)
	}

	return s.temporalGraph.GetEvolutionHistory(ctx, parsed.String())
}

// NavigateDecisionHops navigates through the decision graph by hop distance.

@@ -654,7 +654,7 @@ func (s *SLURP) NavigateDecisionHops(ctx context.Context, ucxlAddress string, ho
	}

	if navigator, ok := s.temporalGraph.(DecisionNavigator); ok {
		return navigator.NavigateDecisionHops(ctx, parsed.String(), hops, direction)
	}

	return nil, fmt.Errorf("decision navigation not supported by temporal graph")

@@ -1348,26 +1348,42 @@ func (s *SLURP) handleEvent(event *SLURPEvent) {
     }
 }
 
-// validateSLURPConfig validates SLURP configuration for consistency and correctness
-func validateSLURPConfig(config *SLURPConfig) error {
-    if config.ContextResolution.MaxHierarchyDepth < 1 {
-        return fmt.Errorf("max_hierarchy_depth must be at least 1")
+// validateSLURPConfig normalises runtime tunables sourced from configuration.
+func validateSLURPConfig(cfg *config.SlurpConfig) error {
+    if cfg == nil {
+        return fmt.Errorf("slurp config is nil")
     }
 
-    if config.ContextResolution.MinConfidenceThreshold < 0 || config.ContextResolution.MinConfidenceThreshold > 1 {
-        return fmt.Errorf("min_confidence_threshold must be between 0 and 1")
+    if cfg.Timeout <= 0 {
+        cfg.Timeout = 15 * time.Second
     }
 
-    if config.TemporalAnalysis.MaxDecisionHops < 1 {
-        return fmt.Errorf("max_decision_hops must be at least 1")
+    if cfg.RetryCount < 0 {
+        cfg.RetryCount = 0
     }
 
-    if config.TemporalAnalysis.StalenessThreshold < 0 || config.TemporalAnalysis.StalenessThreshold > 1 {
-        return fmt.Errorf("staleness_threshold must be between 0 and 1")
+    if cfg.RetryDelay <= 0 && cfg.RetryCount > 0 {
+        cfg.RetryDelay = 2 * time.Second
     }
 
-    if config.Performance.MaxConcurrentResolutions < 1 {
-        return fmt.Errorf("max_concurrent_resolutions must be at least 1")
+    if cfg.Performance.MaxConcurrentResolutions <= 0 {
+        cfg.Performance.MaxConcurrentResolutions = 1
+    }
+
+    if cfg.Performance.MetricsCollectionInterval <= 0 {
+        cfg.Performance.MetricsCollectionInterval = time.Minute
+    }
+
+    if cfg.TemporalAnalysis.MaxDecisionHops <= 0 {
+        cfg.TemporalAnalysis.MaxDecisionHops = 1
+    }
+
+    if cfg.TemporalAnalysis.StalenessCheckInterval <= 0 {
+        cfg.TemporalAnalysis.StalenessCheckInterval = 5 * time.Minute
+    }
+
+    if cfg.TemporalAnalysis.StalenessThreshold < 0 || cfg.TemporalAnalysis.StalenessThreshold > 1 {
+        cfg.TemporalAnalysis.StalenessThreshold = 0.2
     }
 
     return nil
@@ -164,6 +164,8 @@ func (bm *BackupManagerImpl) CreateBackup(
     Incremental: config.Incremental,
     ParentBackupID: config.ParentBackupID,
     Status: BackupStatusInProgress,
     Progress: 0,
     ErrorMessage: "",
     CreatedAt: time.Now(),
     RetentionUntil: time.Now().Add(config.Retention),
 }
@@ -707,6 +709,7 @@ func (bm *BackupManagerImpl) validateFile(filePath string) error {
 func (bm *BackupManagerImpl) failBackup(job *BackupJob, backupInfo *BackupInfo, err error) {
     bm.mu.Lock()
     backupInfo.Status = BackupStatusFailed
     backupInfo.Progress = 0
     backupInfo.ErrorMessage = err.Error()
     job.Error = err
     bm.mu.Unlock()
 
@@ -3,18 +3,19 @@ package storage
 import (
     "context"
     "fmt"
     "strings"
     "sync"
     "time"
 
-    "chorus/pkg/ucxl"
+    slurpContext "chorus/pkg/slurp/context"
+    "chorus/pkg/ucxl"
 )
 
 // BatchOperationsImpl provides efficient batch operations for context storage
 type BatchOperationsImpl struct {
     contextStore *ContextStoreImpl
     batchSize int
     maxConcurrency int
+    operationTimeout time.Duration
 }
@@ -22,8 +23,8 @@ type BatchOperationsImpl struct {
 func NewBatchOperations(contextStore *ContextStoreImpl, batchSize, maxConcurrency int, timeout time.Duration) *BatchOperationsImpl {
     return &BatchOperationsImpl{
         contextStore: contextStore,
         batchSize: batchSize,
         maxConcurrency: maxConcurrency,
+        operationTimeout: timeout,
     }
 }
 
@@ -4,7 +4,6 @@ import (
     "context"
     "encoding/json"
     "fmt"
     "regexp"
     "sync"
     "time"
 
@@ -13,13 +12,13 @@ import (
 // CacheManagerImpl implements the CacheManager interface using Redis
 type CacheManagerImpl struct {
     mu sync.RWMutex
     client *redis.Client
     stats *CacheStatistics
     policy *CachePolicy
     prefix string
     nodeID string
     warmupKeys map[string]bool
 }
 
 // NewCacheManager creates a new cache manager with Redis backend
@@ -68,13 +67,13 @@ func NewCacheManager(redisAddr, nodeID string, policy *CachePolicy) (*CacheManag
 // DefaultCachePolicy returns default caching policy
 func DefaultCachePolicy() *CachePolicy {
     return &CachePolicy{
         TTL: 24 * time.Hour,
         MaxSize: 1024 * 1024 * 1024, // 1GB
         EvictionPolicy: "LRU",
         RefreshThreshold: 0.8, // Refresh when 80% of TTL elapsed
         WarmupEnabled: true,
         CompressEntries: true,
         MaxEntrySize: 10 * 1024 * 1024, // 10MB
     }
 }
 
@@ -314,17 +313,17 @@ func (cm *CacheManagerImpl) SetCachePolicy(policy *CachePolicy) error {
 // CacheEntry represents a cached data entry with metadata
 type CacheEntry struct {
     Key string `json:"key"`
     Data []byte `json:"data"`
     CreatedAt time.Time `json:"created_at"`
     ExpiresAt time.Time `json:"expires_at"`
     TTL time.Duration `json:"ttl"`
     AccessCount int64 `json:"access_count"`
     LastAccessedAt time.Time `json:"last_accessed_at"`
     Compressed bool `json:"compressed"`
     OriginalSize int64 `json:"original_size"`
     CompressedSize int64 `json:"compressed_size"`
     NodeID string `json:"node_id"`
 }
 
 // Helper methods
 
@@ -3,10 +3,8 @@ package storage
 import (
     "bytes"
     "context"
     "os"
     "strings"
     "testing"
     "time"
 )
 
 func TestLocalStorageCompression(t *testing.T) {
 
@@ -2,71 +2,68 @@ package storage
 
 import (
     "context"
     "encoding/json"
     "fmt"
     "sync"
     "time"
 
     "chorus/pkg/crypto"
     "chorus/pkg/dht"
-    "chorus/pkg/ucxl"
+    slurpContext "chorus/pkg/slurp/context"
+    "chorus/pkg/ucxl"
 )
 
 // ContextStoreImpl is the main implementation of the ContextStore interface
 // It coordinates between local storage, distributed storage, encryption, caching, and indexing
 type ContextStoreImpl struct {
     mu sync.RWMutex
     localStorage LocalStorage
     distributedStorage DistributedStorage
     encryptedStorage EncryptedStorage
     cacheManager CacheManager
     indexManager IndexManager
     backupManager BackupManager
     eventNotifier EventNotifier
 
     // Configuration
     nodeID string
     options *ContextStoreOptions
 
     // Statistics and monitoring
     statistics *StorageStatistics
     metricsCollector *MetricsCollector
 
     // Background processes
     stopCh chan struct{}
     syncTicker *time.Ticker
     compactionTicker *time.Ticker
     cleanupTicker *time.Ticker
 }
 
 // ContextStoreOptions configures the context store behavior
 type ContextStoreOptions struct {
     // Storage configuration
     PreferLocal bool `json:"prefer_local"`
     AutoReplicate bool `json:"auto_replicate"`
     DefaultReplicas int `json:"default_replicas"`
     EncryptionEnabled bool `json:"encryption_enabled"`
     CompressionEnabled bool `json:"compression_enabled"`
 
     // Caching configuration
     CachingEnabled bool `json:"caching_enabled"`
     CacheTTL time.Duration `json:"cache_ttl"`
     CacheSize int64 `json:"cache_size"`
 
     // Indexing configuration
     IndexingEnabled bool `json:"indexing_enabled"`
     IndexRefreshInterval time.Duration `json:"index_refresh_interval"`
 
     // Background processes
     SyncInterval time.Duration `json:"sync_interval"`
     CompactionInterval time.Duration `json:"compaction_interval"`
     CleanupInterval time.Duration `json:"cleanup_interval"`
 
     // Performance tuning
     BatchSize int `json:"batch_size"`
     MaxConcurrentOps int `json:"max_concurrent_ops"`
     OperationTimeout time.Duration `json:"operation_timeout"`
 }
 
 // MetricsCollector collects and aggregates storage metrics
@@ -87,16 +84,16 @@ func DefaultContextStoreOptions() *ContextStoreOptions {
     EncryptionEnabled: true,
     CompressionEnabled: true,
     CachingEnabled: true,
     CacheTTL: 24 * time.Hour,
     CacheSize: 1024 * 1024 * 1024, // 1GB
     IndexingEnabled: true,
     IndexRefreshInterval: 5 * time.Minute,
     SyncInterval: 10 * time.Minute,
     CompactionInterval: 24 * time.Hour,
     CleanupInterval: 1 * time.Hour,
     BatchSize: 100,
     MaxConcurrentOps: 10,
     OperationTimeout: 30 * time.Second,
     }
 }
 
@@ -124,8 +121,8 @@ func NewContextStore(
     indexManager: indexManager,
     backupManager: backupManager,
     eventNotifier: eventNotifier,
     nodeID: nodeID,
     options: options,
     statistics: &StorageStatistics{
         LastSyncTime: time.Now(),
     },
@@ -174,11 +171,11 @@ func (cs *ContextStoreImpl) StoreContext(
     } else {
         // Store unencrypted
         storeOptions := &StoreOptions{
             Encrypt: false,
             Replicate: cs.options.AutoReplicate,
             Index: cs.options.IndexingEnabled,
             Cache: cs.options.CachingEnabled,
             Compress: cs.options.CompressionEnabled,
         }
         storeErr = cs.localStorage.Store(ctx, storageKey, node, storeOptions)
     }
@@ -216,8 +213,8 @@ func (cs *ContextStoreImpl) StoreContext(
     distOptions := &DistributedStoreOptions{
         ReplicationFactor: cs.options.DefaultReplicas,
         ConsistencyLevel: ConsistencyQuorum,
         Timeout: cs.options.OperationTimeout,
         SyncMode: SyncAsync,
     }
 
     if err := cs.distributedStorage.Store(replicateCtx, storageKey, node, distOptions); err != nil {
@@ -729,7 +726,7 @@ func (cs *ContextStoreImpl) Sync(ctx context.Context) error {
     Type: EventSynced,
     Timestamp: time.Now(),
     Metadata: map[string]interface{}{
         "node_id": cs.nodeID,
         "sync_time": time.Since(start),
     },
 }
 
@@ -8,69 +8,68 @@ import (
     "time"
 
     "chorus/pkg/dht"
     "chorus/pkg/types"
 )
 
 // DistributedStorageImpl implements the DistributedStorage interface
 type DistributedStorageImpl struct {
     mu sync.RWMutex
     dht dht.DHT
     nodeID string
     metrics *DistributedStorageStats
     replicas map[string][]string // key -> replica node IDs
     heartbeat *HeartbeatManager
     consensus *ConsensusManager
     options *DistributedStorageOptions
 }
 
 // HeartbeatManager manages node heartbeats and health
 type HeartbeatManager struct {
     mu sync.RWMutex
     nodes map[string]*NodeHealth
     heartbeatInterval time.Duration
     timeoutThreshold time.Duration
     stopCh chan struct{}
 }
 
 // NodeHealth tracks the health of a distributed storage node
 type NodeHealth struct {
     NodeID string `json:"node_id"`
     LastSeen time.Time `json:"last_seen"`
     Latency time.Duration `json:"latency"`
     IsActive bool `json:"is_active"`
     FailureCount int `json:"failure_count"`
     Load float64 `json:"load"`
 }
 
 // ConsensusManager handles consensus operations for distributed storage
 type ConsensusManager struct {
     mu sync.RWMutex
     pendingOps map[string]*ConsensusOperation
     votingTimeout time.Duration
     quorumSize int
 }
 
 // ConsensusOperation represents a distributed operation requiring consensus
 type ConsensusOperation struct {
     ID string `json:"id"`
     Type string `json:"type"`
     Key string `json:"key"`
     Data interface{} `json:"data"`
     Initiator string `json:"initiator"`
     Votes map[string]bool `json:"votes"`
     CreatedAt time.Time `json:"created_at"`
     Status ConsensusStatus `json:"status"`
     Callback func(bool, error) `json:"-"`
 }
 
 // ConsensusStatus represents the status of a consensus operation
 type ConsensusStatus string
 
 const (
     ConsensusPending ConsensusStatus = "pending"
     ConsensusApproved ConsensusStatus = "approved"
     ConsensusRejected ConsensusStatus = "rejected"
     ConsensusTimeout ConsensusStatus = "timeout"
 )
 
 // NewDistributedStorage creates a new distributed storage implementation
@@ -83,9 +82,9 @@ func NewDistributedStorage(
     options = &DistributedStoreOptions{
         ReplicationFactor: 3,
         ConsistencyLevel: ConsistencyQuorum,
         Timeout: 30 * time.Second,
         PreferLocal: true,
         SyncMode: SyncAsync,
     }
 }
 
@@ -98,10 +97,10 @@ func NewDistributedStorage(
     LastRebalance: time.Now(),
 },
 heartbeat: &HeartbeatManager{
     nodes: make(map[string]*NodeHealth),
     heartbeatInterval: 30 * time.Second,
     timeoutThreshold: 90 * time.Second,
     stopCh: make(chan struct{}),
 },
 consensus: &ConsensusManager{
     pendingOps: make(map[string]*ConsensusOperation),
@@ -125,8 +124,6 @@ func (ds *DistributedStorageImpl) Store(
     data interface{},
     options *DistributedStoreOptions,
 ) error {
-    start := time.Now()
-
     if options == nil {
         options = ds.options
     }
@@ -179,7 +176,7 @@ func (ds *DistributedStorageImpl) Retrieve(
 
     // Try local first if prefer local is enabled
     if ds.options.PreferLocal {
-        if localData, err := ds.dht.Get(key); err == nil {
+        if localData, err := ds.dht.GetValue(ctx, key); err == nil {
             return ds.deserializeEntry(localData)
         }
     }
@@ -226,25 +223,9 @@ func (ds *DistributedStorageImpl) Exists(
     ctx context.Context,
     key string,
 ) (bool, error) {
     // Try local first
-    if ds.options.PreferLocal {
-        if exists, err := ds.dht.Exists(key); err == nil {
-            return exists, nil
-        }
+    if _, err := ds.dht.GetValue(ctx, key); err == nil {
+        return true, nil
     }
 
-    // Check replicas
-    replicas, err := ds.getReplicationNodes(key)
-    if err != nil {
-        return false, fmt.Errorf("failed to get replication nodes: %w", err)
-    }
-
-    for _, nodeID := range replicas {
-        if exists, err := ds.checkExistsOnNode(ctx, nodeID, key); err == nil && exists {
-            return true, nil
-        }
-    }
-
     return false, nil
 }
 
@@ -306,10 +287,7 @@ func (ds *DistributedStorageImpl) FindReplicas(
 
 // Sync synchronizes with other DHT nodes
 func (ds *DistributedStorageImpl) Sync(ctx context.Context) error {
-    start := time.Now()
-    defer func() {
-        ds.metrics.LastRebalance = time.Now()
-    }()
+    ds.metrics.LastRebalance = time.Now()
 
     // Get list of active nodes
     activeNodes := ds.heartbeat.getActiveNodes()
@@ -346,7 +324,7 @@ func (ds *DistributedStorageImpl) GetDistributedStats() (*DistributedStorageStat
     healthyReplicas := int64(0)
     underReplicated := int64(0)
 
-    for key, replicas := range ds.replicas {
+    for _, replicas := range ds.replicas {
         totalReplicas += int64(len(replicas))
         healthy := 0
         for _, nodeID := range replicas {
@@ -371,14 +349,14 @@ func (ds *DistributedStorageImpl) GetDistributedStats() (*DistributedStorageStat
 
 // DistributedEntry represents a distributed storage entry
 type DistributedEntry struct {
     Key string `json:"key"`
     Data []byte `json:"data"`
     ReplicationFactor int `json:"replication_factor"`
     ConsistencyLevel ConsistencyLevel `json:"consistency_level"`
     CreatedAt time.Time `json:"created_at"`
     UpdatedAt time.Time `json:"updated_at"`
     Version int64 `json:"version"`
     Checksum string `json:"checksum"`
 }
 
 // Helper methods implementation
@@ -405,13 +383,13 @@ func (ds *DistributedStorageImpl) selectReplicationNodes(key string, replication
 }
 
 func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-    // Store asynchronously on all nodes
+    // Store asynchronously on all nodes for SEC-SLURP-1.1a replication policy
     errCh := make(chan error, len(nodes))
 
     for _, nodeID := range nodes {
         go func(node string) {
             err := ds.storeOnNode(ctx, node, entry)
-            errorCh <- err
+            errCh <- err
         }(nodeID)
     }
 
@@ -445,13 +423,13 @@ func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *Dist
 }
 
 func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-    // Store synchronously on all nodes
+    // Store synchronously on all nodes per SEC-SLURP-1.1a durability target
     errCh := make(chan error, len(nodes))
 
     for _, nodeID := range nodes {
         go func(node string) {
             err := ds.storeOnNode(ctx, node, entry)
-            errorCh <- err
+            errCh <- err
         }(nodeID)
     }
 
@@ -476,14 +454,14 @@ func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *Distri
 }
 
 func (ds *DistributedStorageImpl) storeQuorum(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-    // Store on quorum of nodes
+    // Store on quorum of nodes per SEC-SLURP-1.1a availability guardrail
     quorumSize := (len(nodes) / 2) + 1
     errCh := make(chan error, len(nodes))
 
     for _, nodeID := range nodes {
         go func(node string) {
             err := ds.storeOnNode(ctx, node, entry)
-            errorCh <- err
+            errCh <- err
         }(nodeID)
     }
 
@@ -9,7 +9,6 @@ import (
     "time"
 
     "chorus/pkg/crypto"
-    "chorus/pkg/ucxl"
     slurpContext "chorus/pkg/slurp/context"
 )
 
@@ -19,25 +18,25 @@ type EncryptedStorageImpl struct {
     crypto crypto.RoleCrypto
     localStorage LocalStorage
     keyManager crypto.KeyManager
-    accessControl crypto.AccessController
-    auditLogger crypto.AuditLogger
+    accessControl crypto.StorageAccessController
+    auditLogger crypto.StorageAuditLogger
     metrics *EncryptionMetrics
 }
 
 // EncryptionMetrics tracks encryption-related metrics
 type EncryptionMetrics struct {
     mu sync.RWMutex
     EncryptOperations int64
     DecryptOperations int64
     KeyRotations int64
     AccessDenials int64
     EncryptionErrors int64
     DecryptionErrors int64
     LastKeyRotation time.Time
     AverageEncryptTime time.Duration
     AverageDecryptTime time.Duration
     ActiveEncryptionKeys int
     ExpiredKeys int
 }
 
 // NewEncryptedStorage creates a new encrypted storage implementation
// NewEncryptedStorage creates a new encrypted storage implementation
|
||||
@@ -45,8 +44,8 @@ func NewEncryptedStorage(
|
||||
crypto crypto.RoleCrypto,
|
||||
localStorage LocalStorage,
|
||||
keyManager crypto.KeyManager,
|
||||
accessControl crypto.AccessController,
|
||||
auditLogger crypto.AuditLogger,
|
||||
accessControl crypto.StorageAccessController,
|
||||
auditLogger crypto.StorageAuditLogger,
|
||||
) *EncryptedStorageImpl {
|
||||
return &EncryptedStorageImpl{
|
||||
crypto: crypto,
|
||||
@@ -286,12 +285,11 @@ func (es *EncryptedStorageImpl) GetAccessRoles(
     return roles, nil
 }
 
-// RotateKeys rotates encryption keys
+// RotateKeys rotates encryption keys in line with SEC-SLURP-1.1 retention constraints
 func (es *EncryptedStorageImpl) RotateKeys(
     ctx context.Context,
     maxAge time.Duration,
 ) error {
     start := time.Now()
     defer func() {
         es.metrics.mu.Lock()
         es.metrics.KeyRotations++
 
@@ -9,22 +9,23 @@ import (
     "sync"
     "time"
 
+    slurpContext "chorus/pkg/slurp/context"
+    "chorus/pkg/ucxl"
     "github.com/blevesearch/bleve/v2"
     "github.com/blevesearch/bleve/v2/analysis/analyzer/standard"
     "github.com/blevesearch/bleve/v2/analysis/lang/en"
     "github.com/blevesearch/bleve/v2/mapping"
-    "chorus/pkg/ucxl"
-    slurpContext "chorus/pkg/slurp/context"
+    "github.com/blevesearch/bleve/v2/search/query"
 )
 
 // IndexManagerImpl implements the IndexManager interface using Bleve
 type IndexManagerImpl struct {
     mu sync.RWMutex
     indexes map[string]bleve.Index
     stats map[string]*IndexStatistics
     basePath string
     nodeID string
     options *IndexManagerOptions
 }
 
 // IndexManagerOptions configures index manager behavior
@@ -60,11 +61,11 @@ func NewIndexManager(basePath, nodeID string, options *IndexManagerOptions) (*In
 }
 
 im := &IndexManagerImpl{
     indexes: make(map[string]bleve.Index),
     stats: make(map[string]*IndexStatistics),
     basePath: basePath,
     nodeID: nodeID,
     options: options,
 }
 
 // Start background optimization if enabled
@@ -432,31 +433,31 @@ func (im *IndexManagerImpl) createIndexDocument(data interface{}) (map[string]in
     return doc, nil
 }
 
-func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.SearchRequest, error) {
-    // Build Bleve search request from our search query
-    var bleveQuery bleve.Query
+func (im *IndexManagerImpl) buildSearchRequest(searchQuery *SearchQuery) (*bleve.SearchRequest, error) {
+    // Build Bleve search request from our search query (SEC-SLURP-1.1 search path)
+    var bleveQuery query.Query
 
-    if query.Query == "" {
+    if searchQuery.Query == "" {
         // Match all query
         bleveQuery = bleve.NewMatchAllQuery()
     } else {
         // Text search query
-        if query.FuzzyMatch {
+        if searchQuery.FuzzyMatch {
             // Use fuzzy query
-            bleveQuery = bleve.NewFuzzyQuery(query.Query)
+            bleveQuery = bleve.NewFuzzyQuery(searchQuery.Query)
         } else {
             // Use match query for better scoring
-            bleveQuery = bleve.NewMatchQuery(query.Query)
+            bleveQuery = bleve.NewMatchQuery(searchQuery.Query)
         }
     }
 
     // Add filters
-    var conjuncts []bleve.Query
+    var conjuncts []query.Query
     conjuncts = append(conjuncts, bleveQuery)
 
     // Technology filters
-    if len(query.Technologies) > 0 {
-        for _, tech := range query.Technologies {
+    if len(searchQuery.Technologies) > 0 {
+        for _, tech := range searchQuery.Technologies {
             techQuery := bleve.NewTermQuery(tech)
             techQuery.SetField("technologies_facet")
             conjuncts = append(conjuncts, techQuery)
@@ -464,8 +465,8 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
     }
 
     // Tag filters
-    if len(query.Tags) > 0 {
-        for _, tag := range query.Tags {
+    if len(searchQuery.Tags) > 0 {
+        for _, tag := range searchQuery.Tags {
             tagQuery := bleve.NewTermQuery(tag)
             tagQuery.SetField("tags_facet")
             conjuncts = append(conjuncts, tagQuery)
@@ -481,18 +482,18 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
     searchRequest := bleve.NewSearchRequest(bleveQuery)
 
     // Set result options
-    if query.Limit > 0 && query.Limit <= im.options.MaxResults {
-        searchRequest.Size = query.Limit
+    if searchQuery.Limit > 0 && searchQuery.Limit <= im.options.MaxResults {
+        searchRequest.Size = searchQuery.Limit
     } else {
         searchRequest.Size = im.options.MaxResults
     }
 
-    if query.Offset > 0 {
-        searchRequest.From = query.Offset
+    if searchQuery.Offset > 0 {
+        searchRequest.From = searchQuery.Offset
     }
 
     // Enable highlighting if requested
-    if query.HighlightTerms && im.options.EnableHighlighting {
+    if searchQuery.HighlightTerms && im.options.EnableHighlighting {
         searchRequest.Highlight = bleve.NewHighlight()
         searchRequest.Highlight.AddField("content")
         searchRequest.Highlight.AddField("summary")
@@ -500,9 +501,9 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
     }
 
     // Add facets if requested
-    if len(query.Facets) > 0 && im.options.EnableFaceting {
+    if len(searchQuery.Facets) > 0 && im.options.EnableFaceting {
         searchRequest.Facets = make(bleve.FacetsRequest)
-        for _, facet := range query.Facets {
+        for _, facet := range searchQuery.Facets {
             switch facet {
             case "technologies":
                 searchRequest.Facets["technologies"] = bleve.NewFacetRequest("technologies_facet", 10)
@@ -535,7 +536,7 @@ func (im *IndexManagerImpl) convertSearchResults(
     searchHit := &SearchResult{
         MatchScore: hit.Score,
         MatchedFields: make([]string, 0),
         Highlights: make(map[string][]string),
         Rank: i + 1,
     }
 
@@ -558,8 +559,8 @@ func (im *IndexManagerImpl) convertSearchResults(
 
     // Parse UCXL address
     if ucxlStr, ok := hit.Fields["ucxl_address"].(string); ok {
-        if addr, err := ucxl.ParseAddress(ucxlStr); err == nil {
-            contextNode.UCXLAddress = addr
+        if addr, err := ucxl.Parse(ucxlStr); err == nil {
+            contextNode.UCXLAddress = *addr
         }
     }
 
@@ -572,8 +573,10 @@ func (im *IndexManagerImpl) convertSearchResults(
|
||||
results.Facets = make(map[string]map[string]int)
|
||||
for facetName, facetResult := range searchResult.Facets {
|
||||
facetCounts := make(map[string]int)
|
||||
for _, term := range facetResult.Terms {
|
||||
facetCounts[term.Term] = term.Count
|
||||
if facetResult.Terms != nil {
|
||||
for _, term := range facetResult.Terms.Terms() {
|
||||
facetCounts[term.Term] = term.Count
|
||||
}
|
||||
}
|
||||
results.Facets[facetName] = facetCounts
|
||||
}
@@ -4,9 +4,8 @@ import (
	"context"
	"time"

-	"chorus/pkg/ucxl"
	"chorus/pkg/crypto"
	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
)

// ContextStore provides the main interface for context storage and retrieval
@@ -270,35 +269,35 @@ type EventHandler func(event *StorageEvent) error

// StorageEvent represents a storage operation event
type StorageEvent struct {
	Type      EventType              `json:"type"`      // Event type
	Key       string                 `json:"key"`       // Storage key
	Data      interface{}            `json:"data"`      // Event data
	Timestamp time.Time              `json:"timestamp"` // When event occurred
	Metadata  map[string]interface{} `json:"metadata"`  // Additional metadata
}

// Transaction represents a storage transaction
type Transaction struct {
	ID         string                  `json:"id"`         // Transaction ID
	StartTime  time.Time               `json:"start_time"` // When transaction started
	Operations []*TransactionOperation `json:"operations"` // Transaction operations
	Status     TransactionStatus       `json:"status"`     // Transaction status
}

// TransactionOperation represents a single operation in a transaction
type TransactionOperation struct {
	Type     string                 `json:"type"`     // Operation type
	Key      string                 `json:"key"`      // Storage key
	Data     interface{}            `json:"data"`     // Operation data
	Metadata map[string]interface{} `json:"metadata"` // Operation metadata
}

// TransactionStatus represents transaction status
type TransactionStatus string

const (
	TransactionActive     TransactionStatus = "active"
	TransactionCommitted  TransactionStatus = "committed"
	TransactionRolledBack TransactionStatus = "rolled_back"
	TransactionFailed     TransactionStatus = "failed"
)
@@ -33,12 +33,12 @@ type LocalStorageImpl struct {

// LocalStorageOptions configures local storage behavior
type LocalStorageOptions struct {
	Compression        bool          `json:"compression"`         // Enable compression
	CacheSize          int           `json:"cache_size"`          // Cache size in MB
	WriteBuffer        int           `json:"write_buffer"`        // Write buffer size in MB
	MaxOpenFiles       int           `json:"max_open_files"`      // Maximum open files
	BlockSize          int           `json:"block_size"`          // Block size in KB
	SyncWrites         bool          `json:"sync_writes"`         // Synchronous writes
	CompactionInterval time.Duration `json:"compaction_interval"` // Auto-compaction interval
}

@@ -46,11 +46,11 @@ type LocalStorageOptions struct {
func DefaultLocalStorageOptions() *LocalStorageOptions {
	return &LocalStorageOptions{
		Compression:        true,
		CacheSize:          64, // 64MB cache
		WriteBuffer:        16, // 16MB write buffer
		MaxOpenFiles:       1000,
		BlockSize:          4, // 4KB blocks
		SyncWrites:         false,
		CompactionInterval: 24 * time.Hour,
	}
}
@@ -135,6 +135,7 @@ func (ls *LocalStorageImpl) Store(
		UpdatedAt: time.Now(),
		Metadata:  make(map[string]interface{}),
	}
+	entry.Checksum = ls.computeChecksum(dataBytes)

	// Apply options
	if options != nil {
@@ -179,6 +180,7 @@ func (ls *LocalStorageImpl) Store(
	if entry.Compressed {
		ls.metrics.CompressedSize += entry.CompressedSize
	}
+	ls.updateFileMetricsLocked()

	return nil
}
@@ -231,6 +233,14 @@ func (ls *LocalStorageImpl) Retrieve(ctx context.Context, key string) (interface
		dataBytes = decompressedData
	}

+	// Verify integrity against stored checksum (SEC-SLURP-1.1a requirement)
+	if entry.Checksum != "" {
+		computed := ls.computeChecksum(dataBytes)
+		if computed != entry.Checksum {
+			return nil, fmt.Errorf("data integrity check failed for key %s", key)
+		}
+	}
+
	// Deserialize data
	var result interface{}
	if err := json.Unmarshal(dataBytes, &result); err != nil {
@@ -260,6 +270,7 @@ func (ls *LocalStorageImpl) Delete(ctx context.Context, key string) error {
	if entryBytes != nil {
		ls.metrics.TotalSize -= int64(len(entryBytes))
	}
+	ls.updateFileMetricsLocked()

	return nil
}
@@ -397,6 +408,7 @@ type StorageEntry struct {
	Compressed     bool                   `json:"compressed"`
	OriginalSize   int64                  `json:"original_size"`
	CompressedSize int64                  `json:"compressed_size"`
+	Checksum       string                 `json:"checksum"`
	AccessLevel    string                 `json:"access_level"`
	Metadata       map[string]interface{} `json:"metadata"`
}
@@ -434,6 +446,42 @@ func (ls *LocalStorageImpl) compress(data []byte) ([]byte, error) {
	return compressed, nil
}

+func (ls *LocalStorageImpl) computeChecksum(data []byte) string {
+	// Compute SHA-256 checksum to satisfy SEC-SLURP-1.1a integrity tracking
+	digest := sha256.Sum256(data)
+	return fmt.Sprintf("%x", digest)
+}
+
+func (ls *LocalStorageImpl) updateFileMetricsLocked() {
+	// Refresh filesystem metrics using io/fs traversal (SEC-SLURP-1.1a durability telemetry)
+	var fileCount int64
+	var aggregateSize int64
+
+	walkErr := fs.WalkDir(os.DirFS(ls.basePath), ".", func(path string, d fs.DirEntry, err error) error {
+		if err != nil {
+			return err
+		}
+		if d.IsDir() {
+			return nil
+		}
+		fileCount++
+		if info, infoErr := d.Info(); infoErr == nil {
+			aggregateSize += info.Size()
+		}
+		return nil
+	})
+
+	if walkErr != nil {
+		fmt.Printf("filesystem metrics refresh failed: %v\n", walkErr)
+		return
+	}
+
+	ls.metrics.TotalFiles = fileCount
+	if aggregateSize > 0 {
+		ls.metrics.TotalSize = aggregateSize
+	}
+}
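The compute-then-verify path added above is easy to exercise in isolation. The sketch below (standalone, with hypothetical free functions rather than the `LocalStorageImpl` methods) mirrors how `Store` fingerprints the serialized payload and `Retrieve` rejects a mismatch before deserializing:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// computeChecksum mirrors the SHA-256 fingerprint used for integrity tracking.
func computeChecksum(data []byte) string {
	digest := sha256.Sum256(data)
	return fmt.Sprintf("%x", digest)
}

// verify re-hashes the payload and compares it to the stored checksum,
// as Retrieve does before unmarshalling.
func verify(data []byte, stored string) error {
	if stored != "" && computeChecksum(data) != stored {
		return fmt.Errorf("data integrity check failed")
	}
	return nil
}

func main() {
	payload := []byte(`{"summary":"context"}`)
	sum := computeChecksum(payload)
	fmt.Println(verify(payload, sum) == nil)            // intact payload passes
	fmt.Println(verify([]byte("tampered"), sum) != nil) // mutated payload fails
}
```

An empty stored checksum is treated as "not yet fingerprinted" and skipped, matching the `entry.Checksum != ""` guard in the diff.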

func (ls *LocalStorageImpl) decompress(data []byte) ([]byte, error) {
	// Create gzip reader
	reader, err := gzip.NewReader(bytes.NewReader(data))
@@ -498,11 +546,11 @@ func (ls *LocalStorageImpl) GetCompressionStats() (*CompressionStats, error) {
	defer ls.mu.RUnlock()

	stats := &CompressionStats{
		TotalEntries:      0,
		CompressedEntries: 0,
		TotalSize:         ls.metrics.TotalSize,
		CompressedSize:    ls.metrics.CompressedSize,
		CompressionRatio:  0.0,
	}

	// Iterate through all entries to get accurate stats
@@ -599,11 +647,11 @@ func (ls *LocalStorageImpl) OptimizeStorage(ctx context.Context, compressThresho

// CompressionStats holds compression statistics
type CompressionStats struct {
	TotalEntries      int64   `json:"total_entries"`
	CompressedEntries int64   `json:"compressed_entries"`
	TotalSize         int64   `json:"total_size"`
	CompressedSize    int64   `json:"compressed_size"`
	CompressionRatio  float64 `json:"compression_ratio"`
}

// Close closes the local storage

@@ -14,77 +14,77 @@ import (

// MonitoringSystem provides comprehensive monitoring for the storage system
type MonitoringSystem struct {
	mu                  sync.RWMutex
	nodeID              string
	metrics             *StorageMetrics
	alerts              *AlertManager
	healthChecker       *HealthChecker
	performanceProfiler *PerformanceProfiler
	logger              *StructuredLogger
	notifications       chan *MonitoringEvent
	stopCh              chan struct{}
}

// StorageMetrics contains all Prometheus metrics for storage operations
type StorageMetrics struct {
	// Operation counters
	StoreOperations    prometheus.Counter
	RetrieveOperations prometheus.Counter
	DeleteOperations   prometheus.Counter
	UpdateOperations   prometheus.Counter
	SearchOperations   prometheus.Counter
	BatchOperations    prometheus.Counter

	// Error counters
	StoreErrors       prometheus.Counter
	RetrieveErrors    prometheus.Counter
	EncryptionErrors  prometheus.Counter
	DecryptionErrors  prometheus.Counter
	ReplicationErrors prometheus.Counter
	CacheErrors       prometheus.Counter
	IndexErrors       prometheus.Counter

	// Latency histograms
	StoreLatency       prometheus.Histogram
	RetrieveLatency    prometheus.Histogram
	EncryptionLatency  prometheus.Histogram
	DecryptionLatency  prometheus.Histogram
	ReplicationLatency prometheus.Histogram
	SearchLatency      prometheus.Histogram

	// Cache metrics
	CacheHits      prometheus.Counter
	CacheMisses    prometheus.Counter
	CacheEvictions prometheus.Counter
	CacheSize      prometheus.Gauge

	// Storage size metrics
	LocalStorageSize       prometheus.Gauge
	DistributedStorageSize prometheus.Gauge
	CompressedStorageSize  prometheus.Gauge
	IndexStorageSize       prometheus.Gauge

	// Replication metrics
	ReplicationFactor prometheus.Gauge
	HealthyReplicas   prometheus.Gauge
	UnderReplicated   prometheus.Gauge
	ReplicationLag    prometheus.Histogram

	// Encryption metrics
	EncryptedContexts prometheus.Gauge
	KeyRotations      prometheus.Counter
	AccessDenials     prometheus.Counter
	ActiveKeys        prometheus.Gauge

	// Performance metrics
	Throughput           prometheus.Gauge
	ConcurrentOperations prometheus.Gauge
	QueueDepth           prometheus.Gauge

	// Health metrics
	StorageHealth    prometheus.Gauge
	NodeConnectivity prometheus.Gauge
	SyncLatency      prometheus.Histogram
}

// AlertManager handles storage-related alerts and notifications
@@ -97,18 +97,96 @@ type AlertManager struct {
	maxHistory int
}

+func (am *AlertManager) severityRank(severity AlertSeverity) int {
+	switch severity {
+	case SeverityCritical:
+		return 4
+	case SeverityError:
+		return 3
+	case SeverityWarning:
+		return 2
+	case SeverityInfo:
+		return 1
+	default:
+		return 0
+	}
+}
+
+// GetActiveAlerts returns sorted active alerts (SEC-SLURP-1.1 monitoring path)
+func (am *AlertManager) GetActiveAlerts() []*Alert {
+	am.mu.RLock()
+	defer am.mu.RUnlock()
+
+	if len(am.activealerts) == 0 {
+		return nil
+	}
+
+	alerts := make([]*Alert, 0, len(am.activealerts))
+	for _, alert := range am.activealerts {
+		alerts = append(alerts, alert)
+	}
+
+	sort.Slice(alerts, func(i, j int) bool {
+		iRank := am.severityRank(alerts[i].Severity)
+		jRank := am.severityRank(alerts[j].Severity)
+		if iRank == jRank {
+			return alerts[i].StartTime.After(alerts[j].StartTime)
+		}
+		return iRank > jRank
+	})
+
+	return alerts
+}
+
+// Snapshot marshals monitoring state for UCXL persistence (SEC-SLURP-1.1a telemetry)
+func (ms *MonitoringSystem) Snapshot(ctx context.Context) (string, error) {
+	ms.mu.RLock()
+	defer ms.mu.RUnlock()
+
+	if ms.alerts == nil {
+		return "", fmt.Errorf("alert manager not initialised")
+	}
+
+	active := ms.alerts.GetActiveAlerts()
+	alertPayload := make([]map[string]interface{}, 0, len(active))
+	for _, alert := range active {
+		alertPayload = append(alertPayload, map[string]interface{}{
+			"id":         alert.ID,
+			"name":       alert.Name,
+			"severity":   alert.Severity,
+			"message":    fmt.Sprintf("%s (threshold %.2f)", alert.Description, alert.Threshold),
+			"labels":     alert.Labels,
+			"started_at": alert.StartTime,
+		})
+	}
+
+	snapshot := map[string]interface{}{
+		"node_id":      ms.nodeID,
+		"generated_at": time.Now().UTC(),
+		"alert_count":  len(active),
+		"alerts":       alertPayload,
+	}
+
+	encoded, err := json.MarshalIndent(snapshot, "", "  ")
+	if err != nil {
+		return "", fmt.Errorf("failed to marshal monitoring snapshot: %w", err)
+	}
+
+	return string(encoded), nil
+}

// AlertRule defines conditions for triggering alerts
type AlertRule struct {
	ID          string            `json:"id"`
	Name        string            `json:"name"`
	Description string            `json:"description"`
	Metric      string            `json:"metric"`
	Condition   string            `json:"condition"` // >, <, ==, !=, etc.
	Threshold   float64           `json:"threshold"`
	Duration    time.Duration     `json:"duration"`
	Severity    AlertSeverity     `json:"severity"`
	Labels      map[string]string `json:"labels"`
	Enabled     bool              `json:"enabled"`
}

// Alert represents an active or resolved alert
@@ -163,30 +241,30 @@ type HealthChecker struct {

// HealthCheck defines a single health check
type HealthCheck struct {
	Name        string                                 `json:"name"`
	Description string                                 `json:"description"`
	Checker     func(ctx context.Context) HealthResult `json:"-"`
	Interval    time.Duration                          `json:"interval"`
	Timeout     time.Duration                          `json:"timeout"`
	Enabled     bool                                   `json:"enabled"`
}

// HealthResult represents the result of a health check
type HealthResult struct {
	Healthy   bool                   `json:"healthy"`
	Message   string                 `json:"message"`
	Latency   time.Duration          `json:"latency"`
	Metadata  map[string]interface{} `json:"metadata"`
	Timestamp time.Time              `json:"timestamp"`
}

// SystemHealth represents the overall health of the storage system
type SystemHealth struct {
	OverallStatus HealthStatus            `json:"overall_status"`
	Components    map[string]HealthResult `json:"components"`
	LastUpdate    time.Time               `json:"last_update"`
	Uptime        time.Duration           `json:"uptime"`
	StartTime     time.Time               `json:"start_time"`
}

// HealthStatus represents system health status
@@ -200,82 +278,82 @@ const (

// PerformanceProfiler analyzes storage performance patterns
type PerformanceProfiler struct {
	mu                sync.RWMutex
	operationProfiles map[string]*OperationProfile
	resourceUsage     *ResourceUsage
	bottlenecks       []*Bottleneck
	recommendations   []*PerformanceRecommendation
}

// OperationProfile contains performance analysis for a specific operation type
type OperationProfile struct {
	Operation       string          `json:"operation"`
	TotalOperations int64           `json:"total_operations"`
	AverageLatency  time.Duration   `json:"average_latency"`
	P50Latency      time.Duration   `json:"p50_latency"`
	P95Latency      time.Duration   `json:"p95_latency"`
	P99Latency      time.Duration   `json:"p99_latency"`
	Throughput      float64         `json:"throughput"`
	ErrorRate       float64         `json:"error_rate"`
	LatencyHistory  []time.Duration `json:"-"`
	LastUpdated     time.Time       `json:"last_updated"`
}

// ResourceUsage tracks resource consumption
type ResourceUsage struct {
	CPUUsage    float64   `json:"cpu_usage"`
	MemoryUsage int64     `json:"memory_usage"`
	DiskUsage   int64     `json:"disk_usage"`
	NetworkIn   int64     `json:"network_in"`
	NetworkOut  int64     `json:"network_out"`
	OpenFiles   int       `json:"open_files"`
	Goroutines  int       `json:"goroutines"`
	LastUpdated time.Time `json:"last_updated"`
}

// Bottleneck represents a performance bottleneck
type Bottleneck struct {
	ID          string                 `json:"id"`
	Type        string                 `json:"type"` // cpu, memory, disk, network, etc.
	Component   string                 `json:"component"`
	Description string                 `json:"description"`
	Severity    AlertSeverity          `json:"severity"`
	Impact      float64                `json:"impact"`
	DetectedAt  time.Time              `json:"detected_at"`
	Metadata    map[string]interface{} `json:"metadata"`
}

// PerformanceRecommendation suggests optimizations
type PerformanceRecommendation struct {
	ID          string                 `json:"id"`
	Type        string                 `json:"type"`
	Title       string                 `json:"title"`
	Description string                 `json:"description"`
	Priority    int                    `json:"priority"`
	Impact      string                 `json:"impact"`
	Effort      string                 `json:"effort"`
	GeneratedAt time.Time              `json:"generated_at"`
	Metadata    map[string]interface{} `json:"metadata"`
}

// MonitoringEvent represents a monitoring system event
type MonitoringEvent struct {
	Type      string                 `json:"type"`
	Level     string                 `json:"level"`
	Message   string                 `json:"message"`
	Component string                 `json:"component"`
	NodeID    string                 `json:"node_id"`
	Timestamp time.Time              `json:"timestamp"`
	Metadata  map[string]interface{} `json:"metadata"`
}

// StructuredLogger provides structured logging for storage operations
type StructuredLogger struct {
	mu        sync.RWMutex
	level     LogLevel
	output    LogOutput
	formatter LogFormatter
	buffer    []*LogEntry
	maxBuffer int
}

@@ -303,27 +381,27 @@ type LogFormatter interface {

// LogEntry represents a single log entry
type LogEntry struct {
	Level     LogLevel               `json:"level"`
	Message   string                 `json:"message"`
	Component string                 `json:"component"`
	Operation string                 `json:"operation"`
	NodeID    string                 `json:"node_id"`
	Timestamp time.Time              `json:"timestamp"`
	Fields    map[string]interface{} `json:"fields"`
	Error     error                  `json:"error,omitempty"`
}

// NewMonitoringSystem creates a new monitoring system
func NewMonitoringSystem(nodeID string) *MonitoringSystem {
	ms := &MonitoringSystem{
		nodeID:              nodeID,
		metrics:             initializeMetrics(nodeID),
		alerts:              newAlertManager(),
		healthChecker:       newHealthChecker(),
		performanceProfiler: newPerformanceProfiler(),
		logger:              newStructuredLogger(),
		notifications:       make(chan *MonitoringEvent, 1000),
		stopCh:              make(chan struct{}),
	}

	// Start monitoring goroutines
@@ -592,21 +670,21 @@ func (ms *MonitoringSystem) analyzePerformance() {

func newAlertManager() *AlertManager {
	return &AlertManager{
		rules:        make([]*AlertRule, 0),
		activealerts: make(map[string]*Alert),
		notifiers:    make([]AlertNotifier, 0),
		history:      make([]*Alert, 0),
		maxHistory:   1000,
	}
}

func newHealthChecker() *HealthChecker {
	return &HealthChecker{
		checks: make(map[string]HealthCheck),
		status: &SystemHealth{
			OverallStatus: HealthHealthy,
			Components:    make(map[string]HealthResult),
			StartTime:     time.Now(),
		},
		checkInterval: 1 * time.Minute,
		timeout:       30 * time.Second,
@@ -664,8 +742,8 @@ func (ms *MonitoringSystem) GetMonitoringStats() (*MonitoringStats, error) {
	defer ms.mu.RUnlock()

	stats := &MonitoringStats{
		NodeID:       ms.nodeID,
		Timestamp:    time.Now(),
		HealthStatus: ms.healthChecker.status.OverallStatus,
		ActiveAlerts: len(ms.alerts.activealerts),
		Bottlenecks:  len(ms.performanceProfiler.bottlenecks),

@@ -3,9 +3,8 @@ package storage
import (
	"time"

-	"chorus/pkg/ucxl"
	"chorus/pkg/crypto"
	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
)

// DatabaseSchema defines the complete schema for encrypted context storage
|
||||
@@ -14,325 +13,325 @@ import (
|
||||
// ContextRecord represents the main context storage record
|
||||
type ContextRecord struct {
|
||||
// Primary identification
|
||||
ID string `json:"id" db:"id"` // Unique record ID
|
||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"` // UCXL address
|
||||
Path string `json:"path" db:"path"` // File system path
|
||||
PathHash string `json:"path_hash" db:"path_hash"` // Hash of path for indexing
|
||||
ID string `json:"id" db:"id"` // Unique record ID
|
||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"` // UCXL address
|
||||
Path string `json:"path" db:"path"` // File system path
|
||||
PathHash string `json:"path_hash" db:"path_hash"` // Hash of path for indexing
|
||||
|
||||
// Core context data
|
||||
Summary string `json:"summary" db:"summary"`
|
||||
Purpose string `json:"purpose" db:"purpose"`
|
||||
Technologies []byte `json:"technologies" db:"technologies"` // JSON array
|
||||
Tags []byte `json:"tags" db:"tags"` // JSON array
|
||||
Insights []byte `json:"insights" db:"insights"` // JSON array
|
||||
Summary string `json:"summary" db:"summary"`
|
||||
Purpose string `json:"purpose" db:"purpose"`
|
||||
Technologies []byte `json:"technologies" db:"technologies"` // JSON array
|
||||
Tags []byte `json:"tags" db:"tags"` // JSON array
|
||||
Insights []byte `json:"insights" db:"insights"` // JSON array
|
||||
|
||||
// Hierarchy control
|
||||
OverridesParent bool `json:"overrides_parent" db:"overrides_parent"`
|
||||
ContextSpecificity int `json:"context_specificity" db:"context_specificity"`
|
||||
AppliesToChildren bool `json:"applies_to_children" db:"applies_to_children"`
|
||||
OverridesParent bool `json:"overrides_parent" db:"overrides_parent"`
|
||||
ContextSpecificity int `json:"context_specificity" db:"context_specificity"`
|
||||
AppliesToChildren bool `json:"applies_to_children" db:"applies_to_children"`
|
||||
|
||||
// Quality metrics
|
||||
RAGConfidence float64 `json:"rag_confidence" db:"rag_confidence"`
|
||||
StalenessScore float64 `json:"staleness_score" db:"staleness_score"`
|
||||
ValidationScore float64 `json:"validation_score" db:"validation_score"`
|
||||
RAGConfidence float64 `json:"rag_confidence" db:"rag_confidence"`
|
||||
StalenessScore float64 `json:"staleness_score" db:"staleness_score"`
|
||||
ValidationScore float64 `json:"validation_score" db:"validation_score"`
|
||||
|
||||
// Versioning
|
||||
Version int64 `json:"version" db:"version"`
|
||||
ParentVersion *int64 `json:"parent_version" db:"parent_version"`
|
||||
ContextHash string `json:"context_hash" db:"context_hash"`
|
||||
Version int64 `json:"version" db:"version"`
|
||||
ParentVersion *int64 `json:"parent_version" db:"parent_version"`
|
||||
ContextHash string `json:"context_hash" db:"context_hash"`
|
||||
|
||||
// Temporal metadata
|
||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
||||
GeneratedAt time.Time `json:"generated_at" db:"generated_at"`
|
||||
LastAccessedAt *time.Time `json:"last_accessed_at" db:"last_accessed_at"`
|
||||
ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
|
||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
||||
GeneratedAt time.Time `json:"generated_at" db:"generated_at"`
|
||||
LastAccessedAt *time.Time `json:"last_accessed_at" db:"last_accessed_at"`
|
||||
ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
|
||||
|
||||
// Storage metadata
|
||||
StorageType string `json:"storage_type" db:"storage_type"` // local, distributed, hybrid
|
||||
CompressionType string `json:"compression_type" db:"compression_type"`
|
||||
EncryptionLevel int `json:"encryption_level" db:"encryption_level"`
|
||||
ReplicationFactor int `json:"replication_factor" db:"replication_factor"`
|
||||
Checksum string `json:"checksum" db:"checksum"`
|
||||
DataSize int64 `json:"data_size" db:"data_size"`
|
||||
CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
|
||||
StorageType string `json:"storage_type" db:"storage_type"` // local, distributed, hybrid
|
||||
CompressionType string `json:"compression_type" db:"compression_type"`
|
||||
EncryptionLevel int `json:"encryption_level" db:"encryption_level"`
|
||||
ReplicationFactor int `json:"replication_factor" db:"replication_factor"`
|
||||
Checksum string `json:"checksum" db:"checksum"`
|
||||
DataSize int64 `json:"data_size" db:"data_size"`
|
||||
CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
|
||||
}

// EncryptedContextRecord represents role-based encrypted context storage
type EncryptedContextRecord struct {
	// Primary keys
	ID          string       `json:"id" db:"id"`
	ContextID   string       `json:"context_id" db:"context_id"` // FK to ContextRecord
	Role        string       `json:"role" db:"role"`
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`

	// Encryption details
	AccessLevel    slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`
	EncryptedData  []byte                       `json:"encrypted_data" db:"encrypted_data"`
	KeyFingerprint string                       `json:"key_fingerprint" db:"key_fingerprint"`
	EncryptionAlgo string                       `json:"encryption_algo" db:"encryption_algo"`
	KeyVersion     int                          `json:"key_version" db:"key_version"`

	// Data integrity
	DataChecksum   string `json:"data_checksum" db:"data_checksum"`
	EncryptionHash string `json:"encryption_hash" db:"encryption_hash"`

	// Temporal data
	CreatedAt       time.Time  `json:"created_at" db:"created_at"`
	UpdatedAt       time.Time  `json:"updated_at" db:"updated_at"`
	LastDecryptedAt *time.Time `json:"last_decrypted_at" db:"last_decrypted_at"`
	ExpiresAt       *time.Time `json:"expires_at" db:"expires_at"`

	// Access tracking
	AccessCount    int64  `json:"access_count" db:"access_count"`
	LastAccessedBy string `json:"last_accessed_by" db:"last_accessed_by"`
	AccessHistory  []byte `json:"access_history" db:"access_history"` // JSON access log
}

// ContextHierarchyRecord represents hierarchical relationships between contexts
type ContextHierarchyRecord struct {
	ID            string       `json:"id" db:"id"`
	ParentAddress ucxl.Address `json:"parent_address" db:"parent_address"`
	ChildAddress  ucxl.Address `json:"child_address" db:"child_address"`
	ParentPath    string       `json:"parent_path" db:"parent_path"`
	ChildPath     string       `json:"child_path" db:"child_path"`

	// Relationship metadata
	RelationshipType  string  `json:"relationship_type" db:"relationship_type"` // parent, sibling, dependency
	InheritanceWeight float64 `json:"inheritance_weight" db:"inheritance_weight"`
	OverrideStrength  int     `json:"override_strength" db:"override_strength"`
	Distance          int     `json:"distance" db:"distance"` // Hierarchy depth distance

	// Temporal tracking
	CreatedAt      time.Time  `json:"created_at" db:"created_at"`
	ValidatedAt    time.Time  `json:"validated_at" db:"validated_at"`
	LastResolvedAt *time.Time `json:"last_resolved_at" db:"last_resolved_at"`

	// Resolution statistics
	ResolutionCount int64   `json:"resolution_count" db:"resolution_count"`
	ResolutionTime  float64 `json:"resolution_time" db:"resolution_time"` // Average ms
}
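`InheritanceWeight` and `Distance` together suggest that a parent's influence attenuates as hierarchy depth grows. One simple decay model (an illustrative assumption — the record does not prescribe any particular function) halves the weight per hop:

```go
package main

import "fmt"

// effectiveWeight decays a hierarchy record's InheritanceWeight by its
// Distance, halving per hop. The decay factor is a hypothetical choice
// used only to illustrate how the two fields might combine at resolution.
func effectiveWeight(inheritanceWeight float64, distance int) float64 {
	w := inheritanceWeight
	for i := 0; i < distance; i++ {
		w *= 0.5
	}
	return w
}

func main() {
	fmt.Println(effectiveWeight(1.0, 2)) // 0.25
}
```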

// DecisionHopRecord represents temporal decision analysis storage
type DecisionHopRecord struct {
	// Primary identification
	ID             string       `json:"id" db:"id"`
	DecisionID     string       `json:"decision_id" db:"decision_id"`
	UCXLAddress    ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
	ContextVersion int64        `json:"context_version" db:"context_version"`

	// Decision metadata
	ChangeReason      string  `json:"change_reason" db:"change_reason"`
	DecisionMaker     string  `json:"decision_maker" db:"decision_maker"`
	DecisionRationale string  `json:"decision_rationale" db:"decision_rationale"`
	ImpactScope       string  `json:"impact_scope" db:"impact_scope"`
	ConfidenceLevel   float64 `json:"confidence_level" db:"confidence_level"`

	// Context evolution
	PreviousHash   string  `json:"previous_hash" db:"previous_hash"`
	CurrentHash    string  `json:"current_hash" db:"current_hash"`
	ContextDelta   []byte  `json:"context_delta" db:"context_delta"` // JSON diff
	StalenessScore float64 `json:"staleness_score" db:"staleness_score"`

	// Temporal data
	Timestamp            time.Time  `json:"timestamp" db:"timestamp"`
	PreviousDecisionTime *time.Time `json:"previous_decision_time" db:"previous_decision_time"`
	ProcessingTime       float64    `json:"processing_time" db:"processing_time"` // ms

	// External references
	ExternalRefs []byte `json:"external_refs" db:"external_refs"` // JSON array
	CommitHash   string `json:"commit_hash" db:"commit_hash"`
	TicketID     string `json:"ticket_id" db:"ticket_id"`
}

// DecisionInfluenceRecord represents decision influence relationships
type DecisionInfluenceRecord struct {
	ID               string       `json:"id" db:"id"`
	SourceDecisionID string       `json:"source_decision_id" db:"source_decision_id"`
	TargetDecisionID string       `json:"target_decision_id" db:"target_decision_id"`
	SourceAddress    ucxl.Address `json:"source_address" db:"source_address"`
	TargetAddress    ucxl.Address `json:"target_address" db:"target_address"`

	// Influence metrics
	InfluenceStrength float64 `json:"influence_strength" db:"influence_strength"`
	InfluenceType     string  `json:"influence_type" db:"influence_type"` // direct, indirect, cascading
	PropagationDelay  float64 `json:"propagation_delay" db:"propagation_delay"` // hours
	HopDistance       int     `json:"hop_distance" db:"hop_distance"`

	// Path analysis
	ShortestPath   []byte  `json:"shortest_path" db:"shortest_path"` // JSON path array
	AlternatePaths []byte  `json:"alternate_paths" db:"alternate_paths"` // JSON paths
	PathConfidence float64 `json:"path_confidence" db:"path_confidence"`

	// Temporal tracking
	CreatedAt      time.Time  `json:"created_at" db:"created_at"`
	LastAnalyzedAt time.Time  `json:"last_analyzed_at" db:"last_analyzed_at"`
	ValidatedAt    *time.Time `json:"validated_at" db:"validated_at"`
}

// AccessControlRecord represents role-based access control metadata
type AccessControlRecord struct {
	ID          string       `json:"id" db:"id"`
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
	Role        string       `json:"role" db:"role"`
	Permissions []byte       `json:"permissions" db:"permissions"` // JSON permissions array

	// Access levels
	ReadAccess   bool                         `json:"read_access" db:"read_access"`
	WriteAccess  bool                         `json:"write_access" db:"write_access"`
	DeleteAccess bool                         `json:"delete_access" db:"delete_access"`
	AdminAccess  bool                         `json:"admin_access" db:"admin_access"`
	AccessLevel  slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`

	// Constraints
	TimeConstraints []byte `json:"time_constraints" db:"time_constraints"` // JSON time rules
	IPConstraints   []byte `json:"ip_constraints" db:"ip_constraints"` // JSON IP rules
	ContextFilters  []byte `json:"context_filters" db:"context_filters"` // JSON filter rules

	// Audit trail
	CreatedAt time.Time  `json:"created_at" db:"created_at"`
	CreatedBy string     `json:"created_by" db:"created_by"`
	UpdatedAt time.Time  `json:"updated_at" db:"updated_at"`
	UpdatedBy string     `json:"updated_by" db:"updated_by"`
	ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
}

// ContextIndexRecord represents search index entries for contexts
type ContextIndexRecord struct {
	ID          string       `json:"id" db:"id"`
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
	IndexName   string       `json:"index_name" db:"index_name"`

	// Indexed content
	Tokens         []byte `json:"tokens" db:"tokens"` // JSON token array
	NGrams         []byte `json:"ngrams" db:"ngrams"` // JSON n-gram array
	SemanticVector []byte `json:"semantic_vector" db:"semantic_vector"` // Embedding vector

	// Search metadata
	IndexWeight float64 `json:"index_weight" db:"index_weight"`
	BoostFactor float64 `json:"boost_factor" db:"boost_factor"`
	Language    string  `json:"language" db:"language"`
	ContentType string  `json:"content_type" db:"content_type"`

	// Quality metrics
	RelevanceScore  float64 `json:"relevance_score" db:"relevance_score"`
	FreshnessScore  float64 `json:"freshness_score" db:"freshness_score"`
	PopularityScore float64 `json:"popularity_score" db:"popularity_score"`

	// Temporal tracking
	CreatedAt     time.Time `json:"created_at" db:"created_at"`
	UpdatedAt     time.Time `json:"updated_at" db:"updated_at"`
	LastReindexed time.Time `json:"last_reindexed" db:"last_reindexed"`
}

// CacheEntryRecord represents cached context data
type CacheEntryRecord struct {
	ID          string       `json:"id" db:"id"`
	CacheKey    string       `json:"cache_key" db:"cache_key"`
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
	Role        string       `json:"role" db:"role"`

	// Cached data
	CachedData     []byte `json:"cached_data" db:"cached_data"`
	DataHash       string `json:"data_hash" db:"data_hash"`
	Compressed     bool   `json:"compressed" db:"compressed"`
	OriginalSize   int64  `json:"original_size" db:"original_size"`
	CompressedSize int64  `json:"compressed_size" db:"compressed_size"`

	// Cache metadata
	TTL         int64 `json:"ttl" db:"ttl"` // seconds
	Priority    int   `json:"priority" db:"priority"`
	AccessCount int64 `json:"access_count" db:"access_count"`
	HitCount    int64 `json:"hit_count" db:"hit_count"`

	// Temporal data
	CreatedAt      time.Time  `json:"created_at" db:"created_at"`
	LastAccessedAt time.Time  `json:"last_accessed_at" db:"last_accessed_at"`
	LastHitAt      *time.Time `json:"last_hit_at" db:"last_hit_at"`
	ExpiresAt      time.Time  `json:"expires_at" db:"expires_at"`
}

// BackupRecord represents backup metadata
type BackupRecord struct {
	ID          string `json:"id" db:"id"`
	BackupID    string `json:"backup_id" db:"backup_id"`
	Name        string `json:"name" db:"name"`
	Destination string `json:"destination" db:"destination"`

	// Backup content
	ContextCount   int64  `json:"context_count" db:"context_count"`
	DataSize       int64  `json:"data_size" db:"data_size"`
	CompressedSize int64  `json:"compressed_size" db:"compressed_size"`
	Checksum       string `json:"checksum" db:"checksum"`

	// Backup metadata
	IncludesIndexes bool   `json:"includes_indexes" db:"includes_indexes"`
	IncludesCache   bool   `json:"includes_cache" db:"includes_cache"`
	Encrypted       bool   `json:"encrypted" db:"encrypted"`
	Incremental     bool   `json:"incremental" db:"incremental"`
	ParentBackupID  string `json:"parent_backup_id" db:"parent_backup_id"`

	// Status tracking
	Status       BackupStatus `json:"status" db:"status"`
	Progress     float64      `json:"progress" db:"progress"`
	ErrorMessage string       `json:"error_message" db:"error_message"`

	// Temporal data
	CreatedAt      time.Time  `json:"created_at" db:"created_at"`
	StartedAt      *time.Time `json:"started_at" db:"started_at"`
	CompletedAt    *time.Time `json:"completed_at" db:"completed_at"`
	RetentionUntil time.Time  `json:"retention_until" db:"retention_until"`
}

// MetricsRecord represents storage performance metrics
type MetricsRecord struct {
	ID         string `json:"id" db:"id"`
	MetricType string `json:"metric_type" db:"metric_type"` // storage, encryption, cache, etc.
	NodeID     string `json:"node_id" db:"node_id"`

	// Metric data
	MetricName  string  `json:"metric_name" db:"metric_name"`
	MetricValue float64 `json:"metric_value" db:"metric_value"`
	MetricUnit  string  `json:"metric_unit" db:"metric_unit"`
	Tags        []byte  `json:"tags" db:"tags"` // JSON tag object

	// Aggregation data
	AggregationType string `json:"aggregation_type" db:"aggregation_type"` // avg, sum, count, etc.
	TimeWindow      int64  `json:"time_window" db:"time_window"` // seconds
	SampleCount     int64  `json:"sample_count" db:"sample_count"`

	// Temporal tracking
	Timestamp time.Time `json:"timestamp" db:"timestamp"`
	CreatedAt time.Time `json:"created_at" db:"created_at"`
}

// ContextEvolutionRecord tracks how contexts evolve over time
type ContextEvolutionRecord struct {
	ID          string       `json:"id" db:"id"`
	UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
	FromVersion int64        `json:"from_version" db:"from_version"`
	ToVersion   int64        `json:"to_version" db:"to_version"`

	// Evolution analysis
	EvolutionType    string  `json:"evolution_type" db:"evolution_type"` // enhancement, refactor, fix, etc.
	SimilarityScore  float64 `json:"similarity_score" db:"similarity_score"`
	ChangesMagnitude float64 `json:"changes_magnitude" db:"changes_magnitude"`
	SemanticDrift    float64 `json:"semantic_drift" db:"semantic_drift"`

	// Change details
	ChangedFields  []byte `json:"changed_fields" db:"changed_fields"` // JSON array
	FieldDeltas    []byte `json:"field_deltas" db:"field_deltas"` // JSON delta object
	ImpactAnalysis []byte `json:"impact_analysis" db:"impact_analysis"` // JSON analysis

	// Quality assessment
	QualityImprovement float64 `json:"quality_improvement" db:"quality_improvement"`
	ConfidenceChange   float64 `json:"confidence_change" db:"confidence_change"`
	ValidationPassed   bool    `json:"validation_passed" db:"validation_passed"`

	// Temporal tracking
	EvolutionTime  time.Time `json:"evolution_time" db:"evolution_time"`
	AnalyzedAt     time.Time `json:"analyzed_at" db:"analyzed_at"`
	ProcessingTime float64   `json:"processing_time" db:"processing_time"` // ms
}

// Schema validation and creation functions

@@ -283,32 +283,42 @@ type IndexStatistics struct {

// BackupConfig represents backup configuration
type BackupConfig struct {
	Name           string                 `json:"name"`             // Backup name
	Destination    string                 `json:"destination"`      // Backup destination
	IncludeIndexes bool                   `json:"include_indexes"`  // Include search indexes
	IncludeCache   bool                   `json:"include_cache"`    // Include cache data
	Compression    bool                   `json:"compression"`      // Enable compression
	Encryption     bool                   `json:"encryption"`       // Enable encryption
	EncryptionKey  string                 `json:"encryption_key"`   // Encryption key
	Incremental    bool                   `json:"incremental"`      // Incremental backup
	ParentBackupID string                 `json:"parent_backup_id"` // Parent backup reference
	Retention      time.Duration          `json:"retention"`        // Backup retention period
	Metadata       map[string]interface{} `json:"metadata"`         // Additional metadata
}

// BackupInfo represents information about a backup
type BackupInfo struct {
	ID              string                 `json:"id"`               // Backup ID
	BackupID        string                 `json:"backup_id"`        // Legacy identifier
	Name            string                 `json:"name"`             // Backup name
	Destination     string                 `json:"destination"`      // Destination path
	CreatedAt       time.Time              `json:"created_at"`       // Creation time
	Size            int64                  `json:"size"`             // Backup size
	CompressedSize  int64                  `json:"compressed_size"`  // Compressed size
	DataSize        int64                  `json:"data_size"`        // Total data size
	ContextCount    int64                  `json:"context_count"`    // Number of contexts
	Encrypted       bool                   `json:"encrypted"`        // Whether encrypted
	Incremental     bool                   `json:"incremental"`      // Whether incremental
	ParentBackupID  string                 `json:"parent_backup_id"` // Parent backup for incremental
	IncludesIndexes bool                   `json:"includes_indexes"` // Include indexes
	IncludesCache   bool                   `json:"includes_cache"`   // Include cache data
	Checksum        string                 `json:"checksum"`         // Backup checksum
	Status          BackupStatus           `json:"status"`           // Backup status
	Progress        float64                `json:"progress"`         // Completion progress 0-1
	ErrorMessage    string                 `json:"error_message"`    // Last error message
	RetentionUntil  time.Time              `json:"retention_until"`  // Retention deadline
	CompletedAt     *time.Time             `json:"completed_at"`     // Completion time
	Metadata        map[string]interface{} `json:"metadata"`         // Additional metadata
}

// BackupStatus represents backup status

@@ -5,7 +5,9 @@ import (
	"fmt"
	"time"

	slurpContext "chorus/pkg/slurp/context"
	"chorus/pkg/slurp/storage"
	"chorus/pkg/ucxl"
)

// TemporalGraphFactory creates and configures temporal graph components
@@ -17,9 +19,9 @@ type TemporalGraphFactory struct {

// TemporalConfig represents configuration for the temporal graph system
type TemporalConfig struct {
	// Core graph settings
	MaxDepth         int               `json:"max_depth"`
	StalenessWeights *StalenessWeights `json:"staleness_weights"`
	CacheTimeout     time.Duration     `json:"cache_timeout"`

	// Analysis settings
	InfluenceAnalysisConfig *InfluenceAnalysisConfig `json:"influence_analysis_config"`
@@ -27,34 +29,34 @@ type TemporalConfig struct {
	QueryConfig *QueryConfig `json:"query_config"`

	// Persistence settings
	PersistenceConfig *PersistenceConfig `json:"persistence_config"`

	// Performance settings
	EnableCaching     bool `json:"enable_caching"`
	EnableCompression bool `json:"enable_compression"`
	EnableMetrics     bool `json:"enable_metrics"`

	// Debug settings
	EnableDebugLogging bool `json:"enable_debug_logging"`
	EnableValidation   bool `json:"enable_validation"`
}

// InfluenceAnalysisConfig represents configuration for influence analysis
type InfluenceAnalysisConfig struct {
	DampingFactor            float64       `json:"damping_factor"`
	MaxIterations            int           `json:"max_iterations"`
	ConvergenceThreshold     float64       `json:"convergence_threshold"`
	CacheValidDuration       time.Duration `json:"cache_valid_duration"`
	EnableCentralityMetrics  bool          `json:"enable_centrality_metrics"`
	EnableCommunityDetection bool          `json:"enable_community_detection"`
}
|
||||
|
||||
// NavigationConfig represents configuration for decision navigation
|
||||
type NavigationConfig struct {
|
||||
MaxNavigationHistory int `json:"max_navigation_history"`
|
||||
BookmarkRetention time.Duration `json:"bookmark_retention"`
|
||||
SessionTimeout time.Duration `json:"session_timeout"`
|
||||
EnablePathCaching bool `json:"enable_path_caching"`
|
||||
MaxNavigationHistory int `json:"max_navigation_history"`
|
||||
BookmarkRetention time.Duration `json:"bookmark_retention"`
|
||||
SessionTimeout time.Duration `json:"session_timeout"`
|
||||
EnablePathCaching bool `json:"enable_path_caching"`
|
||||
}
|
||||
|
||||
// QueryConfig represents configuration for decision-hop queries
|
||||
@@ -68,17 +70,17 @@ type QueryConfig struct {

 // TemporalGraphSystem represents the complete temporal graph system
 type TemporalGraphSystem struct {
-	Graph TemporalGraph
-	Navigator DecisionNavigator
-	InfluenceAnalyzer InfluenceAnalyzer
-	StalenessDetector StalenessDetector
-	ConflictDetector ConflictDetector
-	PatternAnalyzer PatternAnalyzer
-	VersionManager VersionManager
-	HistoryManager HistoryManager
-	MetricsCollector MetricsCollector
-	QuerySystem *querySystemImpl
-	PersistenceManager *persistenceManagerImpl
+	Graph              TemporalGraph
+	Navigator          DecisionNavigator
+	InfluenceAnalyzer  InfluenceAnalyzer
+	StalenessDetector  StalenessDetector
+	ConflictDetector   ConflictDetector
+	PatternAnalyzer    PatternAnalyzer
+	VersionManager     VersionManager
+	HistoryManager     HistoryManager
+	MetricsCollector   MetricsCollector
+	QuerySystem        *querySystemImpl
+	PersistenceManager *persistenceManagerImpl
 }

 // NewTemporalGraphFactory creates a new temporal graph factory
@@ -135,17 +137,17 @@ func (tgf *TemporalGraphFactory) CreateTemporalGraphSystem(
 	metricsCollector := NewMetricsCollector(graph)

 	system := &TemporalGraphSystem{
-		Graph: graph,
-		Navigator: navigator,
-		InfluenceAnalyzer: analyzer,
-		StalenessDetector: detector,
-		ConflictDetector: conflictDetector,
-		PatternAnalyzer: patternAnalyzer,
-		VersionManager: versionManager,
-		HistoryManager: historyManager,
-		MetricsCollector: metricsCollector,
-		QuerySystem: querySystem,
-		PersistenceManager: persistenceManager,
+		Graph:              graph,
+		Navigator:          navigator,
+		InfluenceAnalyzer:  analyzer,
+		StalenessDetector:  detector,
+		ConflictDetector:   conflictDetector,
+		PatternAnalyzer:    patternAnalyzer,
+		VersionManager:     versionManager,
+		HistoryManager:     historyManager,
+		MetricsCollector:   metricsCollector,
+		QuerySystem:        querySystem,
+		PersistenceManager: persistenceManager,
 	}

 	return system, nil
@@ -190,11 +192,11 @@ func DefaultTemporalConfig() *TemporalConfig {
 		CacheTimeout: time.Minute * 15,

 		InfluenceAnalysisConfig: &InfluenceAnalysisConfig{
-			DampingFactor: 0.85,
-			MaxIterations: 100,
-			ConvergenceThreshold: 1e-6,
-			CacheValidDuration: time.Minute * 30,
-			EnableCentralityMetrics: true,
+			DampingFactor:            0.85,
+			MaxIterations:            100,
+			ConvergenceThreshold:     1e-6,
+			CacheValidDuration:       time.Minute * 30,
+			EnableCentralityMetrics:  true,
 			EnableCommunityDetection: true,
 		},

@@ -214,24 +216,24 @@ func DefaultTemporalConfig() *TemporalConfig {
 		},

 		PersistenceConfig: &PersistenceConfig{
-			EnableLocalStorage: true,
-			EnableDistributedStorage: true,
-			EnableEncryption: true,
-			EncryptionRoles: []string{"analyst", "architect", "developer"},
-			SyncInterval: time.Minute * 15,
+			EnableLocalStorage:       true,
+			EnableDistributedStorage: true,
+			EnableEncryption:         true,
+			EncryptionRoles:          []string{"analyst", "architect", "developer"},
+			SyncInterval:             time.Minute * 15,
 			ConflictResolutionStrategy: "latest_wins",
-			EnableAutoSync: true,
-			MaxSyncRetries: 3,
-			BatchSize: 50,
-			FlushInterval: time.Second * 30,
-			EnableWriteBuffer: true,
-			EnableAutoBackup: true,
-			BackupInterval: time.Hour * 6,
-			RetainBackupCount: 10,
-			KeyPrefix: "temporal_graph",
-			NodeKeyPattern: "temporal_graph/nodes/%s",
-			GraphKeyPattern: "temporal_graph/graph/%s",
-			MetadataKeyPattern: "temporal_graph/metadata/%s",
+			EnableAutoSync:     true,
+			MaxSyncRetries:     3,
+			BatchSize:          50,
+			FlushInterval:      time.Second * 30,
+			EnableWriteBuffer:  true,
+			EnableAutoBackup:   true,
+			BackupInterval:     time.Hour * 6,
+			RetainBackupCount:  10,
+			KeyPrefix:          "temporal_graph",
+			NodeKeyPattern:     "temporal_graph/nodes/%s",
+			GraphKeyPattern:    "temporal_graph/graph/%s",
+			MetadataKeyPattern: "temporal_graph/metadata/%s",
 		},

 		EnableCaching: true,
@@ -308,11 +310,11 @@ func (cd *conflictDetectorImpl) ValidateDecisionSequence(ctx context.Context, ad
 func (cd *conflictDetectorImpl) ResolveTemporalConflict(ctx context.Context, conflict *TemporalConflict) (*ConflictResolution, error) {
 	// Implementation would resolve specific temporal conflicts
 	return &ConflictResolution{
-		ConflictID: conflict.ID,
-		Resolution: "auto_resolved",
-		ResolvedAt: time.Now(),
-		ResolvedBy: "system",
-		Confidence: 0.8,
+		ConflictID:       conflict.ID,
+		ResolutionMethod: "auto_resolved",
+		ResolvedAt:       time.Now(),
+		ResolvedBy:       "system",
+		Confidence:       0.8,
 	}, nil
 }

@@ -539,13 +541,13 @@ func (mc *metricsCollectorImpl) GetInfluenceMetrics(ctx context.Context) (*Influ
 func (mc *metricsCollectorImpl) GetQualityMetrics(ctx context.Context) (*QualityMetrics, error) {
 	// Implementation would get temporal data quality metrics
 	return &QualityMetrics{
-		DataCompleteness: 1.0,
-		DataConsistency: 1.0,
-		DataAccuracy: 1.0,
-		AverageConfidence: 0.8,
-		ConflictsDetected: 0,
-		ConflictsResolved: 0,
-		LastQualityCheck: time.Now(),
+		DataCompleteness:  1.0,
+		DataConsistency:   1.0,
+		DataAccuracy:      1.0,
+		AverageConfidence: 0.8,
+		ConflictsDetected: 0,
+		ConflictsResolved: 0,
+		LastQualityCheck:  time.Now(),
 	}, nil
 }

@@ -9,9 +9,9 @@ import (
 	"sync"
 	"time"

-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
 	"chorus/pkg/slurp/storage"
+	"chorus/pkg/ucxl"
 )

 // temporalGraphImpl implements the TemporalGraph interface
@@ -22,23 +22,23 @@ type temporalGraphImpl struct {
 	storage storage.ContextStore

 	// In-memory graph structures for fast access
-	nodes map[string]*TemporalNode // nodeID -> TemporalNode
-	addressToNodes map[string][]*TemporalNode // address -> list of temporal nodes
-	influences map[string][]string // nodeID -> list of influenced nodeIDs
-	influencedBy map[string][]string // nodeID -> list of influencer nodeIDs
+	nodes          map[string]*TemporalNode   // nodeID -> TemporalNode
+	addressToNodes map[string][]*TemporalNode // address -> list of temporal nodes
+	influences     map[string][]string        // nodeID -> list of influenced nodeIDs
+	influencedBy   map[string][]string        // nodeID -> list of influencer nodeIDs

 	// Decision tracking
-	decisions map[string]*DecisionMetadata // decisionID -> DecisionMetadata
-	decisionToNodes map[string][]*TemporalNode // decisionID -> list of affected nodes
+	decisions       map[string]*DecisionMetadata // decisionID -> DecisionMetadata
+	decisionToNodes map[string][]*TemporalNode   // decisionID -> list of affected nodes

 	// Performance optimization
-	pathCache map[string][]*DecisionStep // cache for decision paths
-	metricsCache map[string]interface{} // cache for expensive metrics
-	cacheTimeout time.Duration
-	lastCacheClean time.Time
+	pathCache      map[string][]*DecisionStep // cache for decision paths
+	metricsCache   map[string]interface{}     // cache for expensive metrics
+	cacheTimeout   time.Duration
+	lastCacheClean time.Time

 	// Configuration
-	maxDepth int // Maximum depth for path finding
+	maxDepth        int // Maximum depth for path finding
 	stalenessWeight *StalenessWeights
 }

@@ -80,24 +80,24 @@ func (tg *temporalGraphImpl) CreateInitialContext(ctx context.Context, address u

 	// Create temporal node
 	temporalNode := &TemporalNode{
-		ID: nodeID,
-		UCXLAddress: address,
-		Version: 1,
-		Context: contextData,
-		Timestamp: time.Now(),
-		DecisionID: fmt.Sprintf("initial-%s", creator),
-		ChangeReason: ReasonInitialCreation,
-		ParentNode: nil,
-		ContextHash: tg.calculateContextHash(contextData),
-		Confidence: contextData.RAGConfidence,
-		Staleness: 0.0,
-		Influences: make([]ucxl.Address, 0),
-		InfluencedBy: make([]ucxl.Address, 0),
-		ValidatedBy: []string{creator},
+		ID:            nodeID,
+		UCXLAddress:   address,
+		Version:       1,
+		Context:       contextData,
+		Timestamp:     time.Now(),
+		DecisionID:    fmt.Sprintf("initial-%s", creator),
+		ChangeReason:  ReasonInitialCreation,
+		ParentNode:    nil,
+		ContextHash:   tg.calculateContextHash(contextData),
+		Confidence:    contextData.RAGConfidence,
+		Staleness:     0.0,
+		Influences:    make([]ucxl.Address, 0),
+		InfluencedBy:  make([]ucxl.Address, 0),
+		ValidatedBy:   []string{creator},
 		LastValidated: time.Now(),
-		ImpactScope: ImpactLocal,
-		PropagatedTo: make([]ucxl.Address, 0),
-		Metadata: make(map[string]interface{}),
+		ImpactScope:   ImpactLocal,
+		PropagatedTo:  make([]ucxl.Address, 0),
+		Metadata:      make(map[string]interface{}),
 	}

 	// Store in memory structures
@@ -111,15 +111,15 @@ func (tg *temporalGraphImpl) CreateInitialContext(ctx context.Context, address u

 	// Store decision metadata
 	decisionMeta := &DecisionMetadata{
-		ID: temporalNode.DecisionID,
-		Maker: creator,
-		Rationale: "Initial context creation",
-		Scope: ImpactLocal,
-		ConfidenceLevel: contextData.RAGConfidence,
-		ExternalRefs: make([]string, 0),
-		CreatedAt: time.Now(),
+		ID:                   temporalNode.DecisionID,
+		Maker:                creator,
+		Rationale:            "Initial context creation",
+		Scope:                ImpactLocal,
+		ConfidenceLevel:      contextData.RAGConfidence,
+		ExternalRefs:         make([]string, 0),
+		CreatedAt:            time.Now(),
 		ImplementationStatus: "complete",
-		Metadata: make(map[string]interface{}),
+		Metadata:             make(map[string]interface{}),
 	}
 	tg.decisions[temporalNode.DecisionID] = decisionMeta
 	tg.decisionToNodes[temporalNode.DecisionID] = []*TemporalNode{temporalNode}
@@ -156,24 +156,24 @@ func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Add

 	// Create new temporal node
 	temporalNode := &TemporalNode{
-		ID: nodeID,
-		UCXLAddress: address,
-		Version: newVersion,
-		Context: newContext,
-		Timestamp: time.Now(),
-		DecisionID: decision.ID,
-		ChangeReason: reason,
-		ParentNode: &latestNode.ID,
-		ContextHash: tg.calculateContextHash(newContext),
-		Confidence: newContext.RAGConfidence,
-		Staleness: 0.0, // New version, not stale
-		Influences: make([]ucxl.Address, 0),
-		InfluencedBy: make([]ucxl.Address, 0),
-		ValidatedBy: []string{decision.Maker},
+		ID:            nodeID,
+		UCXLAddress:   address,
+		Version:       newVersion,
+		Context:       newContext,
+		Timestamp:     time.Now(),
+		DecisionID:    decision.ID,
+		ChangeReason:  reason,
+		ParentNode:    &latestNode.ID,
+		ContextHash:   tg.calculateContextHash(newContext),
+		Confidence:    newContext.RAGConfidence,
+		Staleness:     0.0, // New version, not stale
+		Influences:    make([]ucxl.Address, 0),
+		InfluencedBy:  make([]ucxl.Address, 0),
+		ValidatedBy:   []string{decision.Maker},
 		LastValidated: time.Now(),
-		ImpactScope: decision.Scope,
-		PropagatedTo: make([]ucxl.Address, 0),
-		Metadata: make(map[string]interface{}),
+		ImpactScope:   decision.Scope,
+		PropagatedTo:  make([]ucxl.Address, 0),
+		Metadata:      make(map[string]interface{}),
 	}

 	// Copy influence relationships from parent
@@ -534,7 +534,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
 		return nil, fmt.Errorf("from node not found: %w", err)
 	}

-	toNode, err := tg.getLatestNodeUnsafe(to)
+	_, err = tg.getLatestNodeUnsafe(to)
 	if err != nil {
 		return nil, fmt.Errorf("to node not found: %w", err)
 	}
@@ -620,8 +620,8 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
 		MostInfluentialDecisions: make([]*InfluentialDecision, 0),
 		DecisionClusters:         make([]*DecisionCluster, 0),
 		Patterns:                 make([]*DecisionPattern, 0),
-		Anomalies: make([]*AnomalousDecision, 0),
-		AnalyzedAt: time.Now(),
+		Anomalies:                make([]*AnomalousDecision, 0),
+		AnalyzedAt:               time.Now(),
 	}

 	// Calculate decision velocity
@@ -652,18 +652,18 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
 	// Find most influential decisions (simplified)
 	influenceScores := make(map[string]float64)
 	for nodeID, node := range tg.nodes {
-		score := float64(len(tg.influences[nodeID])) * 1.0 // Direct influences
+		score := float64(len(tg.influences[nodeID])) * 1.0   // Direct influences
 		score += float64(len(tg.influencedBy[nodeID])) * 0.5 // Being influenced
 		influenceScores[nodeID] = score

 		if score > 3.0 { // Threshold for "influential"
 			influential := &InfluentialDecision{
-				Address: node.UCXLAddress,
-				DecisionHop: node.Version,
-				InfluenceScore: score,
-				AffectedContexts: node.Influences,
-				DecisionMetadata: tg.decisions[node.DecisionID],
-				InfluenceReasons: []string{"high_connectivity", "multiple_influences"},
+				Address:          node.UCXLAddress,
+				DecisionHop:      node.Version,
+				InfluenceScore:   score,
+				AffectedContexts: node.Influences,
+				DecisionMetadata: tg.decisions[node.DecisionID],
+				InfluenceReasons: []string{"high_connectivity", "multiple_influences"},
 			}
 			analysis.MostInfluentialDecisions = append(analysis.MostInfluentialDecisions, influential)
 		}
@@ -869,8 +869,8 @@ func (tg *temporalGraphImpl) calculateStaleness(node *TemporalNode, changedNode

 	return math.Min(
 		tg.stalenessWeight.TimeWeight*timeWeight+
-		tg.stalenessWeight.InfluenceWeight*influenceWeight+
-		tg.stalenessWeight.ImportanceWeight*impactWeight, 1.0)
+			tg.stalenessWeight.InfluenceWeight*influenceWeight+
+			tg.stalenessWeight.ImportanceWeight*impactWeight, 1.0)
 }

 func (tg *temporalGraphImpl) clearCacheForAddress(address ucxl.Address) {

@@ -210,13 +210,13 @@ func (ia *influenceAnalyzerImpl) FindInfluentialDecisions(ctx context.Context, l
 		impact := ia.analyzeDecisionImpactInternal(node)

 		decision := &InfluentialDecision{
-			Address: node.UCXLAddress,
-			DecisionHop: node.Version,
-			InfluenceScore: nodeScore.score,
-			AffectedContexts: node.Influences,
-			DecisionMetadata: ia.graph.decisions[node.DecisionID],
-			ImpactAnalysis: impact,
-			InfluenceReasons: ia.getInfluenceReasons(node, nodeScore.score),
+			Address:          node.UCXLAddress,
+			DecisionHop:      node.Version,
+			InfluenceScore:   nodeScore.score,
+			AffectedContexts: node.Influences,
+			DecisionMetadata: ia.graph.decisions[node.DecisionID],
+			ImpactAnalysis:   impact,
+			InfluenceReasons: ia.getInfluenceReasons(node, nodeScore.score),
 		}

 		influential = append(influential, decision)
@@ -899,7 +899,6 @@ func (ia *influenceAnalyzerImpl) findShortestPathLength(fromID, toID string) int

 func (ia *influenceAnalyzerImpl) getNodeCentrality(nodeID string) float64 {
 	// Simple centrality based on degree
 	influences := len(ia.graph.influences[nodeID])
 	influencedBy := len(ia.graph.influencedBy[nodeID])
-	totalNodes := len(ia.graph.nodes)

@@ -27,22 +27,22 @@ type decisionNavigatorImpl struct {

 // NavigationSession represents a navigation session
 type NavigationSession struct {
-	ID string `json:"id"`
-	UserID string `json:"user_id"`
-	StartedAt time.Time `json:"started_at"`
-	LastActivity time.Time `json:"last_activity"`
-	CurrentPosition ucxl.Address `json:"current_position"`
-	History []*DecisionStep `json:"history"`
-	Bookmarks []string `json:"bookmarks"`
-	Preferences *NavPreferences `json:"preferences"`
+	ID              string          `json:"id"`
+	UserID          string          `json:"user_id"`
+	StartedAt       time.Time       `json:"started_at"`
+	LastActivity    time.Time       `json:"last_activity"`
+	CurrentPosition ucxl.Address    `json:"current_position"`
+	History         []*DecisionStep `json:"history"`
+	Bookmarks       []string        `json:"bookmarks"`
+	Preferences     *NavPreferences `json:"preferences"`
 }

 // NavPreferences represents navigation preferences
 type NavPreferences struct {
-	MaxHops int `json:"max_hops"`
+	MaxHops               int     `json:"max_hops"`
 	PreferRecentDecisions bool    `json:"prefer_recent_decisions"`
-	FilterByConfidence float64 `json:"filter_by_confidence"`
-	IncludeStaleContexts bool `json:"include_stale_contexts"`
+	FilterByConfidence    float64 `json:"filter_by_confidence"`
+	IncludeStaleContexts  bool    `json:"include_stale_contexts"`
 }

 // NewDecisionNavigator creates a new decision navigator
@@ -50,7 +50,7 @@ func NewDecisionNavigator(graph *temporalGraphImpl) DecisionNavigator {
 	return &decisionNavigatorImpl{
 		graph:              graph,
 		navigationSessions: make(map[string]*NavigationSession),
-		bookmarks: make(map[string]*DecisionBookmark),
+		bookmarks:          make(map[string]*DecisionBookmark),
 		maxNavigationHistory: 100,
 	}
 }
@@ -169,14 +169,14 @@ func (dn *decisionNavigatorImpl) FindStaleContexts(ctx context.Context, stalenes
 	for _, node := range dn.graph.nodes {
 		if node.Staleness >= stalenessThreshold {
 			staleness := &StaleContext{
-				UCXLAddress: node.UCXLAddress,
-				TemporalNode: node,
-				StalenessScore: node.Staleness,
-				LastUpdated: node.Timestamp,
-				Reasons: dn.getStalenessReasons(node),
+				UCXLAddress:      node.UCXLAddress,
+				TemporalNode:     node,
+				StalenessScore:   node.Staleness,
+				LastUpdated:      node.Timestamp,
+				Reasons:          dn.getStalenessReasons(node),
 				SuggestedActions: dn.getSuggestedActions(node),
-				RelatedChanges: dn.getRelatedChanges(node),
-				Priority: dn.calculateStalePriority(node),
+				RelatedChanges:   dn.getRelatedChanges(node),
+				Priority:         dn.calculateStalePriority(node),
 			}
 			staleContexts = append(staleContexts, staleness)
 		}
@@ -252,7 +252,7 @@ func (dn *decisionNavigatorImpl) ResetNavigation(ctx context.Context, address uc
 	defer dn.mu.Unlock()

 	// Clear any navigation sessions for this address
-	for sessionID, session := range dn.navigationSessions {
+	for _, session := range dn.navigationSessions {
 		if session.CurrentPosition.String() == address.String() {
 			// Reset to latest version
 			latestNode, err := dn.graph.getLatestNodeUnsafe(address)

@@ -7,8 +7,8 @@ import (
 	"sync"
 	"time"

-	"chorus/pkg/ucxl"
 	"chorus/pkg/slurp/storage"
+	"chorus/pkg/ucxl"
 )

 // persistenceManagerImpl handles persistence and synchronization of temporal graph data
@@ -35,65 +35,65 @@ type persistenceManagerImpl struct {
 	conflictResolver ConflictResolver

 	// Performance optimization
-	batchSize int
-	writeBuffer []*TemporalNode
-	bufferMutex sync.Mutex
-	flushInterval time.Duration
-	lastFlush time.Time
+	batchSize     int
+	writeBuffer   []*TemporalNode
+	bufferMutex   sync.Mutex
+	flushInterval time.Duration
+	lastFlush     time.Time
 }

 // PersistenceConfig represents configuration for temporal graph persistence
 type PersistenceConfig struct {
 	// Storage settings
-	EnableLocalStorage bool `json:"enable_local_storage"`
-	EnableDistributedStorage bool `json:"enable_distributed_storage"`
-	EnableEncryption bool `json:"enable_encryption"`
-	EncryptionRoles []string `json:"encryption_roles"`
+	EnableLocalStorage       bool     `json:"enable_local_storage"`
+	EnableDistributedStorage bool     `json:"enable_distributed_storage"`
+	EnableEncryption         bool     `json:"enable_encryption"`
+	EncryptionRoles          []string `json:"encryption_roles"`

 	// Synchronization settings
-	SyncInterval time.Duration `json:"sync_interval"`
-	ConflictResolutionStrategy string `json:"conflict_resolution_strategy"`
-	EnableAutoSync bool `json:"enable_auto_sync"`
-	MaxSyncRetries int `json:"max_sync_retries"`
+	SyncInterval               time.Duration `json:"sync_interval"`
+	ConflictResolutionStrategy string        `json:"conflict_resolution_strategy"`
+	EnableAutoSync             bool          `json:"enable_auto_sync"`
+	MaxSyncRetries             int           `json:"max_sync_retries"`

 	// Performance settings
-	BatchSize int `json:"batch_size"`
-	FlushInterval time.Duration `json:"flush_interval"`
-	EnableWriteBuffer bool `json:"enable_write_buffer"`
+	BatchSize         int           `json:"batch_size"`
+	FlushInterval     time.Duration `json:"flush_interval"`
+	EnableWriteBuffer bool          `json:"enable_write_buffer"`

 	// Backup settings
-	EnableAutoBackup bool `json:"enable_auto_backup"`
-	BackupInterval time.Duration `json:"backup_interval"`
-	RetainBackupCount int `json:"retain_backup_count"`
+	EnableAutoBackup  bool          `json:"enable_auto_backup"`
+	BackupInterval    time.Duration `json:"backup_interval"`
+	RetainBackupCount int           `json:"retain_backup_count"`

 	// Storage keys and patterns
-	KeyPrefix string `json:"key_prefix"`
-	NodeKeyPattern string `json:"node_key_pattern"`
-	GraphKeyPattern string `json:"graph_key_pattern"`
-	MetadataKeyPattern string `json:"metadata_key_pattern"`
+	KeyPrefix          string `json:"key_prefix"`
+	NodeKeyPattern     string `json:"node_key_pattern"`
+	GraphKeyPattern    string `json:"graph_key_pattern"`
+	MetadataKeyPattern string `json:"metadata_key_pattern"`
 }

 // PendingChange represents a change waiting to be synchronized
 type PendingChange struct {
-	ID string `json:"id"`
-	Type ChangeType `json:"type"`
-	NodeID string `json:"node_id"`
-	Data interface{} `json:"data"`
-	Timestamp time.Time `json:"timestamp"`
-	Retries int `json:"retries"`
-	LastError string `json:"last_error"`
-	Metadata map[string]interface{} `json:"metadata"`
+	ID        string                 `json:"id"`
+	Type      ChangeType             `json:"type"`
+	NodeID    string                 `json:"node_id"`
+	Data      interface{}            `json:"data"`
+	Timestamp time.Time              `json:"timestamp"`
+	Retries   int                    `json:"retries"`
+	LastError string                 `json:"last_error"`
+	Metadata  map[string]interface{} `json:"metadata"`
 }

 // ChangeType represents the type of change to be synchronized
 type ChangeType string

 const (
-	ChangeTypeNodeCreated ChangeType = "node_created"
-	ChangeTypeNodeUpdated ChangeType = "node_updated"
-	ChangeTypeNodeDeleted ChangeType = "node_deleted"
-	ChangeTypeGraphUpdated ChangeType = "graph_updated"
-	ChangeTypeInfluenceAdded ChangeType = "influence_added"
+	ChangeTypeNodeCreated      ChangeType = "node_created"
+	ChangeTypeNodeUpdated      ChangeType = "node_updated"
+	ChangeTypeNodeDeleted      ChangeType = "node_deleted"
+	ChangeTypeGraphUpdated     ChangeType = "graph_updated"
+	ChangeTypeInfluenceAdded   ChangeType = "influence_added"
 	ChangeTypeInfluenceRemoved ChangeType = "influence_removed"
 )

@@ -105,39 +105,39 @@ type ConflictResolver interface {

 // GraphSnapshot represents a snapshot of the temporal graph for synchronization
 type GraphSnapshot struct {
-	Timestamp time.Time `json:"timestamp"`
-	Nodes map[string]*TemporalNode `json:"nodes"`
-	Influences map[string][]string `json:"influences"`
-	InfluencedBy map[string][]string `json:"influenced_by"`
-	Decisions map[string]*DecisionMetadata `json:"decisions"`
-	Metadata *GraphMetadata `json:"metadata"`
-	Checksum string `json:"checksum"`
+	Timestamp    time.Time                    `json:"timestamp"`
+	Nodes        map[string]*TemporalNode     `json:"nodes"`
+	Influences   map[string][]string          `json:"influences"`
+	InfluencedBy map[string][]string          `json:"influenced_by"`
+	Decisions    map[string]*DecisionMetadata `json:"decisions"`
+	Metadata     *GraphMetadata               `json:"metadata"`
+	Checksum     string                       `json:"checksum"`
 }

 // GraphMetadata represents metadata about the temporal graph
 type GraphMetadata struct {
-	Version int `json:"version"`
-	LastModified time.Time `json:"last_modified"`
-	NodeCount int `json:"node_count"`
-	EdgeCount int `json:"edge_count"`
-	DecisionCount int `json:"decision_count"`
-	CreatedBy string `json:"created_by"`
-	CreatedAt time.Time `json:"created_at"`
+	Version       int       `json:"version"`
+	LastModified  time.Time `json:"last_modified"`
+	NodeCount     int       `json:"node_count"`
+	EdgeCount     int       `json:"edge_count"`
+	DecisionCount int       `json:"decision_count"`
+	CreatedBy     string    `json:"created_by"`
+	CreatedAt     time.Time `json:"created_at"`
 }

 // SyncResult represents the result of a synchronization operation
 type SyncResult struct {
-	StartTime time.Time `json:"start_time"`
-	EndTime time.Time `json:"end_time"`
-	Duration time.Duration `json:"duration"`
-	NodesProcessed int `json:"nodes_processed"`
-	NodesCreated int `json:"nodes_created"`
-	NodesUpdated int `json:"nodes_updated"`
-	NodesDeleted int `json:"nodes_deleted"`
-	ConflictsFound int `json:"conflicts_found"`
-	ConflictsResolved int `json:"conflicts_resolved"`
-	Errors []string `json:"errors"`
-	Success bool `json:"success"`
+	StartTime         time.Time     `json:"start_time"`
+	EndTime           time.Time     `json:"end_time"`
+	Duration          time.Duration `json:"duration"`
+	NodesProcessed    int           `json:"nodes_processed"`
+	NodesCreated      int           `json:"nodes_created"`
+	NodesUpdated      int           `json:"nodes_updated"`
+	NodesDeleted      int           `json:"nodes_deleted"`
+	ConflictsFound    int           `json:"conflicts_found"`
+	ConflictsResolved int           `json:"conflicts_resolved"`
+	Errors            []string      `json:"errors"`
+	Success           bool          `json:"success"`
 }

 // NewPersistenceManager creates a new persistence manager
@@ -289,17 +289,9 @@ func (pm *persistenceManagerImpl) BackupGraph(ctx context.Context) error {
 		return fmt.Errorf("failed to create snapshot: %w", err)
 	}

-	// Serialize snapshot
-	data, err := json.Marshal(snapshot)
-	if err != nil {
-		return fmt.Errorf("failed to serialize snapshot: %w", err)
-	}
-
 	// Create backup configuration
 	backupConfig := &storage.BackupConfig{
-		Type: "temporal_graph",
-		Description: "Temporal graph backup",
-		Tags: []string{"temporal", "graph", "decision"},
+		Name: "temporal_graph",
 		Metadata: map[string]interface{}{
 			"node_count": snapshot.Metadata.NodeCount,
 			"edge_count": snapshot.Metadata.EdgeCount,
@@ -356,17 +348,15 @@ func (pm *persistenceManagerImpl) flushWriteBuffer() error {

 	// Create batch store request
 	batch := &storage.BatchStoreRequest{
-		Operations: make([]*storage.BatchStoreOperation, len(pm.writeBuffer)),
+		Contexts:    make([]*storage.ContextStoreItem, len(pm.writeBuffer)),
 		Roles:       pm.config.EncryptionRoles,
 		FailOnError: true,
 	}

 	for i, node := range pm.writeBuffer {
 		key := pm.generateNodeKey(node)

-		batch.Operations[i] = &storage.BatchStoreOperation{
-			Type: "store",
-			Key: key,
-			Data: node,
-			Roles: pm.config.EncryptionRoles,
+		batch.Contexts[i] = &storage.ContextStoreItem{
+			Context: node,
+			Roles:   pm.config.EncryptionRoles,
 		}
 	}

@@ -734,10 +724,10 @@ func (pm *persistenceManagerImpl) resolveConflict(ctx context.Context, conflict
 	}

 	return &ConflictResolution{
-		ConflictID: conflict.NodeID,
-		Resolution: "merged",
-		ResolvedData: resolvedNode,
-		ResolvedAt: time.Now(),
+		ConflictID:   conflict.NodeID,
+		Resolution:   "merged",
+		ResolvedData: resolvedNode,
+		ResolvedAt:   time.Now(),
 	}, nil
 }

@@ -834,28 +824,14 @@ func (pm *persistenceManagerImpl) syncRemoteToLocal(ctx context.Context, remote,
 // Supporting types for conflict resolution

 type SyncConflict struct {
-	Type ConflictType `json:"type"`
-	NodeID string `json:"node_id"`
-	LocalData interface{} `json:"local_data"`
-	RemoteData interface{} `json:"remote_data"`
-	Severity string `json:"severity"`
+	Type       ConflictType `json:"type"`
+	NodeID     string       `json:"node_id"`
+	LocalData  interface{}  `json:"local_data"`
+	RemoteData interface{}  `json:"remote_data"`
+	Severity   string       `json:"severity"`
 }

 type ConflictType string

 const (
 	ConflictTypeNodeMismatch      ConflictType = "node_mismatch"
 	ConflictTypeInfluenceMismatch ConflictType = "influence_mismatch"
 	ConflictTypeMetadataMismatch  ConflictType = "metadata_mismatch"
 )

 type ConflictResolution struct {
 	ConflictID   string      `json:"conflict_id"`
 	Resolution   string      `json:"resolution"`
 	ResolvedData interface{} `json:"resolved_data"`
 	ResolvedAt   time.Time   `json:"resolved_at"`
 	ResolvedBy   string      `json:"resolved_by"`
 }
-// Default conflict resolver implementation

 // Default conflict resolver implementation

@@ -17,45 +17,46 @@ import (
// cascading context resolution with bounded depth traversal.
type ContextNode struct {
	// Identity and addressing
	ID          string `json:"id"`           // Unique identifier
	UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
	Path        string `json:"path"`         // Filesystem path

	// Core context information
	Summary      string   `json:"summary"`      // Brief description
	Purpose      string   `json:"purpose"`      // What this component does
	Technologies []string `json:"technologies"` // Technologies used
	Tags         []string `json:"tags"`         // Categorization tags
	Insights     []string `json:"insights"`     // Analytical insights

	// Hierarchy relationships
	Parent      *string  `json:"parent,omitempty"` // Parent context ID
	Children    []string `json:"children"`         // Child context IDs
	Specificity int      `json:"specificity"`      // Specificity level (higher = more specific)

	// File metadata
	FileType     string     `json:"file_type"`               // File extension or type
	Language     *string    `json:"language,omitempty"`      // Programming language
	Size         *int64     `json:"size,omitempty"`          // File size in bytes
	LastModified *time.Time `json:"last_modified,omitempty"` // Last modification time
	ContentHash  *string    `json:"content_hash,omitempty"`  // Content hash for change detection

	// Resolution metadata
	CreatedBy  string    `json:"created_by"` // Who/what created this context
	CreatedAt  time.Time `json:"created_at"` // When created
	UpdatedAt  time.Time `json:"updated_at"` // When last updated
	UpdatedBy  string    `json:"updated_by"` // Who performed the last update
	Confidence float64   `json:"confidence"` // Confidence in accuracy (0-1)

	// Cascading behavior rules
	AppliesTo ContextScope `json:"applies_to"` // Scope of application
	Overrides bool         `json:"overrides"`  // Whether this overrides parent context

	// Security and access control
	EncryptedFor []string           `json:"encrypted_for"` // Roles that can access
	AccessLevel  crypto.AccessLevel `json:"access_level"`  // Access level required

	// Custom metadata
	Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}
// ResolvedContext represents the final resolved context for a UCXL address.
@@ -64,27 +65,27 @@ type ContextNode struct {
// information from multiple hierarchy levels and applying global contexts.
type ResolvedContext struct {
	// Resolved context data
	UCXLAddress  string   `json:"ucxl_address"` // Original UCXL address
	Summary      string   `json:"summary"`      // Resolved summary
	Purpose      string   `json:"purpose"`      // Resolved purpose
	Technologies []string `json:"technologies"` // Merged technologies
	Tags         []string `json:"tags"`         // Merged tags
	Insights     []string `json:"insights"`     // Merged insights

	// File information
	FileType     string     `json:"file_type"`               // File type
	Language     *string    `json:"language,omitempty"`      // Programming language
	Size         *int64     `json:"size,omitempty"`          // File size
	LastModified *time.Time `json:"last_modified,omitempty"` // Last modification
	ContentHash  *string    `json:"content_hash,omitempty"`  // Content hash

	// Resolution metadata
	SourcePath       string    `json:"source_path"`       // Primary source context path
	InheritanceChain []string  `json:"inheritance_chain"` // Context inheritance chain
	Confidence       float64   `json:"confidence"`        // Overall confidence (0-1)
	BoundedDepth     int       `json:"bounded_depth"`     // Actual traversal depth used
	GlobalApplied    bool      `json:"global_applied"`    // Whether global contexts were applied
	ResolvedAt       time.Time `json:"resolved_at"`       // When resolution occurred

	// Temporal information
	Version int `json:"version"` // Current version number
@@ -92,13 +93,13 @@ type ResolvedContext struct {
	EvolutionHistory []string `json:"evolution_history"` // Brief evolution history

	// Access control
	AccessibleBy   []string `json:"accessible_by"`   // Roles that can access this
	EncryptionKeys []string `json:"encryption_keys"` // Keys used for encryption

	// Performance metadata
	ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
	CacheHit       bool          `json:"cache_hit"`       // Whether result was cached
	NodesTraversed int           `json:"nodes_traversed"` // Number of hierarchy nodes traversed
}
// ContextScope defines the scope of a context node's application
@@ -117,23 +118,23 @@ const (
// simple chronological progression.
type TemporalNode struct {
	// Node identity
	ID          string `json:"id"`           // Unique temporal node ID
	UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
	Version     int    `json:"version"`      // Version number (monotonic)

	// Context snapshot
	Context ContextNode `json:"context"` // Context data at this point

	// Temporal metadata
	Timestamp    time.Time    `json:"timestamp"`             // When this version was created
	DecisionID   string       `json:"decision_id"`           // Associated decision identifier
	ChangeReason ChangeReason `json:"change_reason"`         // Why context changed
	ParentNode   *string      `json:"parent_node,omitempty"` // Previous version ID

	// Evolution tracking
	ContextHash string  `json:"context_hash"` // Hash of context content
	Confidence  float64 `json:"confidence"`   // Confidence in this version (0-1)
	Staleness   float64 `json:"staleness"`    // Staleness indicator (0-1)

	// Decision graph relationships
	Influences []string `json:"influences"` // UCXL addresses this influences
@@ -144,11 +145,11 @@ type TemporalNode struct {
	LastValidated time.Time `json:"last_validated"` // When last validated

	// Change impact analysis
	ImpactScope  ImpactScope `json:"impact_scope"`  // Scope of change impact
	PropagatedTo []string    `json:"propagated_to"` // Addresses that received impact

	// Custom temporal metadata
	Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}
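Because each `TemporalNode` carries an optional `ParentNode` pointer to the previous version, a version history can be reconstructed by walking that chain. A minimal sketch with a trimmed copy of the type (the in-memory `nodes` map and the `history` helper are assumptions for illustration; the real package presumably loads nodes from storage):

```go
package main

import "fmt"

// Trimmed copy of TemporalNode, for illustration only.
type TemporalNode struct {
	ID         string
	Version    int
	ParentNode *string // previous version ID, nil for the first version
}

// history walks ParentNode links from the given node back to the
// initial version, returning node IDs newest-first.
func history(nodes map[string]*TemporalNode, id string) []string {
	var out []string
	for cur := nodes[id]; cur != nil; {
		out = append(out, cur.ID)
		if cur.ParentNode == nil {
			break
		}
		cur = nodes[*cur.ParentNode]
	}
	return out
}

func main() {
	p := "n1"
	nodes := map[string]*TemporalNode{
		"n1": {ID: "n1", Version: 1},
		"n2": {ID: "n2", Version: 2, ParentNode: &p},
	}
	fmt.Println(history(nodes, "n2")) // [n2 n1]
}
```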
// DecisionMetadata represents metadata about a decision that changed context.
@@ -157,56 +158,56 @@ type TemporalNode struct {
// representing why and how context evolved rather than just when.
type DecisionMetadata struct {
	// Decision identity
	ID        string `json:"id"`        // Unique decision identifier
	Maker     string `json:"maker"`     // Who/what made the decision
	Rationale string `json:"rationale"` // Why the decision was made

	// Impact and scope
	Scope           ImpactScope `json:"scope"`            // Scope of impact
	ConfidenceLevel float64     `json:"confidence_level"` // Confidence in decision (0-1)

	// External references
	ExternalRefs      []string `json:"external_refs"`          // External references (URLs, docs)
	GitCommit         *string  `json:"git_commit,omitempty"`   // Associated git commit
	IssueNumber       *int     `json:"issue_number,omitempty"` // Associated issue number
	PullRequestNumber *int     `json:"pull_request,omitempty"` // Associated PR number

	// Timing information
	CreatedAt   time.Time  `json:"created_at"`             // When decision was made
	EffectiveAt *time.Time `json:"effective_at,omitempty"` // When decision takes effect
	ExpiresAt   *time.Time `json:"expires_at,omitempty"`   // When decision expires

	// Decision quality
	ReviewedBy []string `json:"reviewed_by,omitempty"` // Who reviewed this decision
	ApprovedBy []string `json:"approved_by,omitempty"` // Who approved this decision

	// Implementation tracking
	ImplementationStatus string `json:"implementation_status"` // Status: planned, active, complete, cancelled
	ImplementationNotes  string `json:"implementation_notes"`  // Implementation details

	// Custom metadata
	Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
}

// ChangeReason represents why context changed
type ChangeReason string

const (
	ReasonInitialCreation     ChangeReason = "initial_creation"     // First time context creation
	ReasonCodeChange          ChangeReason = "code_change"          // Code modification
	ReasonDesignDecision      ChangeReason = "design_decision"      // Design/architecture decision
	ReasonRefactoring         ChangeReason = "refactoring"          // Code refactoring
	ReasonArchitectureChange  ChangeReason = "architecture_change"  // Major architecture change
	ReasonRequirementsChange  ChangeReason = "requirements_change"  // Requirements modification
	ReasonLearningEvolution   ChangeReason = "learning_evolution"   // Improved understanding
	ReasonRAGEnhancement      ChangeReason = "rag_enhancement"      // RAG system enhancement
	ReasonTeamInput           ChangeReason = "team_input"           // Team member input
	ReasonBugDiscovery        ChangeReason = "bug_discovery"        // Bug found that changes understanding
	ReasonPerformanceInsight  ChangeReason = "performance_insight"  // Performance analysis insight
	ReasonSecurityReview      ChangeReason = "security_review"      // Security analysis
	ReasonDependencyChange    ChangeReason = "dependency_change"    // Dependency update
	ReasonEnvironmentChange   ChangeReason = "environment_change"   // Environment configuration change
	ReasonToolingUpdate       ChangeReason = "tooling_update"       // Development tooling update
	ReasonDocumentationUpdate ChangeReason = "documentation_update" // Documentation improvement
)
@@ -222,11 +223,11 @@ const (

// DecisionPath represents a path between two decision points in the temporal graph
type DecisionPath struct {
	From      string          `json:"from"`       // Starting UCXL address
	To        string          `json:"to"`         // Ending UCXL address
	Steps     []*DecisionStep `json:"steps"`      // Path steps
	TotalHops int             `json:"total_hops"` // Total decision hops
	PathType  string          `json:"path_type"`  // Type of path (direct, influence, etc.)
}
// DecisionStep represents a single step in a decision path
@@ -239,7 +240,7 @@ type DecisionStep struct {

// DecisionTimeline represents the decision evolution timeline for a context
type DecisionTimeline struct {
	PrimaryAddress   string                   `json:"primary_address"`   // Main UCXL address
	DecisionSequence []*DecisionTimelineEntry `json:"decision_sequence"` // Ordered by decision hops
	RelatedDecisions []*RelatedDecision       `json:"related_decisions"` // Related decisions within hop limit
	TotalDecisions   int                      `json:"total_decisions"`   // Total decisions in timeline
@@ -249,40 +250,40 @@ type DecisionTimeline struct {

// DecisionTimelineEntry represents an entry in the decision timeline
type DecisionTimelineEntry struct {
	Version             int                    `json:"version"`              // Version number
	DecisionHop         int                    `json:"decision_hop"`         // Decision distance from initial
	ChangeReason        ChangeReason           `json:"change_reason"`        // Why it changed
	DecisionMaker       string                 `json:"decision_maker"`       // Who made the decision
	DecisionRationale   string                 `json:"decision_rationale"`   // Rationale for decision
	ConfidenceEvolution float64                `json:"confidence_evolution"` // Confidence at this point
	Timestamp           time.Time              `json:"timestamp"`            // When decision occurred
	InfluencesCount     int                    `json:"influences_count"`     // Number of influenced addresses
	InfluencedByCount   int                    `json:"influenced_by_count"`  // Number of influencing addresses
	ImpactScope         ImpactScope            `json:"impact_scope"`         // Scope of this decision
	Metadata            map[string]interface{} `json:"metadata,omitempty"`   // Additional metadata
}
// RelatedDecision represents a decision related through the influence graph
type RelatedDecision struct {
	Address               string       `json:"address"`                 // UCXL address
	DecisionHops          int          `json:"decision_hops"`           // Hops from primary address
	LatestVersion         int          `json:"latest_version"`          // Latest version number
	ChangeReason          ChangeReason `json:"change_reason"`           // Latest change reason
	DecisionMaker         string       `json:"decision_maker"`          // Latest decision maker
	Confidence            float64      `json:"confidence"`              // Current confidence
	LastDecisionTimestamp time.Time    `json:"last_decision_timestamp"` // When last decision occurred
	RelationshipType      string       `json:"relationship_type"`       // Type of relationship (influences, influenced_by)
}

// TimelineAnalysis contains analysis metadata for decision timelines
type TimelineAnalysis struct {
	ChangeVelocity          float64             `json:"change_velocity"`           // Changes per unit time
	ConfidenceTrend         string              `json:"confidence_trend"`          // increasing, decreasing, stable
	DominantChangeReasons   []ChangeReason      `json:"dominant_change_reasons"`   // Most common reasons
	DecisionMakers          map[string]int      `json:"decision_makers"`           // Decision maker frequency
	ImpactScopeDistribution map[ImpactScope]int `json:"impact_scope_distribution"` // Distribution of impact scopes
	InfluenceNetworkSize    int                 `json:"influence_network_size"`    // Size of influence network
	AnalyzedAt              time.Time           `json:"analyzed_at"`               // When analysis was performed
}
// NavigationDirection represents direction for temporal navigation
@@ -295,76 +296,76 @@ const (

// StaleContext represents a potentially outdated context
type StaleContext struct {
	UCXLAddress      string        `json:"ucxl_address"`      // Address of stale context
	TemporalNode     *TemporalNode `json:"temporal_node"`     // Latest temporal node
	StalenessScore   float64       `json:"staleness_score"`   // Staleness score (0-1)
	LastUpdated      time.Time     `json:"last_updated"`      // When last updated
	Reasons          []string      `json:"reasons"`           // Reasons why considered stale
	SuggestedActions []string      `json:"suggested_actions"` // Suggested remediation actions
}
// GenerationOptions configures context generation behavior
type GenerationOptions struct {
	// Analysis options
	AnalyzeContent      bool `json:"analyze_content"`      // Analyze file content
	AnalyzeStructure    bool `json:"analyze_structure"`    // Analyze directory structure
	AnalyzeHistory      bool `json:"analyze_history"`      // Analyze git history
	AnalyzeDependencies bool `json:"analyze_dependencies"` // Analyze dependencies

	// Generation scope
	MaxDepth        int      `json:"max_depth"`        // Maximum directory depth
	IncludePatterns []string `json:"include_patterns"` // File patterns to include
	ExcludePatterns []string `json:"exclude_patterns"` // File patterns to exclude

	// Quality settings
	MinConfidence     float64 `json:"min_confidence"`     // Minimum confidence threshold
	RequireValidation bool    `json:"require_validation"` // Require human validation

	// External integration
	UseRAG      bool   `json:"use_rag"`      // Use RAG for enhancement
	RAGEndpoint string `json:"rag_endpoint"` // RAG service endpoint

	// Output options
	EncryptForRoles []string `json:"encrypt_for_roles"` // Roles to encrypt for

	// Performance limits
	Timeout     time.Duration `json:"timeout"`       // Generation timeout
	MaxFileSize int64         `json:"max_file_size"` // Maximum file size to analyze

	// Custom options
	CustomOptions map[string]interface{} `json:"custom_options,omitempty"` // Additional options
}
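The `IncludePatterns`/`ExcludePatterns` fields suggest glob-style filtering during hierarchy generation. A hedged sketch of how such options might be applied (the `shouldAnalyze` helper and the use of `filepath.Match` semantics are assumptions for illustration, not the package's actual logic; the struct is a trimmed copy):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Trimmed copy of the relevant GenerationOptions fields.
type GenerationOptions struct {
	IncludePatterns []string
	ExcludePatterns []string
	MaxFileSize     int64
}

// shouldAnalyze applies the size limit and exclude patterns first, then
// requires at least one include match (or no include patterns at all).
func shouldAnalyze(opts GenerationOptions, name string, size int64) bool {
	if opts.MaxFileSize > 0 && size > opts.MaxFileSize {
		return false
	}
	for _, p := range opts.ExcludePatterns {
		if ok, _ := filepath.Match(p, name); ok {
			return false
		}
	}
	if len(opts.IncludePatterns) == 0 {
		return true
	}
	for _, p := range opts.IncludePatterns {
		if ok, _ := filepath.Match(p, name); ok {
			return true
		}
	}
	return false
}

func main() {
	opts := GenerationOptions{
		IncludePatterns: []string{"*.go"},
		ExcludePatterns: []string{"*_test.go"},
		MaxFileSize:     1 << 20,
	}
	fmt.Println(shouldAnalyze(opts, "slurp.go", 1024))      // true
	fmt.Println(shouldAnalyze(opts, "slurp_test.go", 1024)) // false
}
```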
// HierarchyStats represents statistics about hierarchy generation
type HierarchyStats struct {
	NodesCreated       int           `json:"nodes_created"`       // Number of nodes created
	NodesUpdated       int           `json:"nodes_updated"`       // Number of nodes updated
	FilesAnalyzed      int           `json:"files_analyzed"`      // Number of files analyzed
	DirectoriesScanned int           `json:"directories_scanned"` // Number of directories scanned
	GenerationTime     time.Duration `json:"generation_time"`     // Time taken for generation
	AverageConfidence  float64       `json:"average_confidence"`  // Average confidence score
	TotalSize          int64         `json:"total_size"`          // Total size of analyzed content
	SkippedFiles       int           `json:"skipped_files"`       // Number of files skipped
	Errors             []string      `json:"errors"`              // Generation errors
}
// ValidationResult represents the result of context validation
type ValidationResult struct {
	Valid           bool                    `json:"valid"`            // Whether context is valid
	ConfidenceScore float64                 `json:"confidence_score"` // Overall confidence (0-1)
	QualityScore    float64                 `json:"quality_score"`    // Quality assessment (0-1)
	Issues          []*ValidationIssue      `json:"issues"`           // Validation issues found
	Suggestions     []*ValidationSuggestion `json:"suggestions"`      // Improvement suggestions
	ValidatedAt     time.Time               `json:"validated_at"`     // When validation occurred
	ValidatedBy     string                  `json:"validated_by"`     // Who/what performed validation
}

// ValidationIssue represents an issue found during validation
type ValidationIssue struct {
	Severity   string `json:"severity"`   // error, warning, info
	Message    string `json:"message"`    // Issue description
	Field      string `json:"field"`      // Affected field
	Suggestion string `json:"suggestion"` // How to fix
}
@@ -378,24 +379,24 @@ type ValidationSuggestion struct {

// CostEstimate represents estimated resource cost for operations
type CostEstimate struct {
	CPUCost       float64            `json:"cpu_cost"`       // Estimated CPU cost
	MemoryCost    float64            `json:"memory_cost"`    // Estimated memory cost
	StorageCost   float64            `json:"storage_cost"`   // Estimated storage cost
	TimeCost      time.Duration      `json:"time_cost"`      // Estimated time cost
	TotalCost     float64            `json:"total_cost"`     // Total normalized cost
	CostBreakdown map[string]float64 `json:"cost_breakdown"` // Detailed cost breakdown
}

// AnalysisResult represents the result of context analysis
type AnalysisResult struct {
	QualityScore      float64          `json:"quality_score"`      // Overall quality (0-1)
	ConsistencyScore  float64          `json:"consistency_score"`  // Consistency with hierarchy
	CompletenessScore float64          `json:"completeness_score"` // Completeness assessment
	AccuracyScore     float64          `json:"accuracy_score"`     // Accuracy assessment
	Issues            []*AnalysisIssue `json:"issues"`             // Issues found
	Strengths         []string         `json:"strengths"`          // Context strengths
	Improvements      []*Suggestion    `json:"improvements"`       // Improvement suggestions
	AnalyzedAt        time.Time        `json:"analyzed_at"`        // When analysis occurred
}
// AnalysisIssue represents an issue found during analysis
|
||||
@@ -418,86 +419,86 @@ type Suggestion struct {
|
||||
|
||||
// Pattern represents a detected context pattern
type Pattern struct {
	ID            string                 `json:"id"`             // Pattern identifier
	Name          string                 `json:"name"`           // Pattern name
	Description   string                 `json:"description"`    // Pattern description
	MatchCriteria map[string]interface{} `json:"match_criteria"` // Criteria for matching
	Confidence    float64                `json:"confidence"`     // Pattern confidence (0-1)
	Frequency     int                    `json:"frequency"`      // How often pattern appears
	Examples      []string               `json:"examples"`       // Example contexts that match
	CreatedAt     time.Time              `json:"created_at"`     // When pattern was detected
}

// PatternMatch represents a match between context and pattern
type PatternMatch struct {
	PatternID     string   `json:"pattern_id"`     // ID of matched pattern
	MatchScore    float64  `json:"match_score"`    // How well it matches (0-1)
	MatchedFields []string `json:"matched_fields"` // Which fields matched
	Confidence    float64  `json:"confidence"`     // Confidence in match
}

// ContextPattern represents a registered context pattern template
type ContextPattern struct {
	ID          string                 `json:"id"`          // Pattern identifier
	Name        string                 `json:"name"`        // Human-readable name
	Description string                 `json:"description"` // Pattern description
	Template    *ContextNode           `json:"template"`    // Template for matching
	Criteria    map[string]interface{} `json:"criteria"`    // Matching criteria
	Priority    int                    `json:"priority"`    // Pattern priority
	CreatedBy   string                 `json:"created_by"`  // Who created pattern
	CreatedAt   time.Time              `json:"created_at"`  // When created
	UpdatedAt   time.Time              `json:"updated_at"`  // When last updated
	UsageCount  int                    `json:"usage_count"` // How often used
}

// Inconsistency represents a detected inconsistency in the context hierarchy
type Inconsistency struct {
	Type          string    `json:"type"`           // Type of inconsistency
	Description   string    `json:"description"`    // Description of the issue
	AffectedNodes []string  `json:"affected_nodes"` // Nodes involved
	Severity      string    `json:"severity"`       // Severity level
	Suggestion    string    `json:"suggestion"`     // How to resolve
	DetectedAt    time.Time `json:"detected_at"`    // When detected
}

// SearchQuery represents a context search query
type SearchQuery struct {
	// Query terms
	Query        string   `json:"query"`        // Main search query
	Tags         []string `json:"tags"`         // Required tags
	Technologies []string `json:"technologies"` // Required technologies
	FileTypes    []string `json:"file_types"`   // File types to include

	// Filters
	MinConfidence float64        `json:"min_confidence"` // Minimum confidence
	MaxAge        *time.Duration `json:"max_age"`        // Maximum age
	Roles         []string       `json:"roles"`          // Required access roles

	// Scope
	Scope        []string `json:"scope"`         // Paths to search within
	ExcludeScope []string `json:"exclude_scope"` // Paths to exclude

	// Result options
	Limit     int    `json:"limit"`      // Maximum results
	Offset    int    `json:"offset"`     // Result offset
	SortBy    string `json:"sort_by"`    // Sort field
	SortOrder string `json:"sort_order"` // asc, desc

	// Advanced options
	FuzzyMatch     bool            `json:"fuzzy_match"`     // Enable fuzzy matching
	IncludeStale   bool            `json:"include_stale"`   // Include stale contexts
	TemporalFilter *TemporalFilter `json:"temporal_filter"` // Temporal filtering
}

// TemporalFilter represents temporal filtering options
type TemporalFilter struct {
	FromTime        *time.Time     `json:"from_time"`         // Start time
	ToTime          *time.Time     `json:"to_time"`           // End time
	VersionRange    *VersionRange  `json:"version_range"`     // Version range
	ChangeReasons   []ChangeReason `json:"change_reasons"`    // Specific change reasons
	DecisionMakers  []string       `json:"decision_makers"`   // Specific decision makers
	MinDecisionHops int            `json:"min_decision_hops"` // Minimum decision hops
	MaxDecisionHops int            `json:"max_decision_hops"` // Maximum decision hops
}

// VersionRange represents a range of versions
@@ -509,58 +510,58 @@ type VersionRange struct {

// SearchResult represents a single search result
type SearchResult struct {
	Context       *ResolvedContext `json:"context"`        // Resolved context
	TemporalNode  *TemporalNode    `json:"temporal_node"`  // Associated temporal node
	MatchScore    float64          `json:"match_score"`    // How well it matches query (0-1)
	MatchedFields []string         `json:"matched_fields"` // Which fields matched
	Snippet       string           `json:"snippet"`        // Text snippet showing match
	Rank          int              `json:"rank"`           // Result rank
}

// IndexMetadata represents metadata for context indexing
type IndexMetadata struct {
	IndexType     string                 `json:"index_type"`     // Type of index
	IndexedFields []string               `json:"indexed_fields"` // Fields that are indexed
	IndexedAt     time.Time              `json:"indexed_at"`     // When indexed
	IndexVersion  string                 `json:"index_version"`  // Index version
	Metadata      map[string]interface{} `json:"metadata"`       // Additional metadata
}

// DecisionAnalysis represents analysis of decision patterns
type DecisionAnalysis struct {
	TotalDecisions        int                    `json:"total_decisions"`         // Total decisions analyzed
	DecisionMakers        map[string]int         `json:"decision_makers"`         // Decision maker frequency
	ChangeReasons         map[ChangeReason]int   `json:"change_reasons"`          // Change reason frequency
	ImpactScopes          map[ImpactScope]int    `json:"impact_scopes"`           // Impact scope distribution
	ConfidenceTrends      map[string]float64     `json:"confidence_trends"`       // Confidence trends over time
	DecisionFrequency     map[string]int         `json:"decision_frequency"`      // Decisions per time period
	InfluenceNetworkStats *InfluenceNetworkStats `json:"influence_network_stats"` // Network statistics
	Patterns              []*DecisionPattern     `json:"patterns"`                // Detected decision patterns
	AnalyzedAt            time.Time              `json:"analyzed_at"`             // When analysis was performed
	AnalysisTimeSpan      time.Duration          `json:"analysis_time_span"`      // Time span analyzed
}

// InfluenceNetworkStats represents statistics about the influence network
type InfluenceNetworkStats struct {
	TotalNodes         int      `json:"total_nodes"`         // Total nodes in network
	TotalEdges         int      `json:"total_edges"`         // Total influence relationships
	AverageConnections float64  `json:"average_connections"` // Average connections per node
	MaxConnections     int      `json:"max_connections"`     // Maximum connections for any node
	NetworkDensity     float64  `json:"network_density"`     // Network density (0-1)
	ClusteringCoeff    float64  `json:"clustering_coeff"`    // Clustering coefficient
	MaxPathLength      int      `json:"max_path_length"`     // Maximum path length in network
	CentralNodes       []string `json:"central_nodes"`       // Most central nodes
}
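`AverageConnections` and `NetworkDensity` are derivable from the raw node and edge counts. A sketch of that derivation, assuming an undirected influence graph (the real implementation may treat influence edges as directed, which changes the density denominator); `networkStats` is a hypothetical helper:

```go
package main

import "fmt"

// networkStats derives the aggregate fields from raw counts for an
// undirected graph: average degree is 2E/N, and density is the fraction
// of the N(N-1)/2 possible edges that exist.
func networkStats(nodes, edges int) (avgConnections, density float64) {
	if nodes < 2 {
		return 0, 0 // degenerate network: no meaningful connectivity
	}
	avgConnections = 2 * float64(edges) / float64(nodes)
	density = 2 * float64(edges) / (float64(nodes) * float64(nodes-1))
	return avgConnections, density
}

func main() {
	avg, density := networkStats(10, 18)
	fmt.Printf("avg=%.1f density=%.1f\n", avg, density) // prints avg=3.6 density=0.4
}
```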

// DecisionPattern represents a detected pattern in decision-making
type DecisionPattern struct {
	ID               string                 `json:"id"`                // Pattern identifier
	Name             string                 `json:"name"`              // Pattern name
	Description      string                 `json:"description"`       // Pattern description
	Frequency        int                    `json:"frequency"`         // How often this pattern occurs
	Confidence       float64                `json:"confidence"`        // Confidence in pattern (0-1)
	ExampleDecisions []string               `json:"example_decisions"` // Example decisions that match
	Characteristics  map[string]interface{} `json:"characteristics"`   // Pattern characteristics
	DetectedAt       time.Time              `json:"detected_at"`       // When pattern was detected
}

// ResolverStatistics represents statistics about context resolution operations