chore: align slurp config and scaffolding
This commit is contained in:
94
docs/development/sec-slurp-ucxl-beacon-pin-steward.md
Normal file
94
docs/development/sec-slurp-ucxl-beacon-pin-steward.md
Normal file
@@ -0,0 +1,94 @@
|
|||||||
|
# SEC-SLURP UCXL Beacon & Pin Steward Design Notes
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
- Establish the authoritative UCXL context beacon that bridges SLURP persistence with WHOOSH/role-aware agents.
|
||||||
|
- Define the Pin Steward responsibilities so DHT replication, healing, and telemetry satisfy SEC-SLURP 1.1a acceptance criteria.
|
||||||
|
- Provide an incremental execution plan aligned with the Persistence Wiring Report and DHT Resilience Supplement.
|
||||||
|
|
||||||
|
## UCXL Beacon Data Model
|
||||||
|
- **manifest_id** (`string`): deterministic hash of `project:task:address:version`.
|
||||||
|
- **ucxl_address** (`ucxl.Address`): canonical address that produced the manifest.
|
||||||
|
- **context_version** (`int`): monotonic version from SLURP temporal graph.
|
||||||
|
- **source_hash** (`string`): content hash emitted by `persistContext` (LevelDB) for change detection.
|
||||||
|
- **generated_by** (`string`): CHORUS agent id / role bundle that wrote the context.
|
||||||
|
- **generated_at** (`time.Time`): timestamp from SLURP persistence event.
|
||||||
|
- **replica_targets** (`[]string`): desired replica node ids (Pin Steward enforces `replication_factor`).
|
||||||
|
- **replica_state** (`[]ReplicaInfo`): health snapshot (`node_id`, `provider_id`, `status`, `last_checked`, `latency_ms`).
|
||||||
|
- **encryption** (`EncryptionMetadata`):
|
||||||
|
- `dek_fingerprint` (`string`)
|
||||||
|
- `kek_policy` (`string`): BACKBEAT rotation policy identifier.
|
||||||
|
- `rotation_due` (`time.Time`)
|
||||||
|
- **compliance_tags** (`[]string`): SHHH/WHOOSH governance hooks (e.g. `sec-high`, `audit-required`).
|
||||||
|
- **beacon_metrics** (`BeaconMetrics`): summarized counters for cache hits, DHT retrieves, validation errors.
|
||||||
|
|
||||||
|
### Storage Strategy
|
||||||
|
- Primary persistence in LevelDB (`pkg/slurp/slurp.go`) using key prefix `beacon::<manifest_id>`.
|
||||||
|
- Secondary replication to DHT under `dht://beacon/<manifest_id>` enabling WHOOSH agents to read via Pin Steward API.
|
||||||
|
- Optional export to UCXL Decision Record envelope for historical traceability.
|
||||||
|
|
||||||
|
## Beacon APIs
|
||||||
|
| Endpoint | Purpose | Notes |
|
||||||
|
|----------|---------|-------|
|
||||||
|
| `Beacon.Upsert(manifest)` | Persist/update manifest | Called by SLURP after `persistContext` success. |
|
||||||
|
| `Beacon.Get(ucxlAddress)` | Resolve latest manifest | Used by WHOOSH/agents to locate canonical context. |
|
||||||
|
| `Beacon.List(filter)` | Query manifests by tags/roles/time | Backs dashboards and Pin Steward audits. |
|
||||||
|
| `Beacon.StreamChanges(since)` | Provide change feed for Pin Steward anti-entropy jobs | Implements backpressure and bookmark tokens. |
|
||||||
|
|
||||||
|
All APIs return envelope with UCXL citation + checksum to make SLURP⇄WHOOSH handoff auditable.
|
||||||
|
|
||||||
|
## Pin Steward Responsibilities
|
||||||
|
1. **Replication Planning**
|
||||||
|
- Read manifests via `Beacon.StreamChanges`.
|
||||||
|
- Evaluate current replica_state vs. `replication_factor` from configuration.
|
||||||
|
- Produce queue of DHT store/refresh tasks (`storeAsync`, `storeSync`, `storeQuorum`).
|
||||||
|
2. **Healing & Anti-Entropy**
|
||||||
|
- Schedule `heal_under_replicated` jobs every `anti_entropy_interval`.
|
||||||
|
- Re-announce providers on Pulse/Reverb when TTL < threshold.
|
||||||
|
- Record outcomes back into manifest (`replica_state`).
|
||||||
|
3. **Envelope Encryption Enforcement**
|
||||||
|
- Request KEK material from KACHING/SHHH as described in SEC-SLURP 1.1a.
|
||||||
|
- Ensure DEK fingerprints match `encryption` metadata; trigger rotation if stale.
|
||||||
|
4. **Telemetry Export**
|
||||||
|
- Emit Prometheus counters: `pin_steward_replica_heal_total`, `pin_steward_replica_unhealthy`, `pin_steward_encryption_rotations_total`.
|
||||||
|
- Surface aggregated health to WHOOSH dashboards for council visibility.
|
||||||
|
|
||||||
|
## Interaction Flow
|
||||||
|
1. **SLURP Persistence**
|
||||||
|
- `UpsertContext` → LevelDB write → manifests assembled (`persistContext`).
|
||||||
|
- Beacon `Upsert` called with manifest + context hash.
|
||||||
|
2. **Pin Steward Intake**
|
||||||
|
- `StreamChanges` yields manifest → steward verifies encryption metadata and schedules replication tasks.
|
||||||
|
3. **DHT Coordination**
|
||||||
|
- `ReplicationManager.EnsureReplication` invoked with target factor.
|
||||||
|
- `defaultVectorClockManager` (temporary) to be replaced with libp2p-aware implementation for provider TTL tracking.
|
||||||
|
4. **WHOOSH Consumption**
|
||||||
|
- WHOOSH SLURP proxy fetches manifest via `Beacon.Get`, caches in WHOOSH DB, attaches to deliverable artifacts.
|
||||||
|
- Council UI surfaces replication state + encryption posture for operator decisions.
|
||||||
|
|
||||||
|
## Incremental Delivery Plan
|
||||||
|
1. **Sprint A (Persistence parity)**
|
||||||
|
- Finalize LevelDB manifest schema + tests (extend `slurp_persistence_test.go`).
|
||||||
|
- Implement Beacon interfaces within SLURP service (in-memory + LevelDB).
|
||||||
|
- Add Prometheus metrics for persistence reads/misses.
|
||||||
|
2. **Sprint B (Pin Steward MVP)**
|
||||||
|
- Build steward worker with configurable reconciliation loop.
|
||||||
|
- Wire to existing `DistributedStorage` stubs (`StoreAsync/Sync/Quorum`).
|
||||||
|
- Emit health logs; integrate with CLI diagnostics.
|
||||||
|
3. **Sprint C (DHT Resilience)**
|
||||||
|
- Swap `defaultVectorClockManager` with libp2p implementation; add provider TTL probes.
|
||||||
|
- Implement envelope encryption path leveraging KACHING/SHHH interfaces (replace stubs in `pkg/crypto`).
|
||||||
|
- Add CI checks: replica factor assertions, provider refresh tests, beacon schema validation.
|
||||||
|
4. **Sprint D (WHOOSH Integration)**
|
||||||
|
- Expose REST/gRPC endpoint for WHOOSH to query manifests.
|
||||||
|
- Update WHOOSH SLURPArtifactManager to require beacon confirmation before submission.
|
||||||
|
- Surface Pin Steward alerts in WHOOSH admin UI.
|
||||||
|
|
||||||
|
## Open Questions
|
||||||
|
- Confirm whether Beacon manifests should include DER signatures or rely on UCXL envelope hash.
|
||||||
|
- Determine storage for historical manifests (append-only log vs. latest-only) to support temporal rewind.
|
||||||
|
- Align Pin Steward job scheduling with existing BACKBEAT cadence to avoid conflicting rotations.
|
||||||
|
|
||||||
|
## Next Actions
|
||||||
|
- Prototype `BeaconStore` interface + LevelDB implementation in SLURP package.
|
||||||
|
- Document Pin Steward anti-entropy algorithm with pseudocode and integrate into SEC-SLURP test plan.
|
||||||
|
- Sync with WHOOSH team on manifest query contract (REST vs. gRPC; pagination semantics).
|
||||||
52
docs/development/sec-slurp-whoosh-integration-demo.md
Normal file
52
docs/development/sec-slurp-whoosh-integration-demo.md
Normal file
@@ -0,0 +1,52 @@
|
|||||||
|
# WHOOSH ↔ CHORUS Integration Demo Plan (SEC-SLURP Track)
|
||||||
|
|
||||||
|
## Demo Objectives
|
||||||
|
- Showcase end-to-end persistence → UCXL beacon → Pin Steward → WHOOSH artifact submission flow.
|
||||||
|
- Validate role-based agent interactions with SLURP contexts (resolver + temporal graph) prior to DHT hardening.
|
||||||
|
- Capture metrics/telemetry needed for SEC-SLURP exit criteria and WHOOSH Phase 1 sign-off.
|
||||||
|
|
||||||
|
## Sequenced Milestones
|
||||||
|
1. **Persistence Validation Session**
|
||||||
|
- Run `GOWORK=off go test ./pkg/slurp/...` with stubs patched; demo LevelDB warm/load using `slurp_persistence_test.go`.
|
||||||
|
- Inspect beacon manifests via CLI (`slurpctl beacon list`).
|
||||||
|
- Deliverable: test log + manifest sample archived in UCXL.
|
||||||
|
|
||||||
|
2. **Beacon → Pin Steward Dry Run**
|
||||||
|
- Replay stored manifests through Pin Steward worker with mock DHT backend.
|
||||||
|
- Show replication planner queue + telemetry counters (`pin_steward_replica_heal_total`).
|
||||||
|
- Deliverable: decision record linking manifest to replication outcome.
|
||||||
|
|
||||||
|
3. **WHOOSH SLURP Proxy Alignment**
|
||||||
|
- Point WHOOSH dev stack (`npm run dev`) at local SLURP with beacon API enabled.
|
||||||
|
- Walk through council formation, capture SLURP artifact submission with beacon confirmation modal.
|
||||||
|
- Deliverable: screen recording + WHOOSH DB entry referencing beacon manifest id.
|
||||||
|
|
||||||
|
4. **DHT Resilience Checkpoint**
|
||||||
|
- Switch Pin Steward to libp2p DHT (once wired) and run replication + provider TTL check.
|
||||||
|
- Fail one node intentionally, demonstrate heal path + alert surfaced in WHOOSH UI.
|
||||||
|
- Deliverable: telemetry dump + alert screenshot.
|
||||||
|
|
||||||
|
5. **Governance & Telemetry Wrap-Up**
|
||||||
|
- Export Prometheus metrics (cache hit/miss, beacon writes, replication heals) into KACHING dashboard.
|
||||||
|
- Publish Decision Record documenting UCXL address flow, referencing SEC-SLURP docs.
|
||||||
|
|
||||||
|
## Roles & Responsibilities
|
||||||
|
- **SLURP Team:** finalize persistence build, implement beacon APIs, own Pin Steward worker.
|
||||||
|
- **WHOOSH Team:** wire beacon client, expose replication/encryption status in UI, capture council telemetry.
|
||||||
|
- **KACHING/SHHH Stakeholders:** validate telemetry ingestion and encryption custody notes.
|
||||||
|
- **Program Management:** schedule demo rehearsal, ensure Decision Records and UCXL addresses recorded.
|
||||||
|
|
||||||
|
## Tooling & Environments
|
||||||
|
- Local cluster via `docker compose up slurp whoosh pin-steward` (to be scripted in `commands/`).
|
||||||
|
- Use `make demo-sec-slurp` target to run integration harness (to be added).
|
||||||
|
- Prometheus/Grafana docker compose for metrics validation.
|
||||||
|
|
||||||
|
## Success Criteria
|
||||||
|
- Beacon manifest accessible from WHOOSH UI within 2s average latency.
|
||||||
|
- Pin Steward resolves under-replicated manifest within demo timeline (<30s) and records healing event.
|
||||||
|
- All demo steps logged with UCXL references and SHHH redaction checks passing.
|
||||||
|
|
||||||
|
## Open Items
|
||||||
|
- Need sample repo/issues to feed WHOOSH analyzer (consider `project-queues/active/WHOOSH/demo-data`).
|
||||||
|
- Determine minimal DHT cluster footprint for the demo (3 vs 5 nodes).
|
||||||
|
- Align on telemetry retention window for demo (24h?).
|
||||||
32
docs/progress/SEC-SLURP-1.1a-supplemental.md
Normal file
32
docs/progress/SEC-SLURP-1.1a-supplemental.md
Normal file
@@ -0,0 +1,32 @@
|
|||||||
|
# SEC-SLURP 1.1a – DHT Resilience Supplement
|
||||||
|
|
||||||
|
## Requirements (derived from `docs/Modules/DHT.md`)
|
||||||
|
|
||||||
|
1. **Real DHT state & persistence**
|
||||||
|
- Replace mock DHT usage with libp2p-based storage or equivalent real implementation.
|
||||||
|
- Store DHT/blockstore data on persistent volumes (named volumes/ZFS/NFS) with node placement constraints.
|
||||||
|
- Ensure bootstrap nodes are stateful and survive container churn.
|
||||||
|
|
||||||
|
2. **Pin Steward + replication policy**
|
||||||
|
- Introduce a Pin Steward service that tracks UCXL CID manifests and enforces replication factor (e.g. 3–5 replicas).
|
||||||
|
- Re-announce providers on Pulse/Reverb and heal under-replicated content.
|
||||||
|
- Schedule anti-entropy jobs to verify and repair replicas.
|
||||||
|
|
||||||
|
3. **Envelope encryption & shared key custody**
|
||||||
|
- Implement envelope encryption (DEK+KEK) with threshold/organizational custody rather than per-role ownership.
|
||||||
|
- Store KEK metadata with UCXL manifests; rotate via BACKBEAT.
|
||||||
|
- Update crypto/key-manager stubs to real implementations once available.
|
||||||
|
|
||||||
|
4. **Shared UCXL Beacon index**
|
||||||
|
- Maintain an authoritative CID registry (DR/UCXL) replicated outside individual agents.
|
||||||
|
- Ensure metadata updates are durable and role-agnostic to prevent stranded CIDs.
|
||||||
|
|
||||||
|
5. **CI/SLO validation**
|
||||||
|
- Add automated tests/health checks covering provider refresh, replication factor, and persistent-storage guarantees.
|
||||||
|
- Gate releases on DHT resilience checks (provider TTLs, replica counts).
|
||||||
|
|
||||||
|
## Integration Path for SEC-SLURP 1.1
|
||||||
|
|
||||||
|
- Incorporate the above requirements as acceptance criteria alongside LevelDB persistence.
|
||||||
|
- Sequence work to: migrate DHT interactions, introduce Pin Steward, implement envelope crypto, and wire CI validation.
|
||||||
|
- Attach artifacts (Pin Steward design, envelope crypto spec, CI scripts) to the Phase 1 deliverable checklist.
|
||||||
@@ -5,10 +5,14 @@
|
|||||||
- Upgraded SLURP’s lifecycle so initialization bootstraps cached context data from disk, cache misses hydrate from persistence, successful `UpsertContext` calls write back to LevelDB, and shutdown closes the store with error telemetry.
|
- Upgraded SLURP’s lifecycle so initialization bootstraps cached context data from disk, cache misses hydrate from persistence, successful `UpsertContext` calls write back to LevelDB, and shutdown closes the store with error telemetry.
|
||||||
- Introduced `pkg/slurp/slurp_persistence_test.go` to confirm contexts survive process restarts and can be resolved after clearing in-memory caches.
|
- Introduced `pkg/slurp/slurp_persistence_test.go` to confirm contexts survive process restarts and can be resolved after clearing in-memory caches.
|
||||||
- Instrumented cache/persistence metrics so hit/miss ratios and storage failures are tracked for observability.
|
- Instrumented cache/persistence metrics so hit/miss ratios and storage failures are tracked for observability.
|
||||||
- Attempted `GOWORK=off go test ./pkg/slurp`; execution was blocked by legacy references to `config.Authority*` symbols in `pkg/slurp/context`, so the new test did not run.
|
- Implemented lightweight crypto/key-management stubs (`pkg/crypto/role_crypto_stub.go`, `pkg/crypto/key_manager_stub.go`) so SLURP modules compile while the production stack is ported.
|
||||||
|
- Updated DHT distribution and encrypted storage layers (`pkg/slurp/distribution/dht_impl.go`, `pkg/slurp/storage/encrypted_storage.go`) to use the crypto stubs, adding per-role fingerprints and durable decoding logic.
|
||||||
|
- Expanded storage metadata models (`pkg/slurp/storage/types.go`, `pkg/slurp/storage/backup_manager.go`) with fields referenced by backup/replication flows (progress, error messages, retention, data size).
|
||||||
|
- Incrementally stubbed/simplified distributed storage helpers to inch toward a compilable SLURP package.
|
||||||
|
- Attempted `GOWORK=off go test ./pkg/slurp`; the original authority-level blocker is resolved, but builds still fail in storage/index code due to remaining stub work (e.g., Bleve queries, DHT helpers).
|
||||||
|
|
||||||
## Recommended Next Steps
|
## Recommended Next Steps
|
||||||
- Address the `config.Authority*` symbol drift (or scope down the impacted packages) so the SLURP test suite can compile cleanly, then rerun `GOWORK=off go test ./pkg/slurp` to validate persistence changes.
|
- Stub the remaining storage/index dependencies (Bleve query scaffolding, UCXL helpers, `errorCh` queues, cache regex usage) or neutralize the heavy modules so that `GOWORK=off go test ./pkg/slurp` compiles and runs.
|
||||||
- Feed the durable store into the resolver and temporal graph implementations to finish the remaining Phase 1 SLURP roadmap items.
|
- Feed the durable store into the resolver and temporal graph implementations to finish the SEC-SLURP 1.1 milestone once the package builds cleanly.
|
||||||
- Expand Prometheus metrics and logging to track cache hit/miss ratios plus persistence errors for SEC-SLURP observability goals.
|
- Extend Prometheus metrics/logging to track cache hit/miss ratios plus persistence errors for observability alignment.
|
||||||
- Review unrelated changes on `feature/phase-4-real-providers` (e.g., docker-compose edits) and either align them with this roadmap work or revert to keep the branch focused.
|
- Review unrelated changes still tracked on `feature/phase-4-real-providers` (e.g., docker-compose edits) and either align them with this roadmap work or revert for focus.
|
||||||
|
|||||||
@@ -130,7 +130,27 @@ type ResolutionConfig struct {
|
|||||||
|
|
||||||
// SlurpConfig defines SLURP settings
|
// SlurpConfig defines SLURP settings
|
||||||
type SlurpConfig struct {
|
type SlurpConfig struct {
|
||||||
Enabled bool `yaml:"enabled"`
|
Enabled bool `yaml:"enabled"`
|
||||||
|
BaseURL string `yaml:"base_url"`
|
||||||
|
APIKey string `yaml:"api_key"`
|
||||||
|
Timeout time.Duration `yaml:"timeout"`
|
||||||
|
RetryCount int `yaml:"retry_count"`
|
||||||
|
RetryDelay time.Duration `yaml:"retry_delay"`
|
||||||
|
TemporalAnalysis SlurpTemporalAnalysisConfig `yaml:"temporal_analysis"`
|
||||||
|
Performance SlurpPerformanceConfig `yaml:"performance"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// SlurpTemporalAnalysisConfig captures temporal behaviour tuning for SLURP.
|
||||||
|
type SlurpTemporalAnalysisConfig struct {
|
||||||
|
MaxDecisionHops int `yaml:"max_decision_hops"`
|
||||||
|
StalenessCheckInterval time.Duration `yaml:"staleness_check_interval"`
|
||||||
|
StalenessThreshold float64 `yaml:"staleness_threshold"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// SlurpPerformanceConfig exposes performance related tunables for SLURP.
|
||||||
|
type SlurpPerformanceConfig struct {
|
||||||
|
MaxConcurrentResolutions int `yaml:"max_concurrent_resolutions"`
|
||||||
|
MetricsCollectionInterval time.Duration `yaml:"metrics_collection_interval"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// WHOOSHAPIConfig defines WHOOSH API integration settings
|
// WHOOSHAPIConfig defines WHOOSH API integration settings
|
||||||
@@ -211,7 +231,21 @@ func LoadFromEnvironment() (*Config, error) {
|
|||||||
},
|
},
|
||||||
},
|
},
|
||||||
Slurp: SlurpConfig{
|
Slurp: SlurpConfig{
|
||||||
Enabled: getEnvBoolOrDefault("CHORUS_SLURP_ENABLED", false),
|
Enabled: getEnvBoolOrDefault("CHORUS_SLURP_ENABLED", false),
|
||||||
|
BaseURL: getEnvOrDefault("CHORUS_SLURP_API_BASE_URL", "http://localhost:9090"),
|
||||||
|
APIKey: getEnvOrFileContent("CHORUS_SLURP_API_KEY", "CHORUS_SLURP_API_KEY_FILE"),
|
||||||
|
Timeout: getEnvDurationOrDefault("CHORUS_SLURP_API_TIMEOUT", 15*time.Second),
|
||||||
|
RetryCount: getEnvIntOrDefault("CHORUS_SLURP_API_RETRY_COUNT", 3),
|
||||||
|
RetryDelay: getEnvDurationOrDefault("CHORUS_SLURP_API_RETRY_DELAY", 2*time.Second),
|
||||||
|
TemporalAnalysis: SlurpTemporalAnalysisConfig{
|
||||||
|
MaxDecisionHops: getEnvIntOrDefault("CHORUS_SLURP_MAX_DECISION_HOPS", 5),
|
||||||
|
StalenessCheckInterval: getEnvDurationOrDefault("CHORUS_SLURP_STALENESS_CHECK_INTERVAL", 5*time.Minute),
|
||||||
|
StalenessThreshold: 0.2,
|
||||||
|
},
|
||||||
|
Performance: SlurpPerformanceConfig{
|
||||||
|
MaxConcurrentResolutions: getEnvIntOrDefault("CHORUS_SLURP_MAX_CONCURRENT_RESOLUTIONS", 4),
|
||||||
|
MetricsCollectionInterval: getEnvDurationOrDefault("CHORUS_SLURP_METRICS_COLLECTION_INTERVAL", time.Minute),
|
||||||
|
},
|
||||||
},
|
},
|
||||||
Security: SecurityConfig{
|
Security: SecurityConfig{
|
||||||
KeyRotationDays: getEnvIntOrDefault("CHORUS_KEY_ROTATION_DAYS", 30),
|
KeyRotationDays: getEnvIntOrDefault("CHORUS_KEY_ROTATION_DAYS", 30),
|
||||||
|
|||||||
23
pkg/crypto/key_manager_stub.go
Normal file
23
pkg/crypto/key_manager_stub.go
Normal file
@@ -0,0 +1,23 @@
|
|||||||
|
package crypto
|
||||||
|
|
||||||
|
import "time"
|
||||||
|
|
||||||
|
// GenerateKey returns a deterministic placeholder key identifier for the given role.
|
||||||
|
func (km *KeyManager) GenerateKey(role string) (string, error) {
|
||||||
|
return "stub-key-" + role, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// DeprecateKey is a no-op in the stub implementation.
|
||||||
|
func (km *KeyManager) DeprecateKey(keyID string) error {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetKeysForRotation mirrors SEC-SLURP-1.1 key rotation discovery while remaining inert.
|
||||||
|
func (km *KeyManager) GetKeysForRotation(maxAge time.Duration) ([]*KeyInfo, error) {
|
||||||
|
return nil, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// ValidateKeyFingerprint accepts all fingerprints in the stubbed environment.
|
||||||
|
func (km *KeyManager) ValidateKeyFingerprint(role, fingerprint string) bool {
|
||||||
|
return true
|
||||||
|
}
|
||||||
75
pkg/crypto/role_crypto_stub.go
Normal file
75
pkg/crypto/role_crypto_stub.go
Normal file
@@ -0,0 +1,75 @@
|
|||||||
|
package crypto
|
||||||
|
|
||||||
|
import (
|
||||||
|
"crypto/sha256"
|
||||||
|
"encoding/base64"
|
||||||
|
"encoding/json"
|
||||||
|
"fmt"
|
||||||
|
|
||||||
|
"chorus/pkg/config"
|
||||||
|
)
|
||||||
|
|
||||||
|
type RoleCrypto struct {
|
||||||
|
config *config.Config
|
||||||
|
}
|
||||||
|
|
||||||
|
func NewRoleCrypto(cfg *config.Config, _ interface{}, _ interface{}, _ interface{}) (*RoleCrypto, error) {
|
||||||
|
if cfg == nil {
|
||||||
|
return nil, fmt.Errorf("config cannot be nil")
|
||||||
|
}
|
||||||
|
return &RoleCrypto{config: cfg}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rc *RoleCrypto) EncryptForRole(data []byte, role string) ([]byte, string, error) {
|
||||||
|
if len(data) == 0 {
|
||||||
|
return []byte{}, rc.fingerprint(data), nil
|
||||||
|
}
|
||||||
|
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
|
||||||
|
base64.StdEncoding.Encode(encoded, data)
|
||||||
|
return encoded, rc.fingerprint(data), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rc *RoleCrypto) DecryptForRole(data []byte, role string, _ string) ([]byte, error) {
|
||||||
|
if len(data) == 0 {
|
||||||
|
return []byte{}, nil
|
||||||
|
}
|
||||||
|
decoded := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
|
||||||
|
n, err := base64.StdEncoding.Decode(decoded, data)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return decoded[:n], nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rc *RoleCrypto) EncryptContextForRoles(payload interface{}, roles []string, _ []string) ([]byte, error) {
|
||||||
|
raw, err := json.Marshal(payload)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(raw)))
|
||||||
|
base64.StdEncoding.Encode(encoded, raw)
|
||||||
|
return encoded, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rc *RoleCrypto) fingerprint(data []byte) string {
|
||||||
|
sum := sha256.Sum256(data)
|
||||||
|
return base64.StdEncoding.EncodeToString(sum[:])
|
||||||
|
}
|
||||||
|
|
||||||
|
type StorageAccessController interface {
|
||||||
|
CanStore(role, key string) bool
|
||||||
|
CanRetrieve(role, key string) bool
|
||||||
|
}
|
||||||
|
|
||||||
|
type StorageAuditLogger interface {
|
||||||
|
LogEncryptionOperation(role, key, operation string, success bool)
|
||||||
|
LogDecryptionOperation(role, key, operation string, success bool)
|
||||||
|
LogKeyRotation(role, keyID string, success bool, message string)
|
||||||
|
LogError(message string)
|
||||||
|
LogAccessDenial(role, key, operation string)
|
||||||
|
}
|
||||||
|
|
||||||
|
type KeyInfo struct {
|
||||||
|
Role string
|
||||||
|
KeyID string
|
||||||
|
}
|
||||||
284
pkg/slurp/alignment/stubs.go
Normal file
284
pkg/slurp/alignment/stubs.go
Normal file
@@ -0,0 +1,284 @@
|
|||||||
|
package alignment
|
||||||
|
|
||||||
|
import "time"
|
||||||
|
|
||||||
|
// GoalStatistics summarizes goal management metrics.
|
||||||
|
type GoalStatistics struct {
|
||||||
|
TotalGoals int
|
||||||
|
ActiveGoals int
|
||||||
|
Completed int
|
||||||
|
Archived int
|
||||||
|
LastUpdated time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// AlignmentGapAnalysis captures detected misalignments that require follow-up.
|
||||||
|
type AlignmentGapAnalysis struct {
|
||||||
|
Address string
|
||||||
|
Severity string
|
||||||
|
Findings []string
|
||||||
|
DetectedAt time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// AlignmentComparison provides a simple comparison view between two contexts.
|
||||||
|
type AlignmentComparison struct {
|
||||||
|
PrimaryScore float64
|
||||||
|
SecondaryScore float64
|
||||||
|
Differences []string
|
||||||
|
}
|
||||||
|
|
||||||
|
// AlignmentStatistics aggregates assessment metrics across contexts.
|
||||||
|
type AlignmentStatistics struct {
|
||||||
|
TotalAssessments int
|
||||||
|
AverageScore float64
|
||||||
|
SuccessRate float64
|
||||||
|
FailureRate float64
|
||||||
|
LastUpdated time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// ProgressHistory captures historical progress samples for a goal.
|
||||||
|
type ProgressHistory struct {
|
||||||
|
GoalID string
|
||||||
|
Samples []ProgressSample
|
||||||
|
}
|
||||||
|
|
||||||
|
// ProgressSample represents a single progress measurement.
|
||||||
|
type ProgressSample struct {
|
||||||
|
Timestamp time.Time
|
||||||
|
Percentage float64
|
||||||
|
}
|
||||||
|
|
||||||
|
// CompletionPrediction represents a simple completion forecast for a goal.
|
||||||
|
type CompletionPrediction struct {
|
||||||
|
GoalID string
|
||||||
|
EstimatedFinish time.Time
|
||||||
|
Confidence float64
|
||||||
|
}
|
||||||
|
|
||||||
|
// ProgressStatistics aggregates goal progress metrics.
|
||||||
|
type ProgressStatistics struct {
|
||||||
|
AverageCompletion float64
|
||||||
|
OpenGoals int
|
||||||
|
OnTrackGoals int
|
||||||
|
AtRiskGoals int
|
||||||
|
}
|
||||||
|
|
||||||
|
// DriftHistory tracks historical drift events.
|
||||||
|
type DriftHistory struct {
|
||||||
|
Address string
|
||||||
|
Events []DriftEvent
|
||||||
|
}
|
||||||
|
|
||||||
|
// DriftEvent captures a single drift occurrence.
|
||||||
|
type DriftEvent struct {
|
||||||
|
Timestamp time.Time
|
||||||
|
Severity DriftSeverity
|
||||||
|
Details string
|
||||||
|
}
|
||||||
|
|
||||||
|
// DriftThresholds defines sensitivity thresholds for drift detection.
|
||||||
|
type DriftThresholds struct {
|
||||||
|
SeverityThreshold DriftSeverity
|
||||||
|
ScoreDelta float64
|
||||||
|
ObservationWindow time.Duration
|
||||||
|
}
|
||||||
|
|
||||||
|
// DriftPatternAnalysis summarizes detected drift patterns.
|
||||||
|
type DriftPatternAnalysis struct {
|
||||||
|
Patterns []string
|
||||||
|
Summary string
|
||||||
|
}
|
||||||
|
|
||||||
|
// DriftPrediction provides a lightweight stub for future drift forecasting.
|
||||||
|
type DriftPrediction struct {
|
||||||
|
Address string
|
||||||
|
Horizon time.Duration
|
||||||
|
Severity DriftSeverity
|
||||||
|
Confidence float64
|
||||||
|
}
|
||||||
|
|
||||||
|
// DriftAlert represents an alert emitted when drift exceeds thresholds.
|
||||||
|
type DriftAlert struct {
|
||||||
|
ID string
|
||||||
|
Address string
|
||||||
|
Severity DriftSeverity
|
||||||
|
CreatedAt time.Time
|
||||||
|
Message string
|
||||||
|
}
|
||||||
|
|
||||||
|
// GoalRecommendation summarises next actions for a specific goal.
|
||||||
|
type GoalRecommendation struct {
|
||||||
|
GoalID string
|
||||||
|
Title string
|
||||||
|
Description string
|
||||||
|
Priority int
|
||||||
|
}
|
||||||
|
|
||||||
|
// StrategicRecommendation captures higher-level alignment guidance.
|
||||||
|
type StrategicRecommendation struct {
|
||||||
|
Theme string
|
||||||
|
Summary string
|
||||||
|
Impact string
|
||||||
|
RecommendedBy string
|
||||||
|
}
|
||||||
|
|
||||||
|
// PrioritizedRecommendation wraps a recommendation with ranking metadata.
|
||||||
|
type PrioritizedRecommendation struct {
|
||||||
|
Recommendation *AlignmentRecommendation
|
||||||
|
Score float64
|
||||||
|
Rank int
|
||||||
|
}
|
||||||
|
|
||||||
|
// RecommendationHistory tracks lifecycle updates for a recommendation.
|
||||||
|
type RecommendationHistory struct {
|
||||||
|
RecommendationID string
|
||||||
|
Entries []RecommendationHistoryEntry
|
||||||
|
}
|
||||||
|
|
||||||
|
// RecommendationHistoryEntry represents a single change entry.
|
||||||
|
type RecommendationHistoryEntry struct {
|
||||||
|
Timestamp time.Time
|
||||||
|
Status ImplementationStatus
|
||||||
|
Notes string
|
||||||
|
}
|
||||||
|
|
||||||
|
// ImplementationStatus reflects execution state for recommendations.
|
||||||
|
type ImplementationStatus string
|
||||||
|
|
||||||
|
const (
|
||||||
|
ImplementationPending ImplementationStatus = "pending"
|
||||||
|
ImplementationActive ImplementationStatus = "active"
|
||||||
|
ImplementationBlocked ImplementationStatus = "blocked"
|
||||||
|
ImplementationDone ImplementationStatus = "completed"
|
||||||
|
)
|
||||||
|
|
||||||
|
// RecommendationEffectiveness offers coarse metrics on outcome quality.
|
||||||
|
type RecommendationEffectiveness struct {
|
||||||
|
SuccessRate float64
|
||||||
|
AverageTime time.Duration
|
||||||
|
Feedback []string
|
||||||
|
}
|
||||||
|
|
||||||
|
// RecommendationStatistics aggregates recommendation issuance metrics.
|
||||||
|
type RecommendationStatistics struct {
|
||||||
|
TotalCreated int
|
||||||
|
TotalCompleted int
|
||||||
|
AveragePriority float64
|
||||||
|
LastUpdated time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// AlignmentMetrics is a lightweight placeholder exported for engine integration.
|
||||||
|
type AlignmentMetrics struct {
|
||||||
|
Assessments int
|
||||||
|
SuccessRate float64
|
||||||
|
FailureRate float64
|
||||||
|
AverageScore float64
|
||||||
|
}
|
||||||
|
|
||||||
|
// GoalMetrics is a stub summarising per-goal metrics.
|
||||||
|
type GoalMetrics struct {
|
||||||
|
GoalID string
|
||||||
|
AverageScore float64
|
||||||
|
SuccessRate float64
|
||||||
|
LastUpdated time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// ProgressMetrics is a stub capturing aggregate progress data.
|
||||||
|
type ProgressMetrics struct {
|
||||||
|
OverallCompletion float64
|
||||||
|
ActiveGoals int
|
||||||
|
CompletedGoals int
|
||||||
|
UpdatedAt time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// MetricsTrends wraps high-level trend information.
|
||||||
|
type MetricsTrends struct {
|
||||||
|
Metric string
|
||||||
|
TrendLine []float64
|
||||||
|
Timestamp time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// MetricsReport represents a generated metrics report placeholder.
|
||||||
|
type MetricsReport struct {
|
||||||
|
ID string
|
||||||
|
Generated time.Time
|
||||||
|
Summary string
|
||||||
|
}
|
||||||
|
|
||||||
|
// MetricsConfiguration reflects configuration for metrics collection.
|
||||||
|
type MetricsConfiguration struct {
|
||||||
|
Enabled bool
|
||||||
|
Interval time.Duration
|
||||||
|
}
|
||||||
|
|
||||||
|
// SyncResult summarises a synchronisation run.
|
||||||
|
type SyncResult struct {
|
||||||
|
SyncedItems int
|
||||||
|
Errors []string
|
||||||
|
}
|
||||||
|
|
||||||
|
// ImportResult summarises the outcome of an import operation.
|
||||||
|
type ImportResult struct {
|
||||||
|
Imported int
|
||||||
|
Skipped int
|
||||||
|
Errors []string
|
||||||
|
}
|
||||||
|
|
||||||
|
// SyncSettings captures synchronisation preferences.
|
||||||
|
type SyncSettings struct {
|
||||||
|
Enabled bool
|
||||||
|
Interval time.Duration
|
||||||
|
}
|
||||||
|
|
||||||
|
// SyncStatus provides health information about sync processes.
|
||||||
|
type SyncStatus struct {
|
||||||
|
LastSync time.Time
|
||||||
|
Healthy bool
|
||||||
|
Message string
|
||||||
|
}
|
||||||
|
|
||||||
|
// AssessmentValidation provides validation results for assessments.
|
||||||
|
type AssessmentValidation struct {
|
||||||
|
Valid bool
|
||||||
|
Issues []string
|
||||||
|
CheckedAt time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// ConfigurationValidation summarises configuration validation status.
|
||||||
|
type ConfigurationValidation struct {
|
||||||
|
Valid bool
|
||||||
|
Messages []string
|
||||||
|
}
|
||||||
|
|
||||||
|
// WeightsValidation describes validation for weighting schemes.
|
||||||
|
type WeightsValidation struct {
|
||||||
|
Normalized bool
|
||||||
|
Adjustments map[string]float64
|
||||||
|
}
|
||||||
|
|
||||||
|
// ConsistencyIssue represents a detected consistency issue.
|
||||||
|
type ConsistencyIssue struct {
|
||||||
|
Description string
|
||||||
|
Severity DriftSeverity
|
||||||
|
DetectedAt time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// AlignmentHealthCheck is a stub for health check outputs.
|
||||||
|
type AlignmentHealthCheck struct {
|
||||||
|
Status string
|
||||||
|
Details string
|
||||||
|
CheckedAt time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
// NotificationRules captures notification configuration stubs.
|
||||||
|
type NotificationRules struct {
|
||||||
|
Enabled bool
|
||||||
|
Channels []string
|
||||||
|
}
|
||||||
|
|
||||||
|
// NotificationRecord represents a delivered notification.
|
||||||
|
type NotificationRecord struct {
|
||||||
|
ID string
|
||||||
|
Timestamp time.Time
|
||||||
|
Recipient string
|
||||||
|
Status string
|
||||||
|
}
|
||||||
@@ -4,176 +4,175 @@ import (
|
|||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
"chorus/pkg/ucxl"
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
// ProjectGoal represents a high-level project objective
|
// ProjectGoal represents a high-level project objective
|
||||||
type ProjectGoal struct {
|
type ProjectGoal struct {
|
||||||
ID string `json:"id"` // Unique identifier
|
ID string `json:"id"` // Unique identifier
|
||||||
Name string `json:"name"` // Goal name
|
Name string `json:"name"` // Goal name
|
||||||
Description string `json:"description"` // Detailed description
|
Description string `json:"description"` // Detailed description
|
||||||
Keywords []string `json:"keywords"` // Associated keywords
|
Keywords []string `json:"keywords"` // Associated keywords
|
||||||
Priority int `json:"priority"` // Priority level (1=highest)
|
Priority int `json:"priority"` // Priority level (1=highest)
|
||||||
Phase string `json:"phase"` // Project phase
|
Phase string `json:"phase"` // Project phase
|
||||||
Category string `json:"category"` // Goal category
|
Category string `json:"category"` // Goal category
|
||||||
Owner string `json:"owner"` // Goal owner
|
Owner string `json:"owner"` // Goal owner
|
||||||
Status GoalStatus `json:"status"` // Current status
|
Status GoalStatus `json:"status"` // Current status
|
||||||
|
|
||||||
// Success criteria
|
// Success criteria
|
||||||
Metrics []string `json:"metrics"` // Success metrics
|
Metrics []string `json:"metrics"` // Success metrics
|
||||||
SuccessCriteria []*SuccessCriterion `json:"success_criteria"` // Detailed success criteria
|
SuccessCriteria []*SuccessCriterion `json:"success_criteria"` // Detailed success criteria
|
||||||
AcceptanceCriteria []string `json:"acceptance_criteria"` // Acceptance criteria
|
AcceptanceCriteria []string `json:"acceptance_criteria"` // Acceptance criteria
|
||||||
|
|
||||||
// Timeline
|
// Timeline
|
||||||
StartDate *time.Time `json:"start_date,omitempty"` // Goal start date
|
StartDate *time.Time `json:"start_date,omitempty"` // Goal start date
|
||||||
TargetDate *time.Time `json:"target_date,omitempty"` // Target completion date
|
TargetDate *time.Time `json:"target_date,omitempty"` // Target completion date
|
||||||
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
|
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
|
||||||
|
|
||||||
// Relationships
|
// Relationships
|
||||||
ParentGoalID *string `json:"parent_goal_id,omitempty"` // Parent goal
|
ParentGoalID *string `json:"parent_goal_id,omitempty"` // Parent goal
|
||||||
ChildGoalIDs []string `json:"child_goal_ids"` // Child goals
|
ChildGoalIDs []string `json:"child_goal_ids"` // Child goals
|
||||||
Dependencies []string `json:"dependencies"` // Goal dependencies
|
Dependencies []string `json:"dependencies"` // Goal dependencies
|
||||||
|
|
||||||
// Configuration
|
// Configuration
|
||||||
Weights *GoalWeights `json:"weights"` // Assessment weights
|
Weights *GoalWeights `json:"weights"` // Assessment weights
|
||||||
ThresholdScore float64 `json:"threshold_score"` // Minimum alignment score
|
ThresholdScore float64 `json:"threshold_score"` // Minimum alignment score
|
||||||
|
|
||||||
// Metadata
|
// Metadata
|
||||||
CreatedAt time.Time `json:"created_at"` // When created
|
CreatedAt time.Time `json:"created_at"` // When created
|
||||||
UpdatedAt time.Time `json:"updated_at"` // When last updated
|
UpdatedAt time.Time `json:"updated_at"` // When last updated
|
||||||
CreatedBy string `json:"created_by"` // Who created it
|
CreatedBy string `json:"created_by"` // Who created it
|
||||||
Tags []string `json:"tags"` // Goal tags
|
Tags []string `json:"tags"` // Goal tags
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// GoalStatus represents the current status of a goal
|
// GoalStatus represents the current status of a goal
|
||||||
type GoalStatus string
|
type GoalStatus string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
GoalStatusDraft GoalStatus = "draft" // Goal is in draft state
|
GoalStatusDraft GoalStatus = "draft" // Goal is in draft state
|
||||||
GoalStatusActive GoalStatus = "active" // Goal is active
|
GoalStatusActive GoalStatus = "active" // Goal is active
|
||||||
GoalStatusOnHold GoalStatus = "on_hold" // Goal is on hold
|
GoalStatusOnHold GoalStatus = "on_hold" // Goal is on hold
|
||||||
GoalStatusCompleted GoalStatus = "completed" // Goal is completed
|
GoalStatusCompleted GoalStatus = "completed" // Goal is completed
|
||||||
GoalStatusCancelled GoalStatus = "cancelled" // Goal is cancelled
|
GoalStatusCancelled GoalStatus = "cancelled" // Goal is cancelled
|
||||||
GoalStatusArchived GoalStatus = "archived" // Goal is archived
|
GoalStatusArchived GoalStatus = "archived" // Goal is archived
|
||||||
)
|
)
|
||||||
|
|
||||||
// SuccessCriterion represents a specific success criterion for a goal
|
// SuccessCriterion represents a specific success criterion for a goal
|
||||||
type SuccessCriterion struct {
|
type SuccessCriterion struct {
|
||||||
ID string `json:"id"` // Criterion ID
|
ID string `json:"id"` // Criterion ID
|
||||||
Description string `json:"description"` // Criterion description
|
Description string `json:"description"` // Criterion description
|
||||||
MetricName string `json:"metric_name"` // Associated metric
|
MetricName string `json:"metric_name"` // Associated metric
|
||||||
TargetValue interface{} `json:"target_value"` // Target value
|
TargetValue interface{} `json:"target_value"` // Target value
|
||||||
CurrentValue interface{} `json:"current_value"` // Current value
|
CurrentValue interface{} `json:"current_value"` // Current value
|
||||||
Unit string `json:"unit"` // Value unit
|
Unit string `json:"unit"` // Value unit
|
||||||
ComparisonOp string `json:"comparison_op"` // Comparison operator (>=, <=, ==, etc.)
|
ComparisonOp string `json:"comparison_op"` // Comparison operator (>=, <=, ==, etc.)
|
||||||
Weight float64 `json:"weight"` // Criterion weight
|
Weight float64 `json:"weight"` // Criterion weight
|
||||||
Achieved bool `json:"achieved"` // Whether achieved
|
Achieved bool `json:"achieved"` // Whether achieved
|
||||||
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
|
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
|
||||||
}
|
}
|
||||||
|
|
||||||
// GoalWeights represents weights for different aspects of goal alignment assessment
|
// GoalWeights represents weights for different aspects of goal alignment assessment
|
||||||
type GoalWeights struct {
|
type GoalWeights struct {
|
||||||
KeywordMatch float64 `json:"keyword_match"` // Weight for keyword matching
|
KeywordMatch float64 `json:"keyword_match"` // Weight for keyword matching
|
||||||
SemanticAlignment float64 `json:"semantic_alignment"` // Weight for semantic alignment
|
SemanticAlignment float64 `json:"semantic_alignment"` // Weight for semantic alignment
|
||||||
PurposeAlignment float64 `json:"purpose_alignment"` // Weight for purpose alignment
|
PurposeAlignment float64 `json:"purpose_alignment"` // Weight for purpose alignment
|
||||||
TechnologyMatch float64 `json:"technology_match"` // Weight for technology matching
|
TechnologyMatch float64 `json:"technology_match"` // Weight for technology matching
|
||||||
QualityScore float64 `json:"quality_score"` // Weight for context quality
|
QualityScore float64 `json:"quality_score"` // Weight for context quality
|
||||||
RecentActivity float64 `json:"recent_activity"` // Weight for recent activity
|
RecentActivity float64 `json:"recent_activity"` // Weight for recent activity
|
||||||
ImportanceScore float64 `json:"importance_score"` // Weight for component importance
|
ImportanceScore float64 `json:"importance_score"` // Weight for component importance
|
||||||
}
|
}
|
||||||
|
|
||||||
// AlignmentAssessment represents overall alignment assessment for a context
|
// AlignmentAssessment represents overall alignment assessment for a context
|
||||||
type AlignmentAssessment struct {
|
type AlignmentAssessment struct {
|
||||||
Address ucxl.Address `json:"address"` // Context address
|
Address ucxl.Address `json:"address"` // Context address
|
||||||
OverallScore float64 `json:"overall_score"` // Overall alignment score (0-1)
|
OverallScore float64 `json:"overall_score"` // Overall alignment score (0-1)
|
||||||
GoalAlignments []*GoalAlignment `json:"goal_alignments"` // Individual goal alignments
|
GoalAlignments []*GoalAlignment `json:"goal_alignments"` // Individual goal alignments
|
||||||
StrengthAreas []string `json:"strength_areas"` // Areas of strong alignment
|
StrengthAreas []string `json:"strength_areas"` // Areas of strong alignment
|
||||||
WeaknessAreas []string `json:"weakness_areas"` // Areas of weak alignment
|
WeaknessAreas []string `json:"weakness_areas"` // Areas of weak alignment
|
||||||
Recommendations []*AlignmentRecommendation `json:"recommendations"` // Improvement recommendations
|
Recommendations []*AlignmentRecommendation `json:"recommendations"` // Improvement recommendations
|
||||||
AssessedAt time.Time `json:"assessed_at"` // When assessment was performed
|
AssessedAt time.Time `json:"assessed_at"` // When assessment was performed
|
||||||
AssessmentVersion string `json:"assessment_version"` // Assessment algorithm version
|
AssessmentVersion string `json:"assessment_version"` // Assessment algorithm version
|
||||||
Confidence float64 `json:"confidence"` // Assessment confidence (0-1)
|
Confidence float64 `json:"confidence"` // Assessment confidence (0-1)
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// GoalAlignment represents alignment assessment for a specific goal
|
// GoalAlignment represents alignment assessment for a specific goal
|
||||||
type GoalAlignment struct {
|
type GoalAlignment struct {
|
||||||
GoalID string `json:"goal_id"` // Goal identifier
|
GoalID string `json:"goal_id"` // Goal identifier
|
||||||
GoalName string `json:"goal_name"` // Goal name
|
GoalName string `json:"goal_name"` // Goal name
|
||||||
AlignmentScore float64 `json:"alignment_score"` // Alignment score (0-1)
|
AlignmentScore float64 `json:"alignment_score"` // Alignment score (0-1)
|
||||||
ComponentScores *AlignmentScores `json:"component_scores"` // Component-wise scores
|
ComponentScores *AlignmentScores `json:"component_scores"` // Component-wise scores
|
||||||
MatchedKeywords []string `json:"matched_keywords"` // Keywords that matched
|
MatchedKeywords []string `json:"matched_keywords"` // Keywords that matched
|
||||||
MatchedCriteria []string `json:"matched_criteria"` // Criteria that matched
|
MatchedCriteria []string `json:"matched_criteria"` // Criteria that matched
|
||||||
Explanation string `json:"explanation"` // Alignment explanation
|
Explanation string `json:"explanation"` // Alignment explanation
|
||||||
ConfidenceLevel float64 `json:"confidence_level"` // Confidence in assessment
|
ConfidenceLevel float64 `json:"confidence_level"` // Confidence in assessment
|
||||||
ImprovementAreas []string `json:"improvement_areas"` // Areas for improvement
|
ImprovementAreas []string `json:"improvement_areas"` // Areas for improvement
|
||||||
Strengths []string `json:"strengths"` // Alignment strengths
|
Strengths []string `json:"strengths"` // Alignment strengths
|
||||||
}
|
}
|
||||||
|
|
||||||
// AlignmentScores represents component scores for alignment assessment
|
// AlignmentScores represents component scores for alignment assessment
|
||||||
type AlignmentScores struct {
|
type AlignmentScores struct {
|
||||||
KeywordScore float64 `json:"keyword_score"` // Keyword matching score
|
KeywordScore float64 `json:"keyword_score"` // Keyword matching score
|
||||||
SemanticScore float64 `json:"semantic_score"` // Semantic alignment score
|
SemanticScore float64 `json:"semantic_score"` // Semantic alignment score
|
||||||
PurposeScore float64 `json:"purpose_score"` // Purpose alignment score
|
PurposeScore float64 `json:"purpose_score"` // Purpose alignment score
|
||||||
TechnologyScore float64 `json:"technology_score"` // Technology alignment score
|
TechnologyScore float64 `json:"technology_score"` // Technology alignment score
|
||||||
QualityScore float64 `json:"quality_score"` // Context quality score
|
QualityScore float64 `json:"quality_score"` // Context quality score
|
||||||
ActivityScore float64 `json:"activity_score"` // Recent activity score
|
ActivityScore float64 `json:"activity_score"` // Recent activity score
|
||||||
ImportanceScore float64 `json:"importance_score"` // Component importance score
|
ImportanceScore float64 `json:"importance_score"` // Component importance score
|
||||||
}
|
}
|
||||||
|
|
||||||
// AlignmentRecommendation represents a recommendation for improving alignment
|
// AlignmentRecommendation represents a recommendation for improving alignment
|
||||||
type AlignmentRecommendation struct {
|
type AlignmentRecommendation struct {
|
||||||
ID string `json:"id"` // Recommendation ID
|
ID string `json:"id"` // Recommendation ID
|
||||||
Type RecommendationType `json:"type"` // Recommendation type
|
Type RecommendationType `json:"type"` // Recommendation type
|
||||||
Priority int `json:"priority"` // Priority (1=highest)
|
Priority int `json:"priority"` // Priority (1=highest)
|
||||||
Title string `json:"title"` // Recommendation title
|
Title string `json:"title"` // Recommendation title
|
||||||
Description string `json:"description"` // Detailed description
|
Description string `json:"description"` // Detailed description
|
||||||
GoalID *string `json:"goal_id,omitempty"` // Related goal
|
GoalID *string `json:"goal_id,omitempty"` // Related goal
|
||||||
Address ucxl.Address `json:"address"` // Context address
|
Address ucxl.Address `json:"address"` // Context address
|
||||||
|
|
||||||
// Implementation details
|
// Implementation details
|
||||||
ActionItems []string `json:"action_items"` // Specific actions
|
ActionItems []string `json:"action_items"` // Specific actions
|
||||||
EstimatedEffort EffortLevel `json:"estimated_effort"` // Estimated effort
|
EstimatedEffort EffortLevel `json:"estimated_effort"` // Estimated effort
|
||||||
ExpectedImpact ImpactLevel `json:"expected_impact"` // Expected impact
|
ExpectedImpact ImpactLevel `json:"expected_impact"` // Expected impact
|
||||||
RequiredRoles []string `json:"required_roles"` // Required roles
|
RequiredRoles []string `json:"required_roles"` // Required roles
|
||||||
Prerequisites []string `json:"prerequisites"` // Prerequisites
|
Prerequisites []string `json:"prerequisites"` // Prerequisites
|
||||||
|
|
||||||
// Status tracking
|
// Status tracking
|
||||||
Status RecommendationStatus `json:"status"` // Implementation status
|
Status RecommendationStatus `json:"status"` // Implementation status
|
||||||
AssignedTo []string `json:"assigned_to"` // Assigned team members
|
AssignedTo []string `json:"assigned_to"` // Assigned team members
|
||||||
CreatedAt time.Time `json:"created_at"` // When created
|
CreatedAt time.Time `json:"created_at"` // When created
|
||||||
DueDate *time.Time `json:"due_date,omitempty"` // Implementation due date
|
DueDate *time.Time `json:"due_date,omitempty"` // Implementation due date
|
||||||
CompletedAt *time.Time `json:"completed_at,omitempty"` // When completed
|
CompletedAt *time.Time `json:"completed_at,omitempty"` // When completed
|
||||||
|
|
||||||
// Metadata
|
// Metadata
|
||||||
Tags []string `json:"tags"` // Recommendation tags
|
Tags []string `json:"tags"` // Recommendation tags
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// RecommendationType represents types of alignment recommendations
|
// RecommendationType represents types of alignment recommendations
|
||||||
type RecommendationType string
|
type RecommendationType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
RecommendationKeywordImprovement RecommendationType = "keyword_improvement" // Improve keyword matching
|
RecommendationKeywordImprovement RecommendationType = "keyword_improvement" // Improve keyword matching
|
||||||
RecommendationPurposeAlignment RecommendationType = "purpose_alignment" // Align purpose better
|
RecommendationPurposeAlignment RecommendationType = "purpose_alignment" // Align purpose better
|
||||||
RecommendationTechnologyUpdate RecommendationType = "technology_update" // Update technology usage
|
RecommendationTechnologyUpdate RecommendationType = "technology_update" // Update technology usage
|
||||||
RecommendationQualityImprovement RecommendationType = "quality_improvement" // Improve context quality
|
RecommendationQualityImprovement RecommendationType = "quality_improvement" // Improve context quality
|
||||||
RecommendationDocumentation RecommendationType = "documentation" // Add/improve documentation
|
RecommendationDocumentation RecommendationType = "documentation" // Add/improve documentation
|
||||||
RecommendationRefactoring RecommendationType = "refactoring" // Code refactoring
|
RecommendationRefactoring RecommendationType = "refactoring" // Code refactoring
|
||||||
RecommendationArchitectural RecommendationType = "architectural" // Architectural changes
|
RecommendationArchitectural RecommendationType = "architectural" // Architectural changes
|
||||||
RecommendationTesting RecommendationType = "testing" // Testing improvements
|
RecommendationTesting RecommendationType = "testing" // Testing improvements
|
||||||
RecommendationPerformance RecommendationType = "performance" // Performance optimization
|
RecommendationPerformance RecommendationType = "performance" // Performance optimization
|
||||||
RecommendationSecurity RecommendationType = "security" // Security enhancements
|
RecommendationSecurity RecommendationType = "security" // Security enhancements
|
||||||
)
|
)
|
||||||
|
|
||||||
// EffortLevel represents estimated effort levels
|
// EffortLevel represents estimated effort levels
|
||||||
type EffortLevel string
|
type EffortLevel string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
EffortLow EffortLevel = "low" // Low effort (1-2 hours)
|
EffortLow EffortLevel = "low" // Low effort (1-2 hours)
|
||||||
EffortMedium EffortLevel = "medium" // Medium effort (1-2 days)
|
EffortMedium EffortLevel = "medium" // Medium effort (1-2 days)
|
||||||
EffortHigh EffortLevel = "high" // High effort (1-2 weeks)
|
EffortHigh EffortLevel = "high" // High effort (1-2 weeks)
|
||||||
EffortVeryHigh EffortLevel = "very_high" // Very high effort (>2 weeks)
|
EffortVeryHigh EffortLevel = "very_high" // Very high effort (>2 weeks)
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -181,9 +180,9 @@ const (
|
|||||||
type ImpactLevel string
|
type ImpactLevel string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ImpactLow ImpactLevel = "low" // Low impact
|
ImpactLow ImpactLevel = "low" // Low impact
|
||||||
ImpactMedium ImpactLevel = "medium" // Medium impact
|
ImpactMedium ImpactLevel = "medium" // Medium impact
|
||||||
ImpactHigh ImpactLevel = "high" // High impact
|
ImpactHigh ImpactLevel = "high" // High impact
|
||||||
ImpactCritical ImpactLevel = "critical" // Critical impact
|
ImpactCritical ImpactLevel = "critical" // Critical impact
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -201,38 +200,38 @@ const (
|
|||||||
|
|
||||||
// GoalProgress represents progress toward goal achievement
|
// GoalProgress represents progress toward goal achievement
|
||||||
type GoalProgress struct {
|
type GoalProgress struct {
|
||||||
GoalID string `json:"goal_id"` // Goal identifier
|
GoalID string `json:"goal_id"` // Goal identifier
|
||||||
CompletionPercentage float64 `json:"completion_percentage"` // Completion percentage (0-100)
|
CompletionPercentage float64 `json:"completion_percentage"` // Completion percentage (0-100)
|
||||||
CriteriaProgress []*CriterionProgress `json:"criteria_progress"` // Progress for each criterion
|
CriteriaProgress []*CriterionProgress `json:"criteria_progress"` // Progress for each criterion
|
||||||
Milestones []*MilestoneProgress `json:"milestones"` // Milestone progress
|
Milestones []*MilestoneProgress `json:"milestones"` // Milestone progress
|
||||||
Velocity float64 `json:"velocity"` // Progress velocity (% per day)
|
Velocity float64 `json:"velocity"` // Progress velocity (% per day)
|
||||||
EstimatedCompletion *time.Time `json:"estimated_completion,omitempty"` // Estimated completion date
|
EstimatedCompletion *time.Time `json:"estimated_completion,omitempty"` // Estimated completion date
|
||||||
RiskFactors []string `json:"risk_factors"` // Identified risk factors
|
RiskFactors []string `json:"risk_factors"` // Identified risk factors
|
||||||
Blockers []string `json:"blockers"` // Current blockers
|
Blockers []string `json:"blockers"` // Current blockers
|
||||||
LastUpdated time.Time `json:"last_updated"` // When last updated
|
LastUpdated time.Time `json:"last_updated"` // When last updated
|
||||||
UpdatedBy string `json:"updated_by"` // Who last updated
|
UpdatedBy string `json:"updated_by"` // Who last updated
|
||||||
}
|
}
|
||||||
|
|
||||||
// CriterionProgress represents progress for a specific success criterion
|
// CriterionProgress represents progress for a specific success criterion
|
||||||
type CriterionProgress struct {
|
type CriterionProgress struct {
|
||||||
CriterionID string `json:"criterion_id"` // Criterion ID
|
CriterionID string `json:"criterion_id"` // Criterion ID
|
||||||
CurrentValue interface{} `json:"current_value"` // Current value
|
CurrentValue interface{} `json:"current_value"` // Current value
|
||||||
TargetValue interface{} `json:"target_value"` // Target value
|
TargetValue interface{} `json:"target_value"` // Target value
|
||||||
ProgressPercentage float64 `json:"progress_percentage"` // Progress percentage
|
ProgressPercentage float64 `json:"progress_percentage"` // Progress percentage
|
||||||
Achieved bool `json:"achieved"` // Whether achieved
|
Achieved bool `json:"achieved"` // Whether achieved
|
||||||
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
|
AchievedAt *time.Time `json:"achieved_at,omitempty"` // When achieved
|
||||||
Notes string `json:"notes"` // Progress notes
|
Notes string `json:"notes"` // Progress notes
|
||||||
}
|
}
|
||||||
|
|
||||||
// MilestoneProgress represents progress for a goal milestone
|
// MilestoneProgress represents progress for a goal milestone
|
||||||
type MilestoneProgress struct {
|
type MilestoneProgress struct {
|
||||||
MilestoneID string `json:"milestone_id"` // Milestone ID
|
MilestoneID string `json:"milestone_id"` // Milestone ID
|
||||||
Name string `json:"name"` // Milestone name
|
Name string `json:"name"` // Milestone name
|
||||||
Status MilestoneStatus `json:"status"` // Current status
|
Status MilestoneStatus `json:"status"` // Current status
|
||||||
CompletionPercentage float64 `json:"completion_percentage"` // Completion percentage
|
CompletionPercentage float64 `json:"completion_percentage"` // Completion percentage
|
||||||
PlannedDate time.Time `json:"planned_date"` // Planned completion date
|
PlannedDate time.Time `json:"planned_date"` // Planned completion date
|
||||||
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
|
ActualDate *time.Time `json:"actual_date,omitempty"` // Actual completion date
|
||||||
DelayReason string `json:"delay_reason"` // Reason for delay if applicable
|
DelayReason string `json:"delay_reason"` // Reason for delay if applicable
|
||||||
}
|
}
|
||||||
|
|
||||||
// MilestoneStatus represents status of a milestone
|
// MilestoneStatus represents status of a milestone
|
||||||
@@ -248,27 +247,27 @@ const (
|
|||||||
|
|
||||||
// AlignmentDrift represents detected alignment drift
|
// AlignmentDrift represents detected alignment drift
|
||||||
type AlignmentDrift struct {
|
type AlignmentDrift struct {
|
||||||
Address ucxl.Address `json:"address"` // Context address
|
Address ucxl.Address `json:"address"` // Context address
|
||||||
DriftType DriftType `json:"drift_type"` // Type of drift
|
DriftType DriftType `json:"drift_type"` // Type of drift
|
||||||
Severity DriftSeverity `json:"severity"` // Drift severity
|
Severity DriftSeverity `json:"severity"` // Drift severity
|
||||||
CurrentScore float64 `json:"current_score"` // Current alignment score
|
CurrentScore float64 `json:"current_score"` // Current alignment score
|
||||||
PreviousScore float64 `json:"previous_score"` // Previous alignment score
|
PreviousScore float64 `json:"previous_score"` // Previous alignment score
|
||||||
ScoreDelta float64 `json:"score_delta"` // Change in score
|
ScoreDelta float64 `json:"score_delta"` // Change in score
|
||||||
AffectedGoals []string `json:"affected_goals"` // Goals affected by drift
|
AffectedGoals []string `json:"affected_goals"` // Goals affected by drift
|
||||||
DetectedAt time.Time `json:"detected_at"` // When drift was detected
|
DetectedAt time.Time `json:"detected_at"` // When drift was detected
|
||||||
DriftReason []string `json:"drift_reason"` // Reasons for drift
|
DriftReason []string `json:"drift_reason"` // Reasons for drift
|
||||||
RecommendedActions []string `json:"recommended_actions"` // Recommended actions
|
RecommendedActions []string `json:"recommended_actions"` // Recommended actions
|
||||||
Priority DriftPriority `json:"priority"` // Priority for addressing
|
Priority DriftPriority `json:"priority"` // Priority for addressing
|
||||||
}
|
}
|
||||||
|
|
||||||
// DriftType represents types of alignment drift
|
// DriftType represents types of alignment drift
|
||||||
type DriftType string
|
type DriftType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
DriftTypeGradual DriftType = "gradual" // Gradual drift over time
|
DriftTypeGradual DriftType = "gradual" // Gradual drift over time
|
||||||
DriftTypeSudden DriftType = "sudden" // Sudden drift
|
DriftTypeSudden DriftType = "sudden" // Sudden drift
|
||||||
DriftTypeOscillating DriftType = "oscillating" // Oscillating drift pattern
|
DriftTypeOscillating DriftType = "oscillating" // Oscillating drift pattern
|
||||||
DriftTypeGoalChange DriftType = "goal_change" // Due to goal changes
|
DriftTypeGoalChange DriftType = "goal_change" // Due to goal changes
|
||||||
DriftTypeContextChange DriftType = "context_change" // Due to context changes
|
DriftTypeContextChange DriftType = "context_change" // Due to context changes
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -286,68 +285,68 @@ const (
|
|||||||
type DriftPriority string
|
type DriftPriority string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
DriftPriorityLow DriftPriority = "low" // Low priority
|
DriftPriorityLow DriftPriority = "low" // Low priority
|
||||||
DriftPriorityMedium DriftPriority = "medium" // Medium priority
|
DriftPriorityMedium DriftPriority = "medium" // Medium priority
|
||||||
DriftPriorityHigh DriftPriority = "high" // High priority
|
DriftPriorityHigh DriftPriority = "high" // High priority
|
||||||
DriftPriorityUrgent DriftPriority = "urgent" // Urgent priority
|
DriftPriorityUrgent DriftPriority = "urgent" // Urgent priority
|
||||||
)
|
)
|
||||||
|
|
||||||
// AlignmentTrends represents alignment trends over time
|
// AlignmentTrends represents alignment trends over time
|
||||||
type AlignmentTrends struct {
|
type AlignmentTrends struct {
|
||||||
Address ucxl.Address `json:"address"` // Context address
|
Address ucxl.Address `json:"address"` // Context address
|
||||||
TimeRange time.Duration `json:"time_range"` // Analyzed time range
|
TimeRange time.Duration `json:"time_range"` // Analyzed time range
|
||||||
DataPoints []*TrendDataPoint `json:"data_points"` // Trend data points
|
DataPoints []*TrendDataPoint `json:"data_points"` // Trend data points
|
||||||
OverallTrend TrendDirection `json:"overall_trend"` // Overall trend direction
|
OverallTrend TrendDirection `json:"overall_trend"` // Overall trend direction
|
||||||
TrendStrength float64 `json:"trend_strength"` // Trend strength (0-1)
|
TrendStrength float64 `json:"trend_strength"` // Trend strength (0-1)
|
||||||
Volatility float64 `json:"volatility"` // Score volatility
|
Volatility float64 `json:"volatility"` // Score volatility
|
||||||
SeasonalPatterns []*SeasonalPattern `json:"seasonal_patterns"` // Detected seasonal patterns
|
SeasonalPatterns []*SeasonalPattern `json:"seasonal_patterns"` // Detected seasonal patterns
|
||||||
AnomalousPoints []*AnomalousPoint `json:"anomalous_points"` // Anomalous data points
|
AnomalousPoints []*AnomalousPoint `json:"anomalous_points"` // Anomalous data points
|
||||||
Predictions []*TrendPrediction `json:"predictions"` // Future trend predictions
|
Predictions []*TrendPrediction `json:"predictions"` // Future trend predictions
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
||||||
}
|
}
|
||||||
|
|
||||||
// TrendDataPoint represents a single data point in alignment trends
|
// TrendDataPoint represents a single data point in alignment trends
|
||||||
type TrendDataPoint struct {
|
type TrendDataPoint struct {
|
||||||
Timestamp time.Time `json:"timestamp"` // Data point timestamp
|
Timestamp time.Time `json:"timestamp"` // Data point timestamp
|
||||||
AlignmentScore float64 `json:"alignment_score"` // Alignment score at this time
|
AlignmentScore float64 `json:"alignment_score"` // Alignment score at this time
|
||||||
GoalScores map[string]float64 `json:"goal_scores"` // Individual goal scores
|
GoalScores map[string]float64 `json:"goal_scores"` // Individual goal scores
|
||||||
Events []string `json:"events"` // Events that occurred around this time
|
Events []string `json:"events"` // Events that occurred around this time
|
||||||
}
|
}
|
||||||
|
|
||||||
// TrendDirection represents direction of alignment trends
|
// TrendDirection represents direction of alignment trends
|
||||||
type TrendDirection string
|
type TrendDirection string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
TrendDirectionImproving TrendDirection = "improving" // Improving trend
|
TrendDirectionImproving TrendDirection = "improving" // Improving trend
|
||||||
TrendDirectionDeclining TrendDirection = "declining" // Declining trend
|
TrendDirectionDeclining TrendDirection = "declining" // Declining trend
|
||||||
TrendDirectionStable TrendDirection = "stable" // Stable trend
|
TrendDirectionStable TrendDirection = "stable" // Stable trend
|
||||||
TrendDirectionVolatile TrendDirection = "volatile" // Volatile trend
|
TrendDirectionVolatile TrendDirection = "volatile" // Volatile trend
|
||||||
)
|
)
|
||||||
|
|
||||||
// SeasonalPattern represents a detected seasonal pattern in alignment
|
// SeasonalPattern represents a detected seasonal pattern in alignment
|
||||||
type SeasonalPattern struct {
|
type SeasonalPattern struct {
|
||||||
PatternType string `json:"pattern_type"` // Type of pattern (weekly, monthly, etc.)
|
PatternType string `json:"pattern_type"` // Type of pattern (weekly, monthly, etc.)
|
||||||
Period time.Duration `json:"period"` // Pattern period
|
Period time.Duration `json:"period"` // Pattern period
|
||||||
Amplitude float64 `json:"amplitude"` // Pattern amplitude
|
Amplitude float64 `json:"amplitude"` // Pattern amplitude
|
||||||
Confidence float64 `json:"confidence"` // Pattern confidence
|
Confidence float64 `json:"confidence"` // Pattern confidence
|
||||||
Description string `json:"description"` // Pattern description
|
Description string `json:"description"` // Pattern description
|
||||||
}
|
}
|
||||||
|
|
||||||
// AnomalousPoint represents an anomalous data point
|
// AnomalousPoint represents an anomalous data point
|
||||||
type AnomalousPoint struct {
|
type AnomalousPoint struct {
|
||||||
Timestamp time.Time `json:"timestamp"` // When anomaly occurred
|
Timestamp time.Time `json:"timestamp"` // When anomaly occurred
|
||||||
ExpectedScore float64 `json:"expected_score"` // Expected alignment score
|
ExpectedScore float64 `json:"expected_score"` // Expected alignment score
|
||||||
ActualScore float64 `json:"actual_score"` // Actual alignment score
|
ActualScore float64 `json:"actual_score"` // Actual alignment score
|
||||||
AnomalyScore float64 `json:"anomaly_score"` // Anomaly score
|
AnomalyScore float64 `json:"anomaly_score"` // Anomaly score
|
||||||
PossibleCauses []string `json:"possible_causes"` // Possible causes
|
PossibleCauses []string `json:"possible_causes"` // Possible causes
|
||||||
}
|
}
|
||||||
|
|
||||||
// TrendPrediction represents a prediction of future alignment trends
|
// TrendPrediction represents a prediction of future alignment trends
|
||||||
type TrendPrediction struct {
|
type TrendPrediction struct {
|
||||||
Timestamp time.Time `json:"timestamp"` // Predicted timestamp
|
Timestamp time.Time `json:"timestamp"` // Predicted timestamp
|
||||||
PredictedScore float64 `json:"predicted_score"` // Predicted alignment score
|
PredictedScore float64 `json:"predicted_score"` // Predicted alignment score
|
||||||
ConfidenceInterval *ConfidenceInterval `json:"confidence_interval"` // Confidence interval
|
ConfidenceInterval *ConfidenceInterval `json:"confidence_interval"` // Confidence interval
|
||||||
Probability float64 `json:"probability"` // Prediction probability
|
Probability float64 `json:"probability"` // Prediction probability
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConfidenceInterval represents a confidence interval for predictions
|
// ConfidenceInterval represents a confidence interval for predictions
|
||||||
@@ -359,21 +358,21 @@ type ConfidenceInterval struct {
|
|||||||
|
|
||||||
// AlignmentWeights represents weights for alignment calculation
|
// AlignmentWeights represents weights for alignment calculation
|
||||||
type AlignmentWeights struct {
|
type AlignmentWeights struct {
|
||||||
GoalWeights map[string]float64 `json:"goal_weights"` // Weights by goal ID
|
GoalWeights map[string]float64 `json:"goal_weights"` // Weights by goal ID
|
||||||
CategoryWeights map[string]float64 `json:"category_weights"` // Weights by goal category
|
CategoryWeights map[string]float64 `json:"category_weights"` // Weights by goal category
|
||||||
PriorityWeights map[int]float64 `json:"priority_weights"` // Weights by priority level
|
PriorityWeights map[int]float64 `json:"priority_weights"` // Weights by priority level
|
||||||
PhaseWeights map[string]float64 `json:"phase_weights"` // Weights by project phase
|
PhaseWeights map[string]float64 `json:"phase_weights"` // Weights by project phase
|
||||||
RoleWeights map[string]float64 `json:"role_weights"` // Weights by role
|
RoleWeights map[string]float64 `json:"role_weights"` // Weights by role
|
||||||
ComponentWeights *AlignmentScores `json:"component_weights"` // Weights for score components
|
ComponentWeights *AlignmentScores `json:"component_weights"` // Weights for score components
|
||||||
TemporalWeights *TemporalWeights `json:"temporal_weights"` // Temporal weighting factors
|
TemporalWeights *TemporalWeights `json:"temporal_weights"` // Temporal weighting factors
|
||||||
}
|
}
|
||||||
|
|
||||||
// TemporalWeights represents temporal weighting factors
|
// TemporalWeights represents temporal weighting factors
|
||||||
type TemporalWeights struct {
|
type TemporalWeights struct {
|
||||||
RecentWeight float64 `json:"recent_weight"` // Weight for recent changes
|
RecentWeight float64 `json:"recent_weight"` // Weight for recent changes
|
||||||
DecayFactor float64 `json:"decay_factor"` // Score decay factor over time
|
DecayFactor float64 `json:"decay_factor"` // Score decay factor over time
|
||||||
RecencyWindow time.Duration `json:"recency_window"` // Window for considering recent activity
|
RecencyWindow time.Duration `json:"recency_window"` // Window for considering recent activity
|
||||||
HistoricalWeight float64 `json:"historical_weight"` // Weight for historical alignment
|
HistoricalWeight float64 `json:"historical_weight"` // Weight for historical alignment
|
||||||
}
|
}
|
||||||
|
|
||||||
// GoalFilter represents filtering criteria for goal listing
|
// GoalFilter represents filtering criteria for goal listing
|
||||||
@@ -393,55 +392,55 @@ type GoalFilter struct {
|
|||||||
|
|
||||||
// GoalHierarchy represents the hierarchical structure of goals
|
// GoalHierarchy represents the hierarchical structure of goals
|
||||||
type GoalHierarchy struct {
|
type GoalHierarchy struct {
|
||||||
RootGoals []*GoalNode `json:"root_goals"` // Root level goals
|
RootGoals []*GoalNode `json:"root_goals"` // Root level goals
|
||||||
MaxDepth int `json:"max_depth"` // Maximum hierarchy depth
|
MaxDepth int `json:"max_depth"` // Maximum hierarchy depth
|
||||||
TotalGoals int `json:"total_goals"` // Total number of goals
|
TotalGoals int `json:"total_goals"` // Total number of goals
|
||||||
GeneratedAt time.Time `json:"generated_at"` // When hierarchy was generated
|
GeneratedAt time.Time `json:"generated_at"` // When hierarchy was generated
|
||||||
}
|
}
|
||||||
|
|
||||||
// GoalNode represents a node in the goal hierarchy
|
// GoalNode represents a node in the goal hierarchy
|
||||||
type GoalNode struct {
|
type GoalNode struct {
|
||||||
Goal *ProjectGoal `json:"goal"` // Goal information
|
Goal *ProjectGoal `json:"goal"` // Goal information
|
||||||
Children []*GoalNode `json:"children"` // Child goals
|
Children []*GoalNode `json:"children"` // Child goals
|
||||||
Depth int `json:"depth"` // Depth in hierarchy
|
Depth int `json:"depth"` // Depth in hierarchy
|
||||||
Path []string `json:"path"` // Path from root
|
Path []string `json:"path"` // Path from root
|
||||||
}
|
}
|
||||||
|
|
||||||
// GoalValidation represents validation results for a goal
|
// GoalValidation represents validation results for a goal
|
||||||
type GoalValidation struct {
|
type GoalValidation struct {
|
||||||
Valid bool `json:"valid"` // Whether goal is valid
|
Valid bool `json:"valid"` // Whether goal is valid
|
||||||
Issues []*ValidationIssue `json:"issues"` // Validation issues
|
Issues []*ValidationIssue `json:"issues"` // Validation issues
|
||||||
Warnings []*ValidationWarning `json:"warnings"` // Validation warnings
|
Warnings []*ValidationWarning `json:"warnings"` // Validation warnings
|
||||||
ValidatedAt time.Time `json:"validated_at"` // When validated
|
ValidatedAt time.Time `json:"validated_at"` // When validated
|
||||||
}
|
}
|
||||||
|
|
||||||
// ValidationIssue represents a validation issue
|
// ValidationIssue represents a validation issue
|
||||||
type ValidationIssue struct {
|
type ValidationIssue struct {
|
||||||
Field string `json:"field"` // Affected field
|
Field string `json:"field"` // Affected field
|
||||||
Code string `json:"code"` // Issue code
|
Code string `json:"code"` // Issue code
|
||||||
Message string `json:"message"` // Issue message
|
Message string `json:"message"` // Issue message
|
||||||
Severity string `json:"severity"` // Issue severity
|
Severity string `json:"severity"` // Issue severity
|
||||||
Suggestion string `json:"suggestion"` // Suggested fix
|
Suggestion string `json:"suggestion"` // Suggested fix
|
||||||
}
|
}
|
||||||
|
|
||||||
// ValidationWarning represents a validation warning
|
// ValidationWarning represents a validation warning
|
||||||
type ValidationWarning struct {
|
type ValidationWarning struct {
|
||||||
Field string `json:"field"` // Affected field
|
Field string `json:"field"` // Affected field
|
||||||
Code string `json:"code"` // Warning code
|
Code string `json:"code"` // Warning code
|
||||||
Message string `json:"message"` // Warning message
|
Message string `json:"message"` // Warning message
|
||||||
Suggestion string `json:"suggestion"` // Suggested improvement
|
Suggestion string `json:"suggestion"` // Suggested improvement
|
||||||
}
|
}
|
||||||
|
|
||||||
// GoalMilestone represents a milestone for goal tracking
|
// GoalMilestone represents a milestone for goal tracking
|
||||||
type GoalMilestone struct {
|
type GoalMilestone struct {
|
||||||
ID string `json:"id"` // Milestone ID
|
ID string `json:"id"` // Milestone ID
|
||||||
Name string `json:"name"` // Milestone name
|
Name string `json:"name"` // Milestone name
|
||||||
Description string `json:"description"` // Milestone description
|
Description string `json:"description"` // Milestone description
|
||||||
PlannedDate time.Time `json:"planned_date"` // Planned completion date
|
PlannedDate time.Time `json:"planned_date"` // Planned completion date
|
||||||
Weight float64 `json:"weight"` // Milestone weight
|
Weight float64 `json:"weight"` // Milestone weight
|
||||||
Criteria []string `json:"criteria"` // Completion criteria
|
Criteria []string `json:"criteria"` // Completion criteria
|
||||||
Dependencies []string `json:"dependencies"` // Milestone dependencies
|
Dependencies []string `json:"dependencies"` // Milestone dependencies
|
||||||
CreatedAt time.Time `json:"created_at"` // When created
|
CreatedAt time.Time `json:"created_at"` // When created
|
||||||
}
|
}
|
||||||
|
|
||||||
// MilestoneStatus represents status of a milestone (duplicate removed)
|
// MilestoneStatus represents status of a milestone (duplicate removed)
|
||||||
@@ -449,39 +448,39 @@ type GoalMilestone struct {
|
|||||||
|
|
||||||
// ProgressUpdate represents an update to goal progress
|
// ProgressUpdate represents an update to goal progress
|
||||||
type ProgressUpdate struct {
|
type ProgressUpdate struct {
|
||||||
UpdateType ProgressUpdateType `json:"update_type"` // Type of update
|
UpdateType ProgressUpdateType `json:"update_type"` // Type of update
|
||||||
CompletionDelta float64 `json:"completion_delta"` // Change in completion percentage
|
CompletionDelta float64 `json:"completion_delta"` // Change in completion percentage
|
||||||
CriteriaUpdates []*CriterionUpdate `json:"criteria_updates"` // Updates to criteria
|
CriteriaUpdates []*CriterionUpdate `json:"criteria_updates"` // Updates to criteria
|
||||||
MilestoneUpdates []*MilestoneUpdate `json:"milestone_updates"` // Updates to milestones
|
MilestoneUpdates []*MilestoneUpdate `json:"milestone_updates"` // Updates to milestones
|
||||||
Notes string `json:"notes"` // Update notes
|
Notes string `json:"notes"` // Update notes
|
||||||
UpdatedBy string `json:"updated_by"` // Who made the update
|
UpdatedBy string `json:"updated_by"` // Who made the update
|
||||||
Evidence []string `json:"evidence"` // Evidence for progress
|
Evidence []string `json:"evidence"` // Evidence for progress
|
||||||
RiskFactors []string `json:"risk_factors"` // New risk factors
|
RiskFactors []string `json:"risk_factors"` // New risk factors
|
||||||
Blockers []string `json:"blockers"` // New blockers
|
Blockers []string `json:"blockers"` // New blockers
|
||||||
}
|
}
|
||||||
|
|
||||||
// ProgressUpdateType represents types of progress updates
|
// ProgressUpdateType represents types of progress updates
|
||||||
type ProgressUpdateType string
|
type ProgressUpdateType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ProgressUpdateTypeIncrement ProgressUpdateType = "increment" // Incremental progress
|
ProgressUpdateTypeIncrement ProgressUpdateType = "increment" // Incremental progress
|
||||||
ProgressUpdateTypeAbsolute ProgressUpdateType = "absolute" // Absolute progress value
|
ProgressUpdateTypeAbsolute ProgressUpdateType = "absolute" // Absolute progress value
|
||||||
ProgressUpdateTypeMilestone ProgressUpdateType = "milestone" // Milestone completion
|
ProgressUpdateTypeMilestone ProgressUpdateType = "milestone" // Milestone completion
|
||||||
ProgressUpdateTypeCriterion ProgressUpdateType = "criterion" // Criterion achievement
|
ProgressUpdateTypeCriterion ProgressUpdateType = "criterion" // Criterion achievement
|
||||||
)
|
)
|
||||||
|
|
||||||
// CriterionUpdate represents an update to a success criterion
|
// CriterionUpdate represents an update to a success criterion
|
||||||
type CriterionUpdate struct {
|
type CriterionUpdate struct {
|
||||||
CriterionID string `json:"criterion_id"` // Criterion ID
|
CriterionID string `json:"criterion_id"` // Criterion ID
|
||||||
NewValue interface{} `json:"new_value"` // New current value
|
NewValue interface{} `json:"new_value"` // New current value
|
||||||
Achieved bool `json:"achieved"` // Whether now achieved
|
Achieved bool `json:"achieved"` // Whether now achieved
|
||||||
Notes string `json:"notes"` // Update notes
|
Notes string `json:"notes"` // Update notes
|
||||||
}
|
}
|
||||||
|
|
||||||
// MilestoneUpdate represents an update to a milestone
|
// MilestoneUpdate represents an update to a milestone
|
||||||
type MilestoneUpdate struct {
|
type MilestoneUpdate struct {
|
||||||
MilestoneID string `json:"milestone_id"` // Milestone ID
|
MilestoneID string `json:"milestone_id"` // Milestone ID
|
||||||
NewStatus MilestoneStatus `json:"new_status"` // New status
|
NewStatus MilestoneStatus `json:"new_status"` // New status
|
||||||
CompletedDate *time.Time `json:"completed_date,omitempty"` // Completion date if completed
|
CompletedDate *time.Time `json:"completed_date,omitempty"` // Completion date if completed
|
||||||
Notes string `json:"notes"` // Update notes
|
Notes string `json:"notes"` // Update notes
|
||||||
}
|
}
|
||||||
@@ -26,12 +26,25 @@ type ContextNode struct {
|
|||||||
Insights []string `json:"insights"` // Analytical insights
|
Insights []string `json:"insights"` // Analytical insights
|
||||||
|
|
||||||
// Hierarchy control
|
// Hierarchy control
|
||||||
OverridesParent bool `json:"overrides_parent"` // Whether this overrides parent context
|
OverridesParent bool `json:"overrides_parent"` // Whether this overrides parent context
|
||||||
ContextSpecificity int `json:"context_specificity"` // Specificity level (higher = more specific)
|
ContextSpecificity int `json:"context_specificity"` // Specificity level (higher = more specific)
|
||||||
AppliesToChildren bool `json:"applies_to_children"` // Whether this applies to child directories
|
AppliesToChildren bool `json:"applies_to_children"` // Whether this applies to child directories
|
||||||
|
AppliesTo ContextScope `json:"applies_to"` // Scope of application within hierarchy
|
||||||
|
Parent *string `json:"parent,omitempty"` // Parent context path
|
||||||
|
Children []string `json:"children,omitempty"` // Child context paths
|
||||||
|
|
||||||
// Metadata
|
// File metadata
|
||||||
|
FileType string `json:"file_type"` // File extension or type
|
||||||
|
Language *string `json:"language,omitempty"` // Programming language
|
||||||
|
Size *int64 `json:"size,omitempty"` // File size in bytes
|
||||||
|
LastModified *time.Time `json:"last_modified,omitempty"` // Last modification timestamp
|
||||||
|
ContentHash *string `json:"content_hash,omitempty"` // Content hash for change detection
|
||||||
|
|
||||||
|
// Temporal metadata
|
||||||
GeneratedAt time.Time `json:"generated_at"` // When context was generated
|
GeneratedAt time.Time `json:"generated_at"` // When context was generated
|
||||||
|
UpdatedAt time.Time `json:"updated_at"` // Last update timestamp
|
||||||
|
CreatedBy string `json:"created_by"` // Who created the context
|
||||||
|
WhoUpdated string `json:"who_updated"` // Who performed the last update
|
||||||
RAGConfidence float64 `json:"rag_confidence"` // RAG system confidence (0-1)
|
RAGConfidence float64 `json:"rag_confidence"` // RAG system confidence (0-1)
|
||||||
|
|
||||||
// Access control
|
// Access control
|
||||||
|
|||||||
@@ -261,11 +261,11 @@ func (ch *ConsistentHashingImpl) GetMetrics() *ConsistentHashMetrics {
|
|||||||
defer ch.mu.RUnlock()
|
defer ch.mu.RUnlock()
|
||||||
|
|
||||||
return &ConsistentHashMetrics{
|
return &ConsistentHashMetrics{
|
||||||
TotalKeys: 0, // Would be maintained by usage tracking
|
TotalKeys: 0, // Would be maintained by usage tracking
|
||||||
NodeUtilization: ch.GetNodeDistribution(),
|
NodeUtilization: ch.GetNodeDistribution(),
|
||||||
RebalanceEvents: 0, // Would be maintained by event tracking
|
RebalanceEvents: 0, // Would be maintained by event tracking
|
||||||
AverageSeekTime: 0.1, // Placeholder - would be measured
|
AverageSeekTime: 0.1, // Placeholder - would be measured
|
||||||
LoadBalanceScore: ch.calculateLoadBalance(),
|
LoadBalanceScore: ch.calculateLoadBalance(),
|
||||||
LastRebalanceTime: 0, // Would be maintained by event tracking
|
LastRebalanceTime: 0, // Would be maintained by event tracking
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -364,8 +364,8 @@ func (ch *ConsistentHashingImpl) FindClosestNodes(key string, count int) ([]stri
|
|||||||
if hash >= keyHash {
|
if hash >= keyHash {
|
||||||
distance = hash - keyHash
|
distance = hash - keyHash
|
||||||
} else {
|
} else {
|
||||||
// Wrap around distance
|
// Wrap around distance without overflowing 32-bit space
|
||||||
distance = (1<<32 - keyHash) + hash
|
distance = uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(hash))
|
||||||
}
|
}
|
||||||
|
|
||||||
distances = append(distances, struct {
|
distances = append(distances, struct {
|
||||||
|
|||||||
@@ -7,38 +7,38 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/dht"
|
|
||||||
"chorus/pkg/crypto"
|
|
||||||
"chorus/pkg/election"
|
|
||||||
"chorus/pkg/config"
|
"chorus/pkg/config"
|
||||||
"chorus/pkg/ucxl"
|
"chorus/pkg/crypto"
|
||||||
|
"chorus/pkg/dht"
|
||||||
|
"chorus/pkg/election"
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// DistributionCoordinator orchestrates distributed context operations across the cluster
|
// DistributionCoordinator orchestrates distributed context operations across the cluster
|
||||||
type DistributionCoordinator struct {
|
type DistributionCoordinator struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
config *config.Config
|
config *config.Config
|
||||||
dht *dht.DHT
|
dht dht.DHT
|
||||||
roleCrypto *crypto.RoleCrypto
|
roleCrypto *crypto.RoleCrypto
|
||||||
election election.Election
|
election election.Election
|
||||||
distributor ContextDistributor
|
distributor ContextDistributor
|
||||||
replicationMgr ReplicationManager
|
replicationMgr ReplicationManager
|
||||||
conflictResolver ConflictResolver
|
conflictResolver ConflictResolver
|
||||||
gossipProtocol GossipProtocol
|
gossipProtocol GossipProtocol
|
||||||
networkMgr NetworkManager
|
networkMgr NetworkManager
|
||||||
|
|
||||||
// Coordination state
|
// Coordination state
|
||||||
isLeader bool
|
isLeader bool
|
||||||
leaderID string
|
leaderID string
|
||||||
coordinationTasks chan *CoordinationTask
|
coordinationTasks chan *CoordinationTask
|
||||||
distributionQueue chan *DistributionRequest
|
distributionQueue chan *DistributionRequest
|
||||||
roleFilters map[string]*RoleFilter
|
roleFilters map[string]*RoleFilter
|
||||||
healthMonitors map[string]*HealthMonitor
|
healthMonitors map[string]*HealthMonitor
|
||||||
|
|
||||||
// Statistics and metrics
|
// Statistics and metrics
|
||||||
stats *CoordinationStatistics
|
stats *CoordinationStatistics
|
||||||
performanceMetrics *PerformanceMetrics
|
performanceMetrics *PerformanceMetrics
|
||||||
|
|
||||||
// Configuration
|
// Configuration
|
||||||
maxConcurrentTasks int
|
maxConcurrentTasks int
|
||||||
@@ -49,14 +49,14 @@ type DistributionCoordinator struct {
|
|||||||
|
|
||||||
// CoordinationTask represents a task for the coordinator
|
// CoordinationTask represents a task for the coordinator
|
||||||
type CoordinationTask struct {
|
type CoordinationTask struct {
|
||||||
TaskID string `json:"task_id"`
|
TaskID string `json:"task_id"`
|
||||||
TaskType CoordinationTaskType `json:"task_type"`
|
TaskType CoordinationTaskType `json:"task_type"`
|
||||||
Priority Priority `json:"priority"`
|
Priority Priority `json:"priority"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
RequestedBy string `json:"requested_by"`
|
RequestedBy string `json:"requested_by"`
|
||||||
Payload interface{} `json:"payload"`
|
Payload interface{} `json:"payload"`
|
||||||
Context context.Context `json:"-"`
|
Context context.Context `json:"-"`
|
||||||
Callback func(error) `json:"-"`
|
Callback func(error) `json:"-"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// CoordinationTaskType represents different types of coordination tasks
|
// CoordinationTaskType represents different types of coordination tasks
|
||||||
@@ -74,55 +74,55 @@ const (
|
|||||||
|
|
||||||
// DistributionRequest represents a request for context distribution
|
// DistributionRequest represents a request for context distribution
|
||||||
type DistributionRequest struct {
|
type DistributionRequest struct {
|
||||||
RequestID string `json:"request_id"`
|
RequestID string `json:"request_id"`
|
||||||
ContextNode *slurpContext.ContextNode `json:"context_node"`
|
ContextNode *slurpContext.ContextNode `json:"context_node"`
|
||||||
TargetRoles []string `json:"target_roles"`
|
TargetRoles []string `json:"target_roles"`
|
||||||
Priority Priority `json:"priority"`
|
Priority Priority `json:"priority"`
|
||||||
RequesterID string `json:"requester_id"`
|
RequesterID string `json:"requester_id"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
Options *DistributionOptions `json:"options"`
|
Options *DistributionOptions `json:"options"`
|
||||||
Callback func(*DistributionResult, error) `json:"-"`
|
Callback func(*DistributionResult, error) `json:"-"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// DistributionOptions contains options for context distribution
|
// DistributionOptions contains options for context distribution
|
||||||
type DistributionOptions struct {
|
type DistributionOptions struct {
|
||||||
ReplicationFactor int `json:"replication_factor"`
|
ReplicationFactor int `json:"replication_factor"`
|
||||||
ConsistencyLevel ConsistencyLevel `json:"consistency_level"`
|
ConsistencyLevel ConsistencyLevel `json:"consistency_level"`
|
||||||
EncryptionLevel crypto.AccessLevel `json:"encryption_level"`
|
EncryptionLevel crypto.AccessLevel `json:"encryption_level"`
|
||||||
TTL *time.Duration `json:"ttl,omitempty"`
|
TTL *time.Duration `json:"ttl,omitempty"`
|
||||||
PreferredZones []string `json:"preferred_zones"`
|
PreferredZones []string `json:"preferred_zones"`
|
||||||
ExcludedNodes []string `json:"excluded_nodes"`
|
ExcludedNodes []string `json:"excluded_nodes"`
|
||||||
ConflictResolution ResolutionType `json:"conflict_resolution"`
|
ConflictResolution ResolutionType `json:"conflict_resolution"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// DistributionResult represents the result of a distribution operation
|
// DistributionResult represents the result of a distribution operation
|
||||||
type DistributionResult struct {
|
type DistributionResult struct {
|
||||||
RequestID string `json:"request_id"`
|
RequestID string `json:"request_id"`
|
||||||
Success bool `json:"success"`
|
Success bool `json:"success"`
|
||||||
DistributedNodes []string `json:"distributed_nodes"`
|
DistributedNodes []string `json:"distributed_nodes"`
|
||||||
ReplicationFactor int `json:"replication_factor"`
|
ReplicationFactor int `json:"replication_factor"`
|
||||||
ProcessingTime time.Duration `json:"processing_time"`
|
ProcessingTime time.Duration `json:"processing_time"`
|
||||||
Errors []string `json:"errors"`
|
Errors []string `json:"errors"`
|
||||||
ConflictResolved *ConflictResolution `json:"conflict_resolved,omitempty"`
|
ConflictResolved *ConflictResolution `json:"conflict_resolved,omitempty"`
|
||||||
CompletedAt time.Time `json:"completed_at"`
|
CompletedAt time.Time `json:"completed_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleFilter manages role-based filtering for context access
|
// RoleFilter manages role-based filtering for context access
|
||||||
type RoleFilter struct {
|
type RoleFilter struct {
|
||||||
RoleID string `json:"role_id"`
|
RoleID string `json:"role_id"`
|
||||||
AccessLevel crypto.AccessLevel `json:"access_level"`
|
AccessLevel crypto.AccessLevel `json:"access_level"`
|
||||||
AllowedCompartments []string `json:"allowed_compartments"`
|
AllowedCompartments []string `json:"allowed_compartments"`
|
||||||
FilterRules []*FilterRule `json:"filter_rules"`
|
FilterRules []*FilterRule `json:"filter_rules"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// FilterRule represents a single filtering rule
|
// FilterRule represents a single filtering rule
|
||||||
type FilterRule struct {
|
type FilterRule struct {
|
||||||
RuleID string `json:"rule_id"`
|
RuleID string `json:"rule_id"`
|
||||||
RuleType FilterRuleType `json:"rule_type"`
|
RuleType FilterRuleType `json:"rule_type"`
|
||||||
Pattern string `json:"pattern"`
|
Pattern string `json:"pattern"`
|
||||||
Action FilterAction `json:"action"`
|
Action FilterAction `json:"action"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// FilterRuleType represents different types of filter rules
|
// FilterRuleType represents different types of filter rules
|
||||||
@@ -139,10 +139,10 @@ const (
|
|||||||
type FilterAction string
|
type FilterAction string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
FilterActionAllow FilterAction = "allow"
|
FilterActionAllow FilterAction = "allow"
|
||||||
FilterActionDeny FilterAction = "deny"
|
FilterActionDeny FilterAction = "deny"
|
||||||
FilterActionModify FilterAction = "modify"
|
FilterActionModify FilterAction = "modify"
|
||||||
FilterActionAudit FilterAction = "audit"
|
FilterActionAudit FilterAction = "audit"
|
||||||
)
|
)
|
||||||
|
|
||||||
// HealthMonitor monitors the health of a specific component
|
// HealthMonitor monitors the health of a specific component
|
||||||
@@ -160,10 +160,10 @@ type HealthMonitor struct {
|
|||||||
type ComponentType string
|
type ComponentType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ComponentTypeDHT ComponentType = "dht"
|
ComponentTypeDHT ComponentType = "dht"
|
||||||
ComponentTypeReplication ComponentType = "replication"
|
ComponentTypeReplication ComponentType = "replication"
|
||||||
ComponentTypeGossip ComponentType = "gossip"
|
ComponentTypeGossip ComponentType = "gossip"
|
||||||
ComponentTypeNetwork ComponentType = "network"
|
ComponentTypeNetwork ComponentType = "network"
|
||||||
ComponentTypeConflictResolver ComponentType = "conflict_resolver"
|
ComponentTypeConflictResolver ComponentType = "conflict_resolver"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -190,13 +190,13 @@ type CoordinationStatistics struct {
|
|||||||
|
|
||||||
// PerformanceMetrics tracks detailed performance metrics
|
// PerformanceMetrics tracks detailed performance metrics
|
||||||
type PerformanceMetrics struct {
|
type PerformanceMetrics struct {
|
||||||
ThroughputPerSecond float64 `json:"throughput_per_second"`
|
ThroughputPerSecond float64 `json:"throughput_per_second"`
|
||||||
LatencyPercentiles map[string]float64 `json:"latency_percentiles"`
|
LatencyPercentiles map[string]float64 `json:"latency_percentiles"`
|
||||||
ErrorRateByType map[string]float64 `json:"error_rate_by_type"`
|
ErrorRateByType map[string]float64 `json:"error_rate_by_type"`
|
||||||
ResourceUtilization map[string]float64 `json:"resource_utilization"`
|
ResourceUtilization map[string]float64 `json:"resource_utilization"`
|
||||||
NetworkMetrics *NetworkMetrics `json:"network_metrics"`
|
NetworkMetrics *NetworkMetrics `json:"network_metrics"`
|
||||||
StorageMetrics *StorageMetrics `json:"storage_metrics"`
|
StorageMetrics *StorageMetrics `json:"storage_metrics"`
|
||||||
LastCalculated time.Time `json:"last_calculated"`
|
LastCalculated time.Time `json:"last_calculated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NetworkMetrics tracks network-related performance
|
// NetworkMetrics tracks network-related performance
|
||||||
@@ -210,24 +210,24 @@ type NetworkMetrics struct {
|
|||||||
|
|
||||||
// StorageMetrics tracks storage-related performance
|
// StorageMetrics tracks storage-related performance
|
||||||
type StorageMetrics struct {
|
type StorageMetrics struct {
|
||||||
TotalContexts int64 `json:"total_contexts"`
|
TotalContexts int64 `json:"total_contexts"`
|
||||||
StorageUtilization float64 `json:"storage_utilization"`
|
StorageUtilization float64 `json:"storage_utilization"`
|
||||||
CompressionRatio float64 `json:"compression_ratio"`
|
CompressionRatio float64 `json:"compression_ratio"`
|
||||||
ReplicationEfficiency float64 `json:"replication_efficiency"`
|
ReplicationEfficiency float64 `json:"replication_efficiency"`
|
||||||
CacheHitRate float64 `json:"cache_hit_rate"`
|
CacheHitRate float64 `json:"cache_hit_rate"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewDistributionCoordinator creates a new distribution coordinator
|
// NewDistributionCoordinator creates a new distribution coordinator
|
||||||
func NewDistributionCoordinator(
|
func NewDistributionCoordinator(
|
||||||
config *config.Config,
|
config *config.Config,
|
||||||
dht *dht.DHT,
|
dhtInstance dht.DHT,
|
||||||
roleCrypto *crypto.RoleCrypto,
|
roleCrypto *crypto.RoleCrypto,
|
||||||
election election.Election,
|
election election.Election,
|
||||||
) (*DistributionCoordinator, error) {
|
) (*DistributionCoordinator, error) {
|
||||||
if config == nil {
|
if config == nil {
|
||||||
return nil, fmt.Errorf("config is required")
|
return nil, fmt.Errorf("config is required")
|
||||||
}
|
}
|
||||||
if dht == nil {
|
if dhtInstance == nil {
|
||||||
return nil, fmt.Errorf("DHT instance is required")
|
return nil, fmt.Errorf("DHT instance is required")
|
||||||
}
|
}
|
||||||
if roleCrypto == nil {
|
if roleCrypto == nil {
|
||||||
@@ -238,14 +238,14 @@ func NewDistributionCoordinator(
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Create distributor
|
// Create distributor
|
||||||
distributor, err := NewDHTContextDistributor(dht, roleCrypto, election, config)
|
distributor, err := NewDHTContextDistributor(dhtInstance, roleCrypto, election, config)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("failed to create context distributor: %w", err)
|
return nil, fmt.Errorf("failed to create context distributor: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
coord := &DistributionCoordinator{
|
coord := &DistributionCoordinator{
|
||||||
config: config,
|
config: config,
|
||||||
dht: dht,
|
dht: dhtInstance,
|
||||||
roleCrypto: roleCrypto,
|
roleCrypto: roleCrypto,
|
||||||
election: election,
|
election: election,
|
||||||
distributor: distributor,
|
distributor: distributor,
|
||||||
@@ -264,9 +264,9 @@ func NewDistributionCoordinator(
|
|||||||
LatencyPercentiles: make(map[string]float64),
|
LatencyPercentiles: make(map[string]float64),
|
||||||
ErrorRateByType: make(map[string]float64),
|
ErrorRateByType: make(map[string]float64),
|
||||||
ResourceUtilization: make(map[string]float64),
|
ResourceUtilization: make(map[string]float64),
|
||||||
NetworkMetrics: &NetworkMetrics{},
|
NetworkMetrics: &NetworkMetrics{},
|
||||||
StorageMetrics: &StorageMetrics{},
|
StorageMetrics: &StorageMetrics{},
|
||||||
LastCalculated: time.Now(),
|
LastCalculated: time.Now(),
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -356,7 +356,7 @@ func (dc *DistributionCoordinator) CoordinateReplication(
|
|||||||
CreatedAt: time.Now(),
|
CreatedAt: time.Now(),
|
||||||
RequestedBy: dc.config.Agent.ID,
|
RequestedBy: dc.config.Agent.ID,
|
||||||
Payload: map[string]interface{}{
|
Payload: map[string]interface{}{
|
||||||
"address": address,
|
"address": address,
|
||||||
"target_factor": targetFactor,
|
"target_factor": targetFactor,
|
||||||
},
|
},
|
||||||
Context: ctx,
|
Context: ctx,
|
||||||
@@ -398,14 +398,14 @@ func (dc *DistributionCoordinator) GetClusterHealth() (*ClusterHealth, error) {
|
|||||||
defer dc.mu.RUnlock()
|
defer dc.mu.RUnlock()
|
||||||
|
|
||||||
health := &ClusterHealth{
|
health := &ClusterHealth{
|
||||||
OverallStatus: dc.calculateOverallHealth(),
|
OverallStatus: dc.calculateOverallHealth(),
|
||||||
NodeCount: len(dc.dht.GetConnectedPeers()) + 1, // +1 for current node
|
NodeCount: len(dc.healthMonitors) + 1, // Placeholder count including current node
|
||||||
HealthyNodes: 0,
|
HealthyNodes: 0,
|
||||||
UnhealthyNodes: 0,
|
UnhealthyNodes: 0,
|
||||||
ComponentHealth: make(map[string]*ComponentHealth),
|
ComponentHealth: make(map[string]*ComponentHealth),
|
||||||
LastUpdated: time.Now(),
|
LastUpdated: time.Now(),
|
||||||
Alerts: []string{},
|
Alerts: []string{},
|
||||||
Recommendations: []string{},
|
Recommendations: []string{},
|
||||||
}
|
}
|
||||||
|
|
||||||
// Calculate component health
|
// Calculate component health
|
||||||
@@ -598,8 +598,8 @@ func (dc *DistributionCoordinator) initializeHealthMonitors() {
|
|||||||
components := map[string]ComponentType{
|
components := map[string]ComponentType{
|
||||||
"dht": ComponentTypeDHT,
|
"dht": ComponentTypeDHT,
|
||||||
"replication": ComponentTypeReplication,
|
"replication": ComponentTypeReplication,
|
||||||
"gossip": ComponentTypeGossip,
|
"gossip": ComponentTypeGossip,
|
||||||
"network": ComponentTypeNetwork,
|
"network": ComponentTypeNetwork,
|
||||||
"conflict_resolver": ComponentTypeConflictResolver,
|
"conflict_resolver": ComponentTypeConflictResolver,
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -682,8 +682,8 @@ func (dc *DistributionCoordinator) executeDistribution(ctx context.Context, requ
|
|||||||
Success: false,
|
Success: false,
|
||||||
DistributedNodes: []string{},
|
DistributedNodes: []string{},
|
||||||
ProcessingTime: 0,
|
ProcessingTime: 0,
|
||||||
Errors: []string{},
|
Errors: []string{},
|
||||||
CompletedAt: time.Now(),
|
CompletedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Execute distribution via distributor
|
// Execute distribution via distributor
|
||||||
@@ -703,14 +703,14 @@ func (dc *DistributionCoordinator) executeDistribution(ctx context.Context, requ
|
|||||||
|
|
||||||
// ClusterHealth represents overall cluster health
|
// ClusterHealth represents overall cluster health
|
||||||
type ClusterHealth struct {
|
type ClusterHealth struct {
|
||||||
OverallStatus HealthStatus `json:"overall_status"`
|
OverallStatus HealthStatus `json:"overall_status"`
|
||||||
NodeCount int `json:"node_count"`
|
NodeCount int `json:"node_count"`
|
||||||
HealthyNodes int `json:"healthy_nodes"`
|
HealthyNodes int `json:"healthy_nodes"`
|
||||||
UnhealthyNodes int `json:"unhealthy_nodes"`
|
UnhealthyNodes int `json:"unhealthy_nodes"`
|
||||||
ComponentHealth map[string]*ComponentHealth `json:"component_health"`
|
ComponentHealth map[string]*ComponentHealth `json:"component_health"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
Alerts []string `json:"alerts"`
|
Alerts []string `json:"alerts"`
|
||||||
Recommendations []string `json:"recommendations"`
|
Recommendations []string `json:"recommendations"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ComponentHealth represents individual component health
|
// ComponentHealth represents individual component health
|
||||||
@@ -736,14 +736,14 @@ func (dc *DistributionCoordinator) getDefaultDistributionOptions() *Distribution
|
|||||||
return &DistributionOptions{
|
return &DistributionOptions{
|
||||||
ReplicationFactor: 3,
|
ReplicationFactor: 3,
|
||||||
ConsistencyLevel: ConsistencyEventual,
|
ConsistencyLevel: ConsistencyEventual,
|
||||||
EncryptionLevel: crypto.AccessMedium,
|
EncryptionLevel: crypto.AccessLevel(slurpContext.AccessMedium),
|
||||||
ConflictResolution: ResolutionMerged,
|
ConflictResolution: ResolutionMerged,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func (dc *DistributionCoordinator) getAccessLevelForRole(role string) crypto.AccessLevel {
|
func (dc *DistributionCoordinator) getAccessLevelForRole(role string) crypto.AccessLevel {
|
||||||
// Placeholder implementation
|
// Placeholder implementation
|
||||||
return crypto.AccessMedium
|
return crypto.AccessLevel(slurpContext.AccessMedium)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (dc *DistributionCoordinator) getAllowedCompartments(role string) []string {
|
func (dc *DistributionCoordinator) getAllowedCompartments(role string) []string {
|
||||||
@@ -796,11 +796,11 @@ func (dc *DistributionCoordinator) updatePerformanceMetrics() {
|
|||||||
|
|
||||||
func (dc *DistributionCoordinator) priorityFromSeverity(severity ConflictSeverity) Priority {
|
func (dc *DistributionCoordinator) priorityFromSeverity(severity ConflictSeverity) Priority {
|
||||||
switch severity {
|
switch severity {
|
||||||
case SeverityCritical:
|
case ConflictSeverityCritical:
|
||||||
return PriorityCritical
|
return PriorityCritical
|
||||||
case SeverityHigh:
|
case ConflictSeverityHigh:
|
||||||
return PriorityHigh
|
return PriorityHigh
|
||||||
case SeverityMedium:
|
case ConflictSeverityMedium:
|
||||||
return PriorityNormal
|
return PriorityNormal
|
||||||
default:
|
default:
|
||||||
return PriorityLow
|
return PriorityLow
|
||||||
|
|||||||
@@ -9,12 +9,12 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/dht"
|
|
||||||
"chorus/pkg/crypto"
|
|
||||||
"chorus/pkg/election"
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
"chorus/pkg/config"
|
"chorus/pkg/config"
|
||||||
|
"chorus/pkg/crypto"
|
||||||
|
"chorus/pkg/dht"
|
||||||
|
"chorus/pkg/election"
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// ContextDistributor handles distributed context operations via DHT
|
// ContextDistributor handles distributed context operations via DHT
|
||||||
@@ -61,6 +61,12 @@ type ContextDistributor interface {
|
|||||||
|
|
||||||
// SetReplicationPolicy configures replication behavior
|
// SetReplicationPolicy configures replication behavior
|
||||||
SetReplicationPolicy(policy *ReplicationPolicy) error
|
SetReplicationPolicy(policy *ReplicationPolicy) error
|
||||||
|
|
||||||
|
// Start initializes background distribution routines
|
||||||
|
Start(ctx context.Context) error
|
||||||
|
|
||||||
|
// Stop releases distribution resources
|
||||||
|
Stop(ctx context.Context) error
|
||||||
}
|
}
|
||||||
|
|
||||||
// DHTStorage provides direct DHT storage operations for context data
|
// DHTStorage provides direct DHT storage operations for context data
|
||||||
@@ -175,59 +181,59 @@ type NetworkManager interface {
|
|||||||
|
|
||||||
// DistributionCriteria represents criteria for listing distributed contexts
|
// DistributionCriteria represents criteria for listing distributed contexts
|
||||||
type DistributionCriteria struct {
|
type DistributionCriteria struct {
|
||||||
Tags []string `json:"tags"` // Required tags
|
Tags []string `json:"tags"` // Required tags
|
||||||
Technologies []string `json:"technologies"` // Required technologies
|
Technologies []string `json:"technologies"` // Required technologies
|
||||||
MinReplicas int `json:"min_replicas"` // Minimum replica count
|
MinReplicas int `json:"min_replicas"` // Minimum replica count
|
||||||
MaxAge *time.Duration `json:"max_age"` // Maximum age
|
MaxAge *time.Duration `json:"max_age"` // Maximum age
|
||||||
HealthyOnly bool `json:"healthy_only"` // Only healthy replicas
|
HealthyOnly bool `json:"healthy_only"` // Only healthy replicas
|
||||||
Limit int `json:"limit"` // Maximum results
|
Limit int `json:"limit"` // Maximum results
|
||||||
Offset int `json:"offset"` // Result offset
|
Offset int `json:"offset"` // Result offset
|
||||||
}
|
}
|
||||||
|
|
||||||
// DistributedContextInfo represents information about distributed context
|
// DistributedContextInfo represents information about distributed context
|
||||||
type DistributedContextInfo struct {
|
type DistributedContextInfo struct {
|
||||||
Address ucxl.Address `json:"address"` // Context address
|
Address ucxl.Address `json:"address"` // Context address
|
||||||
Roles []string `json:"roles"` // Accessible roles
|
Roles []string `json:"roles"` // Accessible roles
|
||||||
ReplicaCount int `json:"replica_count"` // Number of replicas
|
ReplicaCount int `json:"replica_count"` // Number of replicas
|
||||||
HealthyReplicas int `json:"healthy_replicas"` // Healthy replica count
|
HealthyReplicas int `json:"healthy_replicas"` // Healthy replica count
|
||||||
LastUpdated time.Time `json:"last_updated"` // Last update time
|
LastUpdated time.Time `json:"last_updated"` // Last update time
|
||||||
Version int64 `json:"version"` // Version number
|
Version int64 `json:"version"` // Version number
|
||||||
Size int64 `json:"size"` // Data size
|
Size int64 `json:"size"` // Data size
|
||||||
Checksum string `json:"checksum"` // Data checksum
|
Checksum string `json:"checksum"` // Data checksum
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConflictResolution represents the result of conflict resolution
|
// ConflictResolution represents the result of conflict resolution
|
||||||
type ConflictResolution struct {
|
type ConflictResolution struct {
|
||||||
Address ucxl.Address `json:"address"` // Context address
|
Address ucxl.Address `json:"address"` // Context address
|
||||||
ResolutionType ResolutionType `json:"resolution_type"` // How conflict was resolved
|
ResolutionType ResolutionType `json:"resolution_type"` // How conflict was resolved
|
||||||
MergedContext *slurpContext.ContextNode `json:"merged_context"` // Resulting merged context
|
MergedContext *slurpContext.ContextNode `json:"merged_context"` // Resulting merged context
|
||||||
ConflictingSources []string `json:"conflicting_sources"` // Sources of conflict
|
ConflictingSources []string `json:"conflicting_sources"` // Sources of conflict
|
||||||
ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
|
ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
|
||||||
ResolvedAt time.Time `json:"resolved_at"` // When resolved
|
ResolvedAt time.Time `json:"resolved_at"` // When resolved
|
||||||
Confidence float64 `json:"confidence"` // Confidence in resolution
|
Confidence float64 `json:"confidence"` // Confidence in resolution
|
||||||
ManualReview bool `json:"manual_review"` // Whether manual review needed
|
ManualReview bool `json:"manual_review"` // Whether manual review needed
|
||||||
}
|
}
|
||||||
|
|
||||||
// ResolutionType represents different types of conflict resolution
|
// ResolutionType represents different types of conflict resolution
|
||||||
type ResolutionType string
|
type ResolutionType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ResolutionMerged ResolutionType = "merged" // Contexts were merged
|
ResolutionMerged ResolutionType = "merged" // Contexts were merged
|
||||||
ResolutionLastWriter ResolutionType = "last_writer" // Last writer wins
|
ResolutionLastWriter ResolutionType = "last_writer" // Last writer wins
|
||||||
ResolutionLeaderDecision ResolutionType = "leader_decision" // Leader made decision
|
ResolutionLeaderDecision ResolutionType = "leader_decision" // Leader made decision
|
||||||
ResolutionManual ResolutionType = "manual" // Manual resolution required
|
ResolutionManual ResolutionType = "manual" // Manual resolution required
|
||||||
ResolutionFailed ResolutionType = "failed" // Resolution failed
|
ResolutionFailed ResolutionType = "failed" // Resolution failed
|
||||||
)
|
)
|
||||||
|
|
||||||
// PotentialConflict represents a detected potential conflict
|
// PotentialConflict represents a detected potential conflict
|
||||||
type PotentialConflict struct {
|
type PotentialConflict struct {
|
||||||
Address ucxl.Address `json:"address"` // Context address
|
Address ucxl.Address `json:"address"` // Context address
|
||||||
ConflictType ConflictType `json:"conflict_type"` // Type of conflict
|
ConflictType ConflictType `json:"conflict_type"` // Type of conflict
|
||||||
Description string `json:"description"` // Conflict description
|
Description string `json:"description"` // Conflict description
|
||||||
Severity ConflictSeverity `json:"severity"` // Conflict severity
|
Severity ConflictSeverity `json:"severity"` // Conflict severity
|
||||||
AffectedFields []string `json:"affected_fields"` // Fields in conflict
|
AffectedFields []string `json:"affected_fields"` // Fields in conflict
|
||||||
Suggestions []string `json:"suggestions"` // Resolution suggestions
|
Suggestions []string `json:"suggestions"` // Resolution suggestions
|
||||||
DetectedAt time.Time `json:"detected_at"` // When detected
|
DetectedAt time.Time `json:"detected_at"` // When detected
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConflictType represents different types of conflicts
|
// ConflictType represents different types of conflicts
|
||||||
@@ -245,88 +251,88 @@ const (
|
|||||||
type ConflictSeverity string
|
type ConflictSeverity string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
SeverityLow ConflictSeverity = "low" // Low severity - auto-resolvable
|
ConflictSeverityLow ConflictSeverity = "low" // Low severity - auto-resolvable
|
||||||
SeverityMedium ConflictSeverity = "medium" // Medium severity - may need review
|
ConflictSeverityMedium ConflictSeverity = "medium" // Medium severity - may need review
|
||||||
SeverityHigh ConflictSeverity = "high" // High severity - needs attention
|
ConflictSeverityHigh ConflictSeverity = "high" // High severity - needs attention
|
||||||
SeverityCritical ConflictSeverity = "critical" // Critical - manual intervention required
|
ConflictSeverityCritical ConflictSeverity = "critical" // Critical - manual intervention required
|
||||||
)
|
)
|
||||||
|
|
||||||
// ResolutionStrategy represents conflict resolution strategy configuration
|
// ResolutionStrategy represents conflict resolution strategy configuration
|
||||||
type ResolutionStrategy struct {
|
type ResolutionStrategy struct {
|
||||||
DefaultResolution ResolutionType `json:"default_resolution"` // Default resolution method
|
DefaultResolution ResolutionType `json:"default_resolution"` // Default resolution method
|
||||||
FieldPriorities map[string]int `json:"field_priorities"` // Field priority mapping
|
FieldPriorities map[string]int `json:"field_priorities"` // Field priority mapping
|
||||||
AutoMergeEnabled bool `json:"auto_merge_enabled"` // Enable automatic merging
|
AutoMergeEnabled bool `json:"auto_merge_enabled"` // Enable automatic merging
|
||||||
RequireConsensus bool `json:"require_consensus"` // Require node consensus
|
RequireConsensus bool `json:"require_consensus"` // Require node consensus
|
||||||
LeaderBreaksTies bool `json:"leader_breaks_ties"` // Leader resolves ties
|
LeaderBreaksTies bool `json:"leader_breaks_ties"` // Leader resolves ties
|
||||||
MaxConflictAge time.Duration `json:"max_conflict_age"` // Max age before escalation
|
MaxConflictAge time.Duration `json:"max_conflict_age"` // Max age before escalation
|
||||||
EscalationRoles []string `json:"escalation_roles"` // Roles for manual escalation
|
EscalationRoles []string `json:"escalation_roles"` // Roles for manual escalation
|
||||||
}
|
}
|
||||||
|
|
||||||
// SyncResult represents the result of synchronization operation
|
// SyncResult represents the result of synchronization operation
|
||||||
type SyncResult struct {
|
type SyncResult struct {
|
||||||
SyncedContexts int `json:"synced_contexts"` // Contexts synchronized
|
SyncedContexts int `json:"synced_contexts"` // Contexts synchronized
|
||||||
ConflictsResolved int `json:"conflicts_resolved"` // Conflicts resolved
|
ConflictsResolved int `json:"conflicts_resolved"` // Conflicts resolved
|
||||||
Errors []string `json:"errors"` // Synchronization errors
|
Errors []string `json:"errors"` // Synchronization errors
|
||||||
SyncTime time.Duration `json:"sync_time"` // Total sync time
|
SyncTime time.Duration `json:"sync_time"` // Total sync time
|
||||||
PeersContacted int `json:"peers_contacted"` // Number of peers contacted
|
PeersContacted int `json:"peers_contacted"` // Number of peers contacted
|
||||||
DataTransferred int64 `json:"data_transferred"` // Bytes transferred
|
DataTransferred int64 `json:"data_transferred"` // Bytes transferred
|
||||||
SyncedAt time.Time `json:"synced_at"` // When sync completed
|
SyncedAt time.Time `json:"synced_at"` // When sync completed
|
||||||
}
|
}
|
||||||
|
|
||||||
// ReplicaHealth represents health status of context replicas
|
// ReplicaHealth represents health status of context replicas
|
||||||
type ReplicaHealth struct {
|
type ReplicaHealth struct {
|
||||||
Address ucxl.Address `json:"address"` // Context address
|
Address ucxl.Address `json:"address"` // Context address
|
||||||
TotalReplicas int `json:"total_replicas"` // Total replica count
|
TotalReplicas int `json:"total_replicas"` // Total replica count
|
||||||
HealthyReplicas int `json:"healthy_replicas"` // Healthy replica count
|
HealthyReplicas int `json:"healthy_replicas"` // Healthy replica count
|
||||||
FailedReplicas int `json:"failed_replicas"` // Failed replica count
|
FailedReplicas int `json:"failed_replicas"` // Failed replica count
|
||||||
ReplicaNodes []*ReplicaNode `json:"replica_nodes"` // Individual replica status
|
ReplicaNodes []*ReplicaNode `json:"replica_nodes"` // Individual replica status
|
||||||
OverallHealth HealthStatus `json:"overall_health"` // Overall health status
|
OverallHealth HealthStatus `json:"overall_health"` // Overall health status
|
||||||
LastChecked time.Time `json:"last_checked"` // When last checked
|
LastChecked time.Time `json:"last_checked"` // When last checked
|
||||||
RepairNeeded bool `json:"repair_needed"` // Whether repair is needed
|
RepairNeeded bool `json:"repair_needed"` // Whether repair is needed
|
||||||
}
|
}
|
||||||
|
|
||||||
// ReplicaNode represents status of individual replica node
|
// ReplicaNode represents status of individual replica node
|
||||||
type ReplicaNode struct {
|
type ReplicaNode struct {
|
||||||
NodeID string `json:"node_id"` // Node identifier
|
NodeID string `json:"node_id"` // Node identifier
|
||||||
Status ReplicaStatus `json:"status"` // Replica status
|
Status ReplicaStatus `json:"status"` // Replica status
|
||||||
LastSeen time.Time `json:"last_seen"` // When last seen
|
LastSeen time.Time `json:"last_seen"` // When last seen
|
||||||
Version int64 `json:"version"` // Context version
|
Version int64 `json:"version"` // Context version
|
||||||
Checksum string `json:"checksum"` // Data checksum
|
Checksum string `json:"checksum"` // Data checksum
|
||||||
Latency time.Duration `json:"latency"` // Network latency
|
Latency time.Duration `json:"latency"` // Network latency
|
||||||
NetworkAddress string `json:"network_address"` // Network address
|
NetworkAddress string `json:"network_address"` // Network address
|
||||||
}
|
}
|
||||||
|
|
||||||
// ReplicaStatus represents status of individual replica
|
// ReplicaStatus represents status of individual replica
|
||||||
type ReplicaStatus string
|
type ReplicaStatus string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ReplicaHealthy ReplicaStatus = "healthy" // Replica is healthy
|
ReplicaHealthy ReplicaStatus = "healthy" // Replica is healthy
|
||||||
ReplicaStale ReplicaStatus = "stale" // Replica is stale
|
ReplicaStale ReplicaStatus = "stale" // Replica is stale
|
||||||
ReplicaCorrupted ReplicaStatus = "corrupted" // Replica is corrupted
|
ReplicaCorrupted ReplicaStatus = "corrupted" // Replica is corrupted
|
||||||
ReplicaUnreachable ReplicaStatus = "unreachable" // Replica is unreachable
|
ReplicaUnreachable ReplicaStatus = "unreachable" // Replica is unreachable
|
||||||
ReplicaSyncing ReplicaStatus = "syncing" // Replica is syncing
|
ReplicaSyncing ReplicaStatus = "syncing" // Replica is syncing
|
||||||
)
|
)
|
||||||
|
|
||||||
// HealthStatus represents overall health status
|
// HealthStatus represents overall health status
|
||||||
type HealthStatus string
|
type HealthStatus string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
HealthHealthy HealthStatus = "healthy" // All replicas healthy
|
HealthHealthy HealthStatus = "healthy" // All replicas healthy
|
||||||
HealthDegraded HealthStatus = "degraded" // Some replicas unhealthy
|
HealthDegraded HealthStatus = "degraded" // Some replicas unhealthy
|
||||||
HealthCritical HealthStatus = "critical" // Most replicas unhealthy
|
HealthCritical HealthStatus = "critical" // Most replicas unhealthy
|
||||||
HealthFailed HealthStatus = "failed" // All replicas failed
|
HealthFailed HealthStatus = "failed" // All replicas failed
|
||||||
)
|
)
|
||||||
|
|
||||||
// ReplicationPolicy represents replication behavior configuration
|
// ReplicationPolicy represents replication behavior configuration
|
||||||
type ReplicationPolicy struct {
|
type ReplicationPolicy struct {
|
||||||
DefaultFactor int `json:"default_factor"` // Default replication factor
|
DefaultFactor int `json:"default_factor"` // Default replication factor
|
||||||
MinFactor int `json:"min_factor"` // Minimum replication factor
|
MinFactor int `json:"min_factor"` // Minimum replication factor
|
||||||
MaxFactor int `json:"max_factor"` // Maximum replication factor
|
MaxFactor int `json:"max_factor"` // Maximum replication factor
|
||||||
PreferredZones []string `json:"preferred_zones"` // Preferred availability zones
|
PreferredZones []string `json:"preferred_zones"` // Preferred availability zones
|
||||||
AvoidSameNode bool `json:"avoid_same_node"` // Avoid same physical node
|
AvoidSameNode bool `json:"avoid_same_node"` // Avoid same physical node
|
||||||
ConsistencyLevel ConsistencyLevel `json:"consistency_level"` // Consistency requirements
|
ConsistencyLevel ConsistencyLevel `json:"consistency_level"` // Consistency requirements
|
||||||
RepairThreshold float64 `json:"repair_threshold"` // Health threshold for repair
|
RepairThreshold float64 `json:"repair_threshold"` // Health threshold for repair
|
||||||
RebalanceInterval time.Duration `json:"rebalance_interval"` // Rebalancing frequency
|
RebalanceInterval time.Duration `json:"rebalance_interval"` // Rebalancing frequency
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConsistencyLevel represents consistency requirements
|
// ConsistencyLevel represents consistency requirements
|
||||||
@@ -340,12 +346,12 @@ const (
|
|||||||
|
|
||||||
// DHTStoreOptions represents options for DHT storage operations
|
// DHTStoreOptions represents options for DHT storage operations
|
||||||
type DHTStoreOptions struct {
|
type DHTStoreOptions struct {
|
||||||
ReplicationFactor int `json:"replication_factor"` // Number of replicas
|
ReplicationFactor int `json:"replication_factor"` // Number of replicas
|
||||||
TTL *time.Duration `json:"ttl,omitempty"` // Time to live
|
TTL *time.Duration `json:"ttl,omitempty"` // Time to live
|
||||||
Priority Priority `json:"priority"` // Storage priority
|
Priority Priority `json:"priority"` // Storage priority
|
||||||
Compress bool `json:"compress"` // Whether to compress
|
Compress bool `json:"compress"` // Whether to compress
|
||||||
Checksum bool `json:"checksum"` // Whether to checksum
|
Checksum bool `json:"checksum"` // Whether to checksum
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// Priority represents storage operation priority
|
// Priority represents storage operation priority
|
||||||
@@ -360,12 +366,12 @@ const (
|
|||||||
|
|
||||||
// DHTMetadata represents metadata for DHT stored data
|
// DHTMetadata represents metadata for DHT stored data
|
||||||
type DHTMetadata struct {
|
type DHTMetadata struct {
|
||||||
StoredAt time.Time `json:"stored_at"` // When stored
|
StoredAt time.Time `json:"stored_at"` // When stored
|
||||||
UpdatedAt time.Time `json:"updated_at"` // When last updated
|
UpdatedAt time.Time `json:"updated_at"` // When last updated
|
||||||
Version int64 `json:"version"` // Version number
|
Version int64 `json:"version"` // Version number
|
||||||
Size int64 `json:"size"` // Data size
|
Size int64 `json:"size"` // Data size
|
||||||
Checksum string `json:"checksum"` // Data checksum
|
Checksum string `json:"checksum"` // Data checksum
|
||||||
ReplicationFactor int `json:"replication_factor"` // Number of replicas
|
ReplicationFactor int `json:"replication_factor"` // Number of replicas
|
||||||
TTL *time.Time `json:"ttl,omitempty"` // Time to live
|
TTL *time.Time `json:"ttl,omitempty"` // Time to live
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
@@ -10,18 +10,18 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/dht"
|
|
||||||
"chorus/pkg/crypto"
|
|
||||||
"chorus/pkg/election"
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
"chorus/pkg/config"
|
"chorus/pkg/config"
|
||||||
|
"chorus/pkg/crypto"
|
||||||
|
"chorus/pkg/dht"
|
||||||
|
"chorus/pkg/election"
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// DHTContextDistributor implements ContextDistributor using CHORUS DHT infrastructure
|
// DHTContextDistributor implements ContextDistributor using CHORUS DHT infrastructure
|
||||||
type DHTContextDistributor struct {
|
type DHTContextDistributor struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
dht *dht.DHT
|
dht dht.DHT
|
||||||
roleCrypto *crypto.RoleCrypto
|
roleCrypto *crypto.RoleCrypto
|
||||||
election election.Election
|
election election.Election
|
||||||
config *config.Config
|
config *config.Config
|
||||||
@@ -37,7 +37,7 @@ type DHTContextDistributor struct {
|
|||||||
|
|
||||||
// NewDHTContextDistributor creates a new DHT-based context distributor
|
// NewDHTContextDistributor creates a new DHT-based context distributor
|
||||||
func NewDHTContextDistributor(
|
func NewDHTContextDistributor(
|
||||||
dht *dht.DHT,
|
dht dht.DHT,
|
||||||
roleCrypto *crypto.RoleCrypto,
|
roleCrypto *crypto.RoleCrypto,
|
||||||
election election.Election,
|
election election.Election,
|
||||||
config *config.Config,
|
config *config.Config,
|
||||||
@@ -147,36 +147,43 @@ func (d *DHTContextDistributor) DistributeContext(ctx context.Context, node *slu
|
|||||||
return d.recordError(fmt.Sprintf("failed to get vector clock: %v", err))
|
return d.recordError(fmt.Sprintf("failed to get vector clock: %v", err))
|
||||||
}
|
}
|
||||||
|
|
||||||
// Encrypt context for roles
|
// Prepare context payload for role encryption
|
||||||
encryptedData, err := d.roleCrypto.EncryptContextForRoles(node, roles, []string{})
|
rawContext, err := json.Marshal(node)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return d.recordError(fmt.Sprintf("failed to encrypt context: %v", err))
|
return d.recordError(fmt.Sprintf("failed to marshal context: %v", err))
|
||||||
}
|
}
|
||||||
|
|
||||||
// Create distribution metadata
|
// Create distribution metadata (checksum calculated per-role below)
|
||||||
metadata := &DistributionMetadata{
|
metadata := &DistributionMetadata{
|
||||||
Address: node.UCXLAddress,
|
Address: node.UCXLAddress,
|
||||||
Roles: roles,
|
Roles: roles,
|
||||||
Version: 1,
|
Version: 1,
|
||||||
VectorClock: clock,
|
VectorClock: clock,
|
||||||
DistributedBy: d.config.Agent.ID,
|
DistributedBy: d.config.Agent.ID,
|
||||||
DistributedAt: time.Now(),
|
DistributedAt: time.Now(),
|
||||||
ReplicationFactor: d.getReplicationFactor(),
|
ReplicationFactor: d.getReplicationFactor(),
|
||||||
Checksum: d.calculateChecksum(encryptedData),
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Store encrypted data in DHT for each role
|
// Store encrypted data in DHT for each role
|
||||||
for _, role := range roles {
|
for _, role := range roles {
|
||||||
key := d.keyGenerator.GenerateContextKey(node.UCXLAddress.String(), role)
|
key := d.keyGenerator.GenerateContextKey(node.UCXLAddress.String(), role)
|
||||||
|
|
||||||
|
cipher, fingerprint, err := d.roleCrypto.EncryptForRole(rawContext, role)
|
||||||
|
if err != nil {
|
||||||
|
return d.recordError(fmt.Sprintf("failed to encrypt context for role %s: %v", role, err))
|
||||||
|
}
|
||||||
|
|
||||||
// Create role-specific storage package
|
// Create role-specific storage package
|
||||||
storagePackage := &ContextStoragePackage{
|
storagePackage := &ContextStoragePackage{
|
||||||
EncryptedData: encryptedData,
|
EncryptedData: cipher,
|
||||||
Metadata: metadata,
|
KeyFingerprint: fingerprint,
|
||||||
Role: role,
|
Metadata: metadata,
|
||||||
StoredAt: time.Now(),
|
Role: role,
|
||||||
|
StoredAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
|
metadata.Checksum = d.calculateChecksum(cipher)
|
||||||
|
|
||||||
// Serialize for storage
|
// Serialize for storage
|
||||||
storageBytes, err := json.Marshal(storagePackage)
|
storageBytes, err := json.Marshal(storagePackage)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -252,25 +259,30 @@ func (d *DHTContextDistributor) RetrieveContext(ctx context.Context, address ucx
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Decrypt context for role
|
// Decrypt context for role
|
||||||
contextNode, err := d.roleCrypto.DecryptContextForRole(storagePackage.EncryptedData, role)
|
plain, err := d.roleCrypto.DecryptForRole(storagePackage.EncryptedData, role, storagePackage.KeyFingerprint)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, d.recordRetrievalError(fmt.Sprintf("failed to decrypt context: %v", err))
|
return nil, d.recordRetrievalError(fmt.Sprintf("failed to decrypt context: %v", err))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
var contextNode slurpContext.ContextNode
|
||||||
|
if err := json.Unmarshal(plain, &contextNode); err != nil {
|
||||||
|
return nil, d.recordRetrievalError(fmt.Sprintf("failed to decode context: %v", err))
|
||||||
|
}
|
||||||
|
|
||||||
// Convert to resolved context
|
// Convert to resolved context
|
||||||
resolvedContext := &slurpContext.ResolvedContext{
|
resolvedContext := &slurpContext.ResolvedContext{
|
||||||
UCXLAddress: contextNode.UCXLAddress,
|
UCXLAddress: contextNode.UCXLAddress,
|
||||||
Summary: contextNode.Summary,
|
Summary: contextNode.Summary,
|
||||||
Purpose: contextNode.Purpose,
|
Purpose: contextNode.Purpose,
|
||||||
Technologies: contextNode.Technologies,
|
Technologies: contextNode.Technologies,
|
||||||
Tags: contextNode.Tags,
|
Tags: contextNode.Tags,
|
||||||
Insights: contextNode.Insights,
|
Insights: contextNode.Insights,
|
||||||
ContextSourcePath: contextNode.Path,
|
ContextSourcePath: contextNode.Path,
|
||||||
InheritanceChain: []string{contextNode.Path},
|
InheritanceChain: []string{contextNode.Path},
|
||||||
ResolutionConfidence: contextNode.RAGConfidence,
|
ResolutionConfidence: contextNode.RAGConfidence,
|
||||||
BoundedDepth: 1,
|
BoundedDepth: 1,
|
||||||
GlobalContextsApplied: false,
|
GlobalContextsApplied: false,
|
||||||
ResolvedAt: time.Now(),
|
ResolvedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Update statistics
|
// Update statistics
|
||||||
@@ -304,15 +316,15 @@ func (d *DHTContextDistributor) UpdateContext(ctx context.Context, node *slurpCo
|
|||||||
|
|
||||||
// Convert existing resolved context back to context node for comparison
|
// Convert existing resolved context back to context node for comparison
|
||||||
existingNode := &slurpContext.ContextNode{
|
existingNode := &slurpContext.ContextNode{
|
||||||
Path: existingContext.ContextSourcePath,
|
Path: existingContext.ContextSourcePath,
|
||||||
UCXLAddress: existingContext.UCXLAddress,
|
UCXLAddress: existingContext.UCXLAddress,
|
||||||
Summary: existingContext.Summary,
|
Summary: existingContext.Summary,
|
||||||
Purpose: existingContext.Purpose,
|
Purpose: existingContext.Purpose,
|
||||||
Technologies: existingContext.Technologies,
|
Technologies: existingContext.Technologies,
|
||||||
Tags: existingContext.Tags,
|
Tags: existingContext.Tags,
|
||||||
Insights: existingContext.Insights,
|
Insights: existingContext.Insights,
|
||||||
RAGConfidence: existingContext.ResolutionConfidence,
|
RAGConfidence: existingContext.ResolutionConfidence,
|
||||||
GeneratedAt: existingContext.ResolvedAt,
|
GeneratedAt: existingContext.ResolvedAt,
|
||||||
}
|
}
|
||||||
|
|
||||||
// Use conflict resolver to handle the update
|
// Use conflict resolver to handle the update
|
||||||
@@ -380,13 +392,13 @@ func (d *DHTContextDistributor) Sync(ctx context.Context) (*SyncResult, error) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
result := &SyncResult{
|
result := &SyncResult{
|
||||||
SyncedContexts: 0, // Would be populated in real implementation
|
SyncedContexts: 0, // Would be populated in real implementation
|
||||||
ConflictsResolved: 0,
|
ConflictsResolved: 0,
|
||||||
Errors: []string{},
|
Errors: []string{},
|
||||||
SyncTime: time.Since(start),
|
SyncTime: time.Since(start),
|
||||||
PeersContacted: len(d.dht.GetConnectedPeers()),
|
PeersContacted: len(d.dht.GetConnectedPeers()),
|
||||||
DataTransferred: 0,
|
DataTransferred: 0,
|
||||||
SyncedAt: time.Now(),
|
SyncedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
return result, nil
|
return result, nil
|
||||||
@@ -453,28 +465,13 @@ func (d *DHTContextDistributor) calculateChecksum(data interface{}) string {
|
|||||||
return hex.EncodeToString(hash[:])
|
return hex.EncodeToString(hash[:])
|
||||||
}
|
}
|
||||||
|
|
||||||
// Ensure DHT is bootstrapped before operations
|
|
||||||
func (d *DHTContextDistributor) ensureDHTReady() error {
|
|
||||||
if !d.dht.IsBootstrapped() {
|
|
||||||
return fmt.Errorf("DHT not bootstrapped")
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Start starts the distribution service
|
// Start starts the distribution service
|
||||||
func (d *DHTContextDistributor) Start(ctx context.Context) error {
|
func (d *DHTContextDistributor) Start(ctx context.Context) error {
|
||||||
// Bootstrap DHT if not already done
|
if d.gossipProtocol != nil {
|
||||||
if !d.dht.IsBootstrapped() {
|
if err := d.gossipProtocol.StartGossip(ctx); err != nil {
|
||||||
if err := d.dht.Bootstrap(); err != nil {
|
return fmt.Errorf("failed to start gossip protocol: %w", err)
|
||||||
return fmt.Errorf("failed to bootstrap DHT: %w", err)
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// Start gossip protocol
|
|
||||||
if err := d.gossipProtocol.StartGossip(ctx); err != nil {
|
|
||||||
return fmt.Errorf("failed to start gossip protocol: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -488,22 +485,23 @@ func (d *DHTContextDistributor) Stop(ctx context.Context) error {
|
|||||||
|
|
||||||
// ContextStoragePackage represents a complete package for DHT storage
|
// ContextStoragePackage represents a complete package for DHT storage
|
||||||
type ContextStoragePackage struct {
|
type ContextStoragePackage struct {
|
||||||
EncryptedData *crypto.EncryptedContextData `json:"encrypted_data"`
|
EncryptedData []byte `json:"encrypted_data"`
|
||||||
Metadata *DistributionMetadata `json:"metadata"`
|
KeyFingerprint string `json:"key_fingerprint,omitempty"`
|
||||||
Role string `json:"role"`
|
Metadata *DistributionMetadata `json:"metadata"`
|
||||||
StoredAt time.Time `json:"stored_at"`
|
Role string `json:"role"`
|
||||||
|
StoredAt time.Time `json:"stored_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// DistributionMetadata contains metadata for distributed context
|
// DistributionMetadata contains metadata for distributed context
|
||||||
type DistributionMetadata struct {
|
type DistributionMetadata struct {
|
||||||
Address ucxl.Address `json:"address"`
|
Address ucxl.Address `json:"address"`
|
||||||
Roles []string `json:"roles"`
|
Roles []string `json:"roles"`
|
||||||
Version int64 `json:"version"`
|
Version int64 `json:"version"`
|
||||||
VectorClock *VectorClock `json:"vector_clock"`
|
VectorClock *VectorClock `json:"vector_clock"`
|
||||||
DistributedBy string `json:"distributed_by"`
|
DistributedBy string `json:"distributed_by"`
|
||||||
DistributedAt time.Time `json:"distributed_at"`
|
DistributedAt time.Time `json:"distributed_at"`
|
||||||
ReplicationFactor int `json:"replication_factor"`
|
ReplicationFactor int `json:"replication_factor"`
|
||||||
Checksum string `json:"checksum"`
|
Checksum string `json:"checksum"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// DHTKeyGenerator implements KeyGenerator interface
|
// DHTKeyGenerator implements KeyGenerator interface
|
||||||
@@ -532,65 +530,124 @@ func (kg *DHTKeyGenerator) GenerateReplicationKey(address string) string {
|
|||||||
// Component constructors - these would be implemented in separate files
|
// Component constructors - these would be implemented in separate files
|
||||||
|
|
||||||
// NewReplicationManager creates a new replication manager
|
// NewReplicationManager creates a new replication manager
|
||||||
func NewReplicationManager(dht *dht.DHT, config *config.Config) (ReplicationManager, error) {
|
func NewReplicationManager(dht dht.DHT, config *config.Config) (ReplicationManager, error) {
|
||||||
// Placeholder implementation
|
impl, err := NewReplicationManagerImpl(dht, config)
|
||||||
return &ReplicationManagerImpl{}, nil
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return impl, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewConflictResolver creates a new conflict resolver
|
// NewConflictResolver creates a new conflict resolver
|
||||||
func NewConflictResolver(dht *dht.DHT, config *config.Config) (ConflictResolver, error) {
|
func NewConflictResolver(dht dht.DHT, config *config.Config) (ConflictResolver, error) {
|
||||||
// Placeholder implementation
|
// Placeholder implementation until full resolver is wired
|
||||||
return &ConflictResolverImpl{}, nil
|
return &ConflictResolverImpl{}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewGossipProtocol creates a new gossip protocol
|
// NewGossipProtocol creates a new gossip protocol
|
||||||
func NewGossipProtocol(dht *dht.DHT, config *config.Config) (GossipProtocol, error) {
|
func NewGossipProtocol(dht dht.DHT, config *config.Config) (GossipProtocol, error) {
|
||||||
// Placeholder implementation
|
impl, err := NewGossipProtocolImpl(dht, config)
|
||||||
return &GossipProtocolImpl{}, nil
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return impl, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewNetworkManager creates a new network manager
|
// NewNetworkManager creates a new network manager
|
||||||
func NewNetworkManager(dht *dht.DHT, config *config.Config) (NetworkManager, error) {
|
func NewNetworkManager(dht dht.DHT, config *config.Config) (NetworkManager, error) {
|
||||||
// Placeholder implementation
|
impl, err := NewNetworkManagerImpl(dht, config)
|
||||||
return &NetworkManagerImpl{}, nil
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return impl, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewVectorClockManager creates a new vector clock manager
|
// NewVectorClockManager creates a new vector clock manager
|
||||||
func NewVectorClockManager(dht *dht.DHT, nodeID string) (VectorClockManager, error) {
|
func NewVectorClockManager(dht dht.DHT, nodeID string) (VectorClockManager, error) {
|
||||||
// Placeholder implementation
|
return &defaultVectorClockManager{
|
||||||
return &VectorClockManagerImpl{}, nil
|
clocks: make(map[string]*VectorClock),
|
||||||
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// Placeholder structs for components - these would be properly implemented
|
// ConflictResolverImpl is a temporary stub until the full resolver is implemented
|
||||||
|
|
||||||
type ReplicationManagerImpl struct{}
|
|
||||||
func (rm *ReplicationManagerImpl) EnsureReplication(ctx context.Context, address ucxl.Address, factor int) error { return nil }
|
|
||||||
func (rm *ReplicationManagerImpl) GetReplicationStatus(ctx context.Context, address ucxl.Address) (*ReplicaHealth, error) {
|
|
||||||
return &ReplicaHealth{}, nil
|
|
||||||
}
|
|
||||||
func (rm *ReplicationManagerImpl) SetReplicationFactor(factor int) error { return nil }
|
|
||||||
|
|
||||||
type ConflictResolverImpl struct{}
|
type ConflictResolverImpl struct{}
|
||||||
|
|
||||||
func (cr *ConflictResolverImpl) ResolveConflict(ctx context.Context, local, remote *slurpContext.ContextNode) (*ConflictResolution, error) {
|
func (cr *ConflictResolverImpl) ResolveConflict(ctx context.Context, local, remote *slurpContext.ContextNode) (*ConflictResolution, error) {
|
||||||
return &ConflictResolution{
|
return &ConflictResolution{
|
||||||
Address: local.UCXLAddress,
|
Address: local.UCXLAddress,
|
||||||
ResolutionType: ResolutionMerged,
|
ResolutionType: ResolutionMerged,
|
||||||
MergedContext: local,
|
MergedContext: local,
|
||||||
ResolutionTime: time.Millisecond,
|
ResolutionTime: time.Millisecond,
|
||||||
ResolvedAt: time.Now(),
|
ResolvedAt: time.Now(),
|
||||||
Confidence: 0.95,
|
Confidence: 0.95,
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
type GossipProtocolImpl struct{}
|
// defaultVectorClockManager provides a minimal vector clock store for SEC-SLURP scaffolding.
|
||||||
func (gp *GossipProtocolImpl) StartGossip(ctx context.Context) error { return nil }
|
type defaultVectorClockManager struct {
|
||||||
|
mu sync.Mutex
|
||||||
|
clocks map[string]*VectorClock
|
||||||
|
}
|
||||||
|
|
||||||
type NetworkManagerImpl struct{}
|
func (vcm *defaultVectorClockManager) GetClock(nodeID string) (*VectorClock, error) {
|
||||||
|
vcm.mu.Lock()
|
||||||
|
defer vcm.mu.Unlock()
|
||||||
|
|
||||||
type VectorClockManagerImpl struct{}
|
if clock, ok := vcm.clocks[nodeID]; ok {
|
||||||
func (vcm *VectorClockManagerImpl) GetClock(nodeID string) (*VectorClock, error) {
|
return clock, nil
|
||||||
return &VectorClock{
|
}
|
||||||
Clock: map[string]int64{nodeID: time.Now().Unix()},
|
clock := &VectorClock{
|
||||||
|
Clock: map[string]int64{nodeID: time.Now().Unix()},
|
||||||
UpdatedAt: time.Now(),
|
UpdatedAt: time.Now(),
|
||||||
}, nil
|
}
|
||||||
|
vcm.clocks[nodeID] = clock
|
||||||
|
return clock, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (vcm *defaultVectorClockManager) UpdateClock(nodeID string, clock *VectorClock) error {
|
||||||
|
vcm.mu.Lock()
|
||||||
|
defer vcm.mu.Unlock()
|
||||||
|
|
||||||
|
vcm.clocks[nodeID] = clock
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (vcm *defaultVectorClockManager) CompareClock(clock1, clock2 *VectorClock) ClockRelation {
|
||||||
|
if clock1 == nil || clock2 == nil {
|
||||||
|
return ClockConcurrent
|
||||||
|
}
|
||||||
|
if clock1.UpdatedAt.Before(clock2.UpdatedAt) {
|
||||||
|
return ClockBefore
|
||||||
|
}
|
||||||
|
if clock1.UpdatedAt.After(clock2.UpdatedAt) {
|
||||||
|
return ClockAfter
|
||||||
|
}
|
||||||
|
return ClockEqual
|
||||||
|
}
|
||||||
|
|
||||||
|
func (vcm *defaultVectorClockManager) MergeClock(clocks []*VectorClock) *VectorClock {
|
||||||
|
if len(clocks) == 0 {
|
||||||
|
return &VectorClock{
|
||||||
|
Clock: map[string]int64{},
|
||||||
|
UpdatedAt: time.Now(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
merged := &VectorClock{
|
||||||
|
Clock: make(map[string]int64),
|
||||||
|
UpdatedAt: clocks[0].UpdatedAt,
|
||||||
|
}
|
||||||
|
for _, clock := range clocks {
|
||||||
|
if clock == nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if clock.UpdatedAt.After(merged.UpdatedAt) {
|
||||||
|
merged.UpdatedAt = clock.UpdatedAt
|
||||||
|
}
|
||||||
|
for node, value := range clock.Clock {
|
||||||
|
if existing, ok := merged.Clock[node]; !ok || value > existing {
|
||||||
|
merged.Clock[node] = value
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return merged
|
||||||
}
|
}
|
||||||
@@ -15,48 +15,48 @@ import (
|
|||||||
|
|
||||||
// MonitoringSystem provides comprehensive monitoring for the distributed context system
|
// MonitoringSystem provides comprehensive monitoring for the distributed context system
|
||||||
type MonitoringSystem struct {
|
type MonitoringSystem struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
config *config.Config
|
config *config.Config
|
||||||
metrics *MetricsCollector
|
metrics *MetricsCollector
|
||||||
healthChecks *HealthCheckManager
|
healthChecks *HealthCheckManager
|
||||||
alertManager *AlertManager
|
alertManager *AlertManager
|
||||||
dashboard *DashboardServer
|
dashboard *DashboardServer
|
||||||
logManager *LogManager
|
logManager *LogManager
|
||||||
traceManager *TraceManager
|
traceManager *TraceManager
|
||||||
|
|
||||||
// State
|
// State
|
||||||
running bool
|
running bool
|
||||||
monitoringPort int
|
monitoringPort int
|
||||||
updateInterval time.Duration
|
updateInterval time.Duration
|
||||||
retentionPeriod time.Duration
|
retentionPeriod time.Duration
|
||||||
}
|
}
|
||||||
|
|
||||||
// MetricsCollector collects and aggregates system metrics
|
// MetricsCollector collects and aggregates system metrics
|
||||||
type MetricsCollector struct {
|
type MetricsCollector struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
timeSeries map[string]*TimeSeries
|
timeSeries map[string]*TimeSeries
|
||||||
counters map[string]*Counter
|
counters map[string]*Counter
|
||||||
gauges map[string]*Gauge
|
gauges map[string]*Gauge
|
||||||
histograms map[string]*Histogram
|
histograms map[string]*Histogram
|
||||||
customMetrics map[string]*CustomMetric
|
customMetrics map[string]*CustomMetric
|
||||||
aggregatedStats *AggregatedStatistics
|
aggregatedStats *AggregatedStatistics
|
||||||
exporters []MetricsExporter
|
exporters []MetricsExporter
|
||||||
lastCollection time.Time
|
lastCollection time.Time
|
||||||
}
|
}
|
||||||
|
|
||||||
// TimeSeries represents a time-series metric
|
// TimeSeries represents a time-series metric
|
||||||
type TimeSeries struct {
|
type TimeSeries struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Labels map[string]string `json:"labels"`
|
Labels map[string]string `json:"labels"`
|
||||||
DataPoints []*TimeSeriesPoint `json:"data_points"`
|
DataPoints []*TimeSeriesPoint `json:"data_points"`
|
||||||
RetentionTTL time.Duration `json:"retention_ttl"`
|
RetentionTTL time.Duration `json:"retention_ttl"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// TimeSeriesPoint represents a single data point in a time series
|
// TimeSeriesPoint represents a single data point in a time series
|
||||||
type TimeSeriesPoint struct {
|
type TimeSeriesPoint struct {
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Value float64 `json:"value"`
|
Value float64 `json:"value"`
|
||||||
Labels map[string]string `json:"labels,omitempty"`
|
Labels map[string]string `json:"labels,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -64,7 +64,7 @@ type TimeSeriesPoint struct {
|
|||||||
type Counter struct {
|
type Counter struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Value int64 `json:"value"`
|
Value int64 `json:"value"`
|
||||||
Rate float64 `json:"rate"` // per second
|
Rate float64 `json:"rate"` // per second
|
||||||
Labels map[string]string `json:"labels"`
|
Labels map[string]string `json:"labels"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
@@ -82,13 +82,13 @@ type Gauge struct {
|
|||||||
|
|
||||||
// Histogram represents distribution of values
|
// Histogram represents distribution of values
|
||||||
type Histogram struct {
|
type Histogram struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Buckets map[float64]int64 `json:"buckets"`
|
Buckets map[float64]int64 `json:"buckets"`
|
||||||
Count int64 `json:"count"`
|
Count int64 `json:"count"`
|
||||||
Sum float64 `json:"sum"`
|
Sum float64 `json:"sum"`
|
||||||
Labels map[string]string `json:"labels"`
|
Labels map[string]string `json:"labels"`
|
||||||
Percentiles map[float64]float64 `json:"percentiles"`
|
Percentiles map[float64]float64 `json:"percentiles"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// CustomMetric represents application-specific metrics
|
// CustomMetric represents application-specific metrics
|
||||||
@@ -114,81 +114,81 @@ const (
|
|||||||
|
|
||||||
// AggregatedStatistics provides high-level system statistics
|
// AggregatedStatistics provides high-level system statistics
|
||||||
type AggregatedStatistics struct {
|
type AggregatedStatistics struct {
|
||||||
SystemOverview *SystemOverview `json:"system_overview"`
|
SystemOverview *SystemOverview `json:"system_overview"`
|
||||||
PerformanceMetrics *PerformanceOverview `json:"performance_metrics"`
|
PerformanceMetrics *PerformanceOverview `json:"performance_metrics"`
|
||||||
HealthMetrics *HealthOverview `json:"health_metrics"`
|
HealthMetrics *HealthOverview `json:"health_metrics"`
|
||||||
ErrorMetrics *ErrorOverview `json:"error_metrics"`
|
ErrorMetrics *ErrorOverview `json:"error_metrics"`
|
||||||
ResourceMetrics *ResourceOverview `json:"resource_metrics"`
|
ResourceMetrics *ResourceOverview `json:"resource_metrics"`
|
||||||
NetworkMetrics *NetworkOverview `json:"network_metrics"`
|
NetworkMetrics *NetworkOverview `json:"network_metrics"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// SystemOverview provides system-wide overview metrics
|
// SystemOverview provides system-wide overview metrics
|
||||||
type SystemOverview struct {
|
type SystemOverview struct {
|
||||||
TotalNodes int `json:"total_nodes"`
|
TotalNodes int `json:"total_nodes"`
|
||||||
HealthyNodes int `json:"healthy_nodes"`
|
HealthyNodes int `json:"healthy_nodes"`
|
||||||
TotalContexts int64 `json:"total_contexts"`
|
TotalContexts int64 `json:"total_contexts"`
|
||||||
DistributedContexts int64 `json:"distributed_contexts"`
|
DistributedContexts int64 `json:"distributed_contexts"`
|
||||||
ReplicationFactor float64 `json:"average_replication_factor"`
|
ReplicationFactor float64 `json:"average_replication_factor"`
|
||||||
SystemUptime time.Duration `json:"system_uptime"`
|
SystemUptime time.Duration `json:"system_uptime"`
|
||||||
ClusterVersion string `json:"cluster_version"`
|
ClusterVersion string `json:"cluster_version"`
|
||||||
LastRestart time.Time `json:"last_restart"`
|
LastRestart time.Time `json:"last_restart"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// PerformanceOverview provides performance metrics
|
// PerformanceOverview provides performance metrics
|
||||||
type PerformanceOverview struct {
|
type PerformanceOverview struct {
|
||||||
RequestsPerSecond float64 `json:"requests_per_second"`
|
RequestsPerSecond float64 `json:"requests_per_second"`
|
||||||
AverageResponseTime time.Duration `json:"average_response_time"`
|
AverageResponseTime time.Duration `json:"average_response_time"`
|
||||||
P95ResponseTime time.Duration `json:"p95_response_time"`
|
P95ResponseTime time.Duration `json:"p95_response_time"`
|
||||||
P99ResponseTime time.Duration `json:"p99_response_time"`
|
P99ResponseTime time.Duration `json:"p99_response_time"`
|
||||||
Throughput float64 `json:"throughput_mbps"`
|
Throughput float64 `json:"throughput_mbps"`
|
||||||
CacheHitRate float64 `json:"cache_hit_rate"`
|
CacheHitRate float64 `json:"cache_hit_rate"`
|
||||||
QueueDepth int `json:"queue_depth"`
|
QueueDepth int `json:"queue_depth"`
|
||||||
ActiveConnections int `json:"active_connections"`
|
ActiveConnections int `json:"active_connections"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// HealthOverview provides health-related metrics
|
// HealthOverview provides health-related metrics
|
||||||
type HealthOverview struct {
|
type HealthOverview struct {
|
||||||
OverallHealthScore float64 `json:"overall_health_score"`
|
OverallHealthScore float64 `json:"overall_health_score"`
|
||||||
ComponentHealth map[string]float64 `json:"component_health"`
|
ComponentHealth map[string]float64 `json:"component_health"`
|
||||||
FailedHealthChecks int `json:"failed_health_checks"`
|
FailedHealthChecks int `json:"failed_health_checks"`
|
||||||
LastHealthCheck time.Time `json:"last_health_check"`
|
LastHealthCheck time.Time `json:"last_health_check"`
|
||||||
HealthTrend string `json:"health_trend"` // improving, stable, degrading
|
HealthTrend string `json:"health_trend"` // improving, stable, degrading
|
||||||
CriticalAlerts int `json:"critical_alerts"`
|
CriticalAlerts int `json:"critical_alerts"`
|
||||||
WarningAlerts int `json:"warning_alerts"`
|
WarningAlerts int `json:"warning_alerts"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ErrorOverview provides error-related metrics
|
// ErrorOverview provides error-related metrics
|
||||||
type ErrorOverview struct {
|
type ErrorOverview struct {
|
||||||
TotalErrors int64 `json:"total_errors"`
|
TotalErrors int64 `json:"total_errors"`
|
||||||
ErrorRate float64 `json:"error_rate"`
|
ErrorRate float64 `json:"error_rate"`
|
||||||
ErrorsByType map[string]int64 `json:"errors_by_type"`
|
ErrorsByType map[string]int64 `json:"errors_by_type"`
|
||||||
ErrorsByComponent map[string]int64 `json:"errors_by_component"`
|
ErrorsByComponent map[string]int64 `json:"errors_by_component"`
|
||||||
LastError *ErrorEvent `json:"last_error"`
|
LastError *ErrorEvent `json:"last_error"`
|
||||||
ErrorTrend string `json:"error_trend"` // increasing, stable, decreasing
|
ErrorTrend string `json:"error_trend"` // increasing, stable, decreasing
|
||||||
}
|
}
|
||||||
|
|
||||||
// ResourceOverview provides resource utilization metrics
|
// ResourceOverview provides resource utilization metrics
|
||||||
type ResourceOverview struct {
|
type ResourceOverview struct {
|
||||||
CPUUtilization float64 `json:"cpu_utilization"`
|
CPUUtilization float64 `json:"cpu_utilization"`
|
||||||
MemoryUtilization float64 `json:"memory_utilization"`
|
MemoryUtilization float64 `json:"memory_utilization"`
|
||||||
DiskUtilization float64 `json:"disk_utilization"`
|
DiskUtilization float64 `json:"disk_utilization"`
|
||||||
NetworkUtilization float64 `json:"network_utilization"`
|
NetworkUtilization float64 `json:"network_utilization"`
|
||||||
StorageUsed int64 `json:"storage_used_bytes"`
|
StorageUsed int64 `json:"storage_used_bytes"`
|
||||||
StorageAvailable int64 `json:"storage_available_bytes"`
|
StorageAvailable int64 `json:"storage_available_bytes"`
|
||||||
FileDescriptors int `json:"open_file_descriptors"`
|
FileDescriptors int `json:"open_file_descriptors"`
|
||||||
Goroutines int `json:"goroutines"`
|
Goroutines int `json:"goroutines"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NetworkOverview provides network-related metrics
|
// NetworkOverview provides network-related metrics
|
||||||
type NetworkOverview struct {
|
type NetworkOverview struct {
|
||||||
TotalConnections int `json:"total_connections"`
|
TotalConnections int `json:"total_connections"`
|
||||||
ActiveConnections int `json:"active_connections"`
|
ActiveConnections int `json:"active_connections"`
|
||||||
BandwidthUtilization float64 `json:"bandwidth_utilization"`
|
BandwidthUtilization float64 `json:"bandwidth_utilization"`
|
||||||
PacketLossRate float64 `json:"packet_loss_rate"`
|
PacketLossRate float64 `json:"packet_loss_rate"`
|
||||||
AverageLatency time.Duration `json:"average_latency"`
|
AverageLatency time.Duration `json:"average_latency"`
|
||||||
NetworkPartitions int `json:"network_partitions"`
|
NetworkPartitions int `json:"network_partitions"`
|
||||||
DataTransferred int64 `json:"data_transferred_bytes"`
|
DataTransferred int64 `json:"data_transferred_bytes"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// MetricsExporter exports metrics to external systems
|
// MetricsExporter exports metrics to external systems
|
||||||
@@ -200,49 +200,49 @@ type MetricsExporter interface {
|
|||||||
|
|
||||||
// HealthCheckManager manages system health checks
|
// HealthCheckManager manages system health checks
|
||||||
type HealthCheckManager struct {
|
type HealthCheckManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
healthChecks map[string]*HealthCheck
|
healthChecks map[string]*HealthCheck
|
||||||
checkResults map[string]*HealthCheckResult
|
checkResults map[string]*HealthCheckResult
|
||||||
schedules map[string]*HealthCheckSchedule
|
schedules map[string]*HealthCheckSchedule
|
||||||
running bool
|
running bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// HealthCheck represents a single health check
|
// HealthCheck represents a single health check
|
||||||
type HealthCheck struct {
|
type HealthCheck struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
CheckType HealthCheckType `json:"check_type"`
|
CheckType HealthCheckType `json:"check_type"`
|
||||||
Target string `json:"target"`
|
Target string `json:"target"`
|
||||||
Timeout time.Duration `json:"timeout"`
|
Timeout time.Duration `json:"timeout"`
|
||||||
Interval time.Duration `json:"interval"`
|
Interval time.Duration `json:"interval"`
|
||||||
Retries int `json:"retries"`
|
Retries int `json:"retries"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
Enabled bool `json:"enabled"`
|
Enabled bool `json:"enabled"`
|
||||||
CheckFunction func(context.Context) (*HealthCheckResult, error) `json:"-"`
|
CheckFunction func(context.Context) (*HealthCheckResult, error) `json:"-"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// HealthCheckType represents different types of health checks
|
// HealthCheckType represents different types of health checks
|
||||||
type HealthCheckType string
|
type HealthCheckType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
HealthCheckTypeHTTP HealthCheckType = "http"
|
HealthCheckTypeHTTP HealthCheckType = "http"
|
||||||
HealthCheckTypeTCP HealthCheckType = "tcp"
|
HealthCheckTypeTCP HealthCheckType = "tcp"
|
||||||
HealthCheckTypeCustom HealthCheckType = "custom"
|
HealthCheckTypeCustom HealthCheckType = "custom"
|
||||||
HealthCheckTypeComponent HealthCheckType = "component"
|
HealthCheckTypeComponent HealthCheckType = "component"
|
||||||
HealthCheckTypeDatabase HealthCheckType = "database"
|
HealthCheckTypeDatabase HealthCheckType = "database"
|
||||||
HealthCheckTypeService HealthCheckType = "service"
|
HealthCheckTypeService HealthCheckType = "service"
|
||||||
)
|
)
|
||||||
|
|
||||||
// HealthCheckResult represents the result of a health check
|
// HealthCheckResult represents the result of a health check
|
||||||
type HealthCheckResult struct {
|
type HealthCheckResult struct {
|
||||||
CheckName string `json:"check_name"`
|
CheckName string `json:"check_name"`
|
||||||
Status HealthCheckStatus `json:"status"`
|
Status HealthCheckStatus `json:"status"`
|
||||||
ResponseTime time.Duration `json:"response_time"`
|
ResponseTime time.Duration `json:"response_time"`
|
||||||
Message string `json:"message"`
|
Message string `json:"message"`
|
||||||
Details map[string]interface{} `json:"details"`
|
Details map[string]interface{} `json:"details"`
|
||||||
Error string `json:"error,omitempty"`
|
Error string `json:"error,omitempty"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Attempt int `json:"attempt"`
|
Attempt int `json:"attempt"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// HealthCheckStatus represents the status of a health check
|
// HealthCheckStatus represents the status of a health check
|
||||||
@@ -258,45 +258,45 @@ const (
|
|||||||
|
|
||||||
// HealthCheckSchedule defines when health checks should run
|
// HealthCheckSchedule defines when health checks should run
|
||||||
type HealthCheckSchedule struct {
|
type HealthCheckSchedule struct {
|
||||||
CheckName string `json:"check_name"`
|
CheckName string `json:"check_name"`
|
||||||
Interval time.Duration `json:"interval"`
|
Interval time.Duration `json:"interval"`
|
||||||
NextRun time.Time `json:"next_run"`
|
NextRun time.Time `json:"next_run"`
|
||||||
LastRun time.Time `json:"last_run"`
|
LastRun time.Time `json:"last_run"`
|
||||||
Enabled bool `json:"enabled"`
|
Enabled bool `json:"enabled"`
|
||||||
FailureCount int `json:"failure_count"`
|
FailureCount int `json:"failure_count"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AlertManager manages system alerts and notifications
|
// AlertManager manages system alerts and notifications
|
||||||
type AlertManager struct {
|
type AlertManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
alertRules map[string]*AlertRule
|
alertRules map[string]*AlertRule
|
||||||
activeAlerts map[string]*Alert
|
activeAlerts map[string]*Alert
|
||||||
alertHistory []*Alert
|
alertHistory []*Alert
|
||||||
notifiers []AlertNotifier
|
notifiers []AlertNotifier
|
||||||
silences map[string]*AlertSilence
|
silences map[string]*AlertSilence
|
||||||
running bool
|
running bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// AlertRule defines conditions for triggering alerts
|
// AlertRule defines conditions for triggering alerts
|
||||||
type AlertRule struct {
|
type AlertRule struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
Severity AlertSeverity `json:"severity"`
|
Severity AlertSeverity `json:"severity"`
|
||||||
Conditions []*AlertCondition `json:"conditions"`
|
Conditions []*AlertCondition `json:"conditions"`
|
||||||
Duration time.Duration `json:"duration"` // How long condition must persist
|
Duration time.Duration `json:"duration"` // How long condition must persist
|
||||||
Cooldown time.Duration `json:"cooldown"` // Minimum time between alerts
|
Cooldown time.Duration `json:"cooldown"` // Minimum time between alerts
|
||||||
Labels map[string]string `json:"labels"`
|
Labels map[string]string `json:"labels"`
|
||||||
Annotations map[string]string `json:"annotations"`
|
Annotations map[string]string `json:"annotations"`
|
||||||
Enabled bool `json:"enabled"`
|
Enabled bool `json:"enabled"`
|
||||||
LastTriggered *time.Time `json:"last_triggered,omitempty"`
|
LastTriggered *time.Time `json:"last_triggered,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AlertCondition defines a single condition for an alert
|
// AlertCondition defines a single condition for an alert
|
||||||
type AlertCondition struct {
|
type AlertCondition struct {
|
||||||
MetricName string `json:"metric_name"`
|
MetricName string `json:"metric_name"`
|
||||||
Operator ConditionOperator `json:"operator"`
|
Operator ConditionOperator `json:"operator"`
|
||||||
Threshold float64 `json:"threshold"`
|
Threshold float64 `json:"threshold"`
|
||||||
Duration time.Duration `json:"duration"`
|
Duration time.Duration `json:"duration"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConditionOperator represents comparison operators for alert conditions
|
// ConditionOperator represents comparison operators for alert conditions
|
||||||
@@ -313,39 +313,39 @@ const (
|
|||||||
|
|
||||||
// Alert represents an active alert
|
// Alert represents an active alert
|
||||||
type Alert struct {
|
type Alert struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
RuleName string `json:"rule_name"`
|
RuleName string `json:"rule_name"`
|
||||||
Severity AlertSeverity `json:"severity"`
|
Severity AlertSeverity `json:"severity"`
|
||||||
Status AlertStatus `json:"status"`
|
Status AlertStatus `json:"status"`
|
||||||
Message string `json:"message"`
|
Message string `json:"message"`
|
||||||
Details map[string]interface{} `json:"details"`
|
Details map[string]interface{} `json:"details"`
|
||||||
Labels map[string]string `json:"labels"`
|
Labels map[string]string `json:"labels"`
|
||||||
Annotations map[string]string `json:"annotations"`
|
Annotations map[string]string `json:"annotations"`
|
||||||
StartsAt time.Time `json:"starts_at"`
|
StartsAt time.Time `json:"starts_at"`
|
||||||
EndsAt *time.Time `json:"ends_at,omitempty"`
|
EndsAt *time.Time `json:"ends_at,omitempty"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
AckBy string `json:"acknowledged_by,omitempty"`
|
AckBy string `json:"acknowledged_by,omitempty"`
|
||||||
AckAt *time.Time `json:"acknowledged_at,omitempty"`
|
AckAt *time.Time `json:"acknowledged_at,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AlertSeverity represents the severity level of an alert
|
// AlertSeverity represents the severity level of an alert
|
||||||
type AlertSeverity string
|
type AlertSeverity string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
SeverityInfo AlertSeverity = "info"
|
AlertAlertSeverityInfo AlertSeverity = "info"
|
||||||
SeverityWarning AlertSeverity = "warning"
|
AlertAlertSeverityWarning AlertSeverity = "warning"
|
||||||
SeverityError AlertSeverity = "error"
|
AlertAlertSeverityError AlertSeverity = "error"
|
||||||
SeverityCritical AlertSeverity = "critical"
|
AlertAlertSeverityCritical AlertSeverity = "critical"
|
||||||
)
|
)
|
||||||
|
|
||||||
// AlertStatus represents the current status of an alert
|
// AlertStatus represents the current status of an alert
|
||||||
type AlertStatus string
|
type AlertStatus string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
AlertStatusFiring AlertStatus = "firing"
|
AlertStatusFiring AlertStatus = "firing"
|
||||||
AlertStatusResolved AlertStatus = "resolved"
|
AlertStatusResolved AlertStatus = "resolved"
|
||||||
AlertStatusAcknowledged AlertStatus = "acknowledged"
|
AlertStatusAcknowledged AlertStatus = "acknowledged"
|
||||||
AlertStatusSilenced AlertStatus = "silenced"
|
AlertStatusSilenced AlertStatus = "silenced"
|
||||||
)
|
)
|
||||||
|
|
||||||
// AlertNotifier sends alert notifications
|
// AlertNotifier sends alert notifications
|
||||||
@@ -357,64 +357,64 @@ type AlertNotifier interface {
|
|||||||
|
|
||||||
// AlertSilence represents a silenced alert
|
// AlertSilence represents a silenced alert
|
||||||
type AlertSilence struct {
|
type AlertSilence struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Matchers map[string]string `json:"matchers"`
|
Matchers map[string]string `json:"matchers"`
|
||||||
StartTime time.Time `json:"start_time"`
|
StartTime time.Time `json:"start_time"`
|
||||||
EndTime time.Time `json:"end_time"`
|
EndTime time.Time `json:"end_time"`
|
||||||
CreatedBy string `json:"created_by"`
|
CreatedBy string `json:"created_by"`
|
||||||
Comment string `json:"comment"`
|
Comment string `json:"comment"`
|
||||||
Active bool `json:"active"`
|
Active bool `json:"active"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// DashboardServer provides web-based monitoring dashboard
|
// DashboardServer provides web-based monitoring dashboard
|
||||||
type DashboardServer struct {
|
type DashboardServer struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
server *http.Server
|
server *http.Server
|
||||||
dashboards map[string]*Dashboard
|
dashboards map[string]*Dashboard
|
||||||
widgets map[string]*Widget
|
widgets map[string]*Widget
|
||||||
customPages map[string]*CustomPage
|
customPages map[string]*CustomPage
|
||||||
running bool
|
running bool
|
||||||
port int
|
port int
|
||||||
}
|
}
|
||||||
|
|
||||||
// Dashboard represents a monitoring dashboard
|
// Dashboard represents a monitoring dashboard
|
||||||
type Dashboard struct {
|
type Dashboard struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
Widgets []*Widget `json:"widgets"`
|
Widgets []*Widget `json:"widgets"`
|
||||||
Layout *DashboardLayout `json:"layout"`
|
Layout *DashboardLayout `json:"layout"`
|
||||||
Settings *DashboardSettings `json:"settings"`
|
Settings *DashboardSettings `json:"settings"`
|
||||||
CreatedBy string `json:"created_by"`
|
CreatedBy string `json:"created_by"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
UpdatedAt time.Time `json:"updated_at"`
|
UpdatedAt time.Time `json:"updated_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// Widget represents a dashboard widget
|
// Widget represents a dashboard widget
|
||||||
type Widget struct {
|
type Widget struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Type WidgetType `json:"type"`
|
Type WidgetType `json:"type"`
|
||||||
Title string `json:"title"`
|
Title string `json:"title"`
|
||||||
DataSource string `json:"data_source"`
|
DataSource string `json:"data_source"`
|
||||||
Query string `json:"query"`
|
Query string `json:"query"`
|
||||||
Settings map[string]interface{} `json:"settings"`
|
Settings map[string]interface{} `json:"settings"`
|
||||||
Position *WidgetPosition `json:"position"`
|
Position *WidgetPosition `json:"position"`
|
||||||
RefreshRate time.Duration `json:"refresh_rate"`
|
RefreshRate time.Duration `json:"refresh_rate"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// WidgetType represents different types of dashboard widgets
|
// WidgetType represents different types of dashboard widgets
|
||||||
type WidgetType string
|
type WidgetType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
WidgetTypeMetric WidgetType = "metric"
|
WidgetTypeMetric WidgetType = "metric"
|
||||||
WidgetTypeChart WidgetType = "chart"
|
WidgetTypeChart WidgetType = "chart"
|
||||||
WidgetTypeTable WidgetType = "table"
|
WidgetTypeTable WidgetType = "table"
|
||||||
WidgetTypeAlert WidgetType = "alert"
|
WidgetTypeAlert WidgetType = "alert"
|
||||||
WidgetTypeHealth WidgetType = "health"
|
WidgetTypeHealth WidgetType = "health"
|
||||||
WidgetTypeTopology WidgetType = "topology"
|
WidgetTypeTopology WidgetType = "topology"
|
||||||
WidgetTypeLog WidgetType = "log"
|
WidgetTypeLog WidgetType = "log"
|
||||||
WidgetTypeCustom WidgetType = "custom"
|
WidgetTypeCustom WidgetType = "custom"
|
||||||
)
|
)
|
||||||
|
|
||||||
// WidgetPosition defines widget position and size
|
// WidgetPosition defines widget position and size
|
||||||
@@ -427,11 +427,11 @@ type WidgetPosition struct {
|
|||||||
|
|
||||||
// DashboardLayout defines dashboard layout settings
|
// DashboardLayout defines dashboard layout settings
|
||||||
type DashboardLayout struct {
|
type DashboardLayout struct {
|
||||||
Columns int `json:"columns"`
|
Columns int `json:"columns"`
|
||||||
RowHeight int `json:"row_height"`
|
RowHeight int `json:"row_height"`
|
||||||
Margins [2]int `json:"margins"` // [x, y]
|
Margins [2]int `json:"margins"` // [x, y]
|
||||||
Spacing [2]int `json:"spacing"` // [x, y]
|
Spacing [2]int `json:"spacing"` // [x, y]
|
||||||
Breakpoints map[string]int `json:"breakpoints"`
|
Breakpoints map[string]int `json:"breakpoints"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// DashboardSettings contains dashboard configuration
|
// DashboardSettings contains dashboard configuration
|
||||||
@@ -446,43 +446,43 @@ type DashboardSettings struct {
|
|||||||
|
|
||||||
// CustomPage represents a custom monitoring page
|
// CustomPage represents a custom monitoring page
|
||||||
type CustomPage struct {
|
type CustomPage struct {
|
||||||
Path string `json:"path"`
|
Path string `json:"path"`
|
||||||
Title string `json:"title"`
|
Title string `json:"title"`
|
||||||
Content string `json:"content"`
|
Content string `json:"content"`
|
||||||
ContentType string `json:"content_type"`
|
ContentType string `json:"content_type"`
|
||||||
Handler http.HandlerFunc `json:"-"`
|
Handler http.HandlerFunc `json:"-"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// LogManager manages system logs and log analysis
|
// LogManager manages system logs and log analysis
|
||||||
type LogManager struct {
|
type LogManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
logSources map[string]*LogSource
|
logSources map[string]*LogSource
|
||||||
logEntries []*LogEntry
|
logEntries []*LogEntry
|
||||||
logAnalyzers []LogAnalyzer
|
logAnalyzers []LogAnalyzer
|
||||||
retentionPolicy *LogRetentionPolicy
|
retentionPolicy *LogRetentionPolicy
|
||||||
running bool
|
running bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// LogSource represents a source of log data
|
// LogSource represents a source of log data
|
||||||
type LogSource struct {
|
type LogSource struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Type LogSourceType `json:"type"`
|
Type LogSourceType `json:"type"`
|
||||||
Location string `json:"location"`
|
Location string `json:"location"`
|
||||||
Format LogFormat `json:"format"`
|
Format LogFormat `json:"format"`
|
||||||
Labels map[string]string `json:"labels"`
|
Labels map[string]string `json:"labels"`
|
||||||
Enabled bool `json:"enabled"`
|
Enabled bool `json:"enabled"`
|
||||||
LastRead time.Time `json:"last_read"`
|
LastRead time.Time `json:"last_read"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// LogSourceType represents different types of log sources
|
// LogSourceType represents different types of log sources
|
||||||
type LogSourceType string
|
type LogSourceType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
LogSourceTypeFile LogSourceType = "file"
|
LogSourceTypeFile LogSourceType = "file"
|
||||||
LogSourceTypeHTTP LogSourceType = "http"
|
LogSourceTypeHTTP LogSourceType = "http"
|
||||||
LogSourceTypeStream LogSourceType = "stream"
|
LogSourceTypeStream LogSourceType = "stream"
|
||||||
LogSourceTypeDatabase LogSourceType = "database"
|
LogSourceTypeDatabase LogSourceType = "database"
|
||||||
LogSourceTypeCustom LogSourceType = "custom"
|
LogSourceTypeCustom LogSourceType = "custom"
|
||||||
)
|
)
|
||||||
|
|
||||||
// LogFormat represents log entry format
|
// LogFormat represents log entry format
|
||||||
@@ -497,14 +497,14 @@ const (
|
|||||||
|
|
||||||
// LogEntry represents a single log entry
|
// LogEntry represents a single log entry
|
||||||
type LogEntry struct {
|
type LogEntry struct {
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Level LogLevel `json:"level"`
|
Level LogLevel `json:"level"`
|
||||||
Source string `json:"source"`
|
Source string `json:"source"`
|
||||||
Message string `json:"message"`
|
Message string `json:"message"`
|
||||||
Fields map[string]interface{} `json:"fields"`
|
Fields map[string]interface{} `json:"fields"`
|
||||||
Labels map[string]string `json:"labels"`
|
Labels map[string]string `json:"labels"`
|
||||||
TraceID string `json:"trace_id,omitempty"`
|
TraceID string `json:"trace_id,omitempty"`
|
||||||
SpanID string `json:"span_id,omitempty"`
|
SpanID string `json:"span_id,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// LogLevel represents log entry severity
|
// LogLevel represents log entry severity
|
||||||
@@ -527,22 +527,22 @@ type LogAnalyzer interface {
|
|||||||
|
|
||||||
// LogAnalysisResult represents the result of log analysis
|
// LogAnalysisResult represents the result of log analysis
|
||||||
type LogAnalysisResult struct {
|
type LogAnalysisResult struct {
|
||||||
AnalyzerName string `json:"analyzer_name"`
|
AnalyzerName string `json:"analyzer_name"`
|
||||||
Anomalies []*LogAnomaly `json:"anomalies"`
|
Anomalies []*LogAnomaly `json:"anomalies"`
|
||||||
Patterns []*LogPattern `json:"patterns"`
|
Patterns []*LogPattern `json:"patterns"`
|
||||||
Statistics *LogStatistics `json:"statistics"`
|
Statistics *LogStatistics `json:"statistics"`
|
||||||
Recommendations []string `json:"recommendations"`
|
Recommendations []string `json:"recommendations"`
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"`
|
AnalyzedAt time.Time `json:"analyzed_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// LogAnomaly represents detected log anomaly
|
// LogAnomaly represents detected log anomaly
|
||||||
type LogAnomaly struct {
|
type LogAnomaly struct {
|
||||||
Type AnomalyType `json:"type"`
|
Type AnomalyType `json:"type"`
|
||||||
Severity AlertSeverity `json:"severity"`
|
Severity AlertSeverity `json:"severity"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
Entries []*LogEntry `json:"entries"`
|
Entries []*LogEntry `json:"entries"`
|
||||||
Confidence float64 `json:"confidence"`
|
Confidence float64 `json:"confidence"`
|
||||||
DetectedAt time.Time `json:"detected_at"`
|
DetectedAt time.Time `json:"detected_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AnomalyType represents different types of log anomalies
|
// AnomalyType represents different types of log anomalies
|
||||||
@@ -558,38 +558,38 @@ const (
|
|||||||
|
|
||||||
// LogPattern represents detected log pattern
|
// LogPattern represents detected log pattern
|
||||||
type LogPattern struct {
|
type LogPattern struct {
|
||||||
Pattern string `json:"pattern"`
|
Pattern string `json:"pattern"`
|
||||||
Frequency int `json:"frequency"`
|
Frequency int `json:"frequency"`
|
||||||
LastSeen time.Time `json:"last_seen"`
|
LastSeen time.Time `json:"last_seen"`
|
||||||
Sources []string `json:"sources"`
|
Sources []string `json:"sources"`
|
||||||
Confidence float64 `json:"confidence"`
|
Confidence float64 `json:"confidence"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// LogStatistics provides log statistics
|
// LogStatistics provides log statistics
|
||||||
type LogStatistics struct {
|
type LogStatistics struct {
|
||||||
TotalEntries int64 `json:"total_entries"`
|
TotalEntries int64 `json:"total_entries"`
|
||||||
EntriesByLevel map[LogLevel]int64 `json:"entries_by_level"`
|
EntriesByLevel map[LogLevel]int64 `json:"entries_by_level"`
|
||||||
EntriesBySource map[string]int64 `json:"entries_by_source"`
|
EntriesBySource map[string]int64 `json:"entries_by_source"`
|
||||||
ErrorRate float64 `json:"error_rate"`
|
ErrorRate float64 `json:"error_rate"`
|
||||||
AverageRate float64 `json:"average_rate"`
|
AverageRate float64 `json:"average_rate"`
|
||||||
TimeRange [2]time.Time `json:"time_range"`
|
TimeRange [2]time.Time `json:"time_range"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// LogRetentionPolicy defines log retention rules
|
// LogRetentionPolicy defines log retention rules
|
||||||
type LogRetentionPolicy struct {
|
type LogRetentionPolicy struct {
|
||||||
RetentionPeriod time.Duration `json:"retention_period"`
|
RetentionPeriod time.Duration `json:"retention_period"`
|
||||||
MaxEntries int64 `json:"max_entries"`
|
MaxEntries int64 `json:"max_entries"`
|
||||||
CompressionAge time.Duration `json:"compression_age"`
|
CompressionAge time.Duration `json:"compression_age"`
|
||||||
ArchiveAge time.Duration `json:"archive_age"`
|
ArchiveAge time.Duration `json:"archive_age"`
|
||||||
Rules []*RetentionRule `json:"rules"`
|
Rules []*RetentionRule `json:"rules"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RetentionRule defines specific retention rules
|
// RetentionRule defines specific retention rules
|
||||||
type RetentionRule struct {
|
type RetentionRule struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Condition string `json:"condition"` // Query expression
|
Condition string `json:"condition"` // Query expression
|
||||||
Retention time.Duration `json:"retention"`
|
Retention time.Duration `json:"retention"`
|
||||||
Action RetentionAction `json:"action"`
|
Action RetentionAction `json:"action"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RetentionAction represents retention actions
|
// RetentionAction represents retention actions
|
||||||
@@ -603,47 +603,47 @@ const (
|
|||||||
|
|
||||||
// TraceManager manages distributed tracing
|
// TraceManager manages distributed tracing
|
||||||
type TraceManager struct {
|
type TraceManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
traces map[string]*Trace
|
traces map[string]*Trace
|
||||||
spans map[string]*Span
|
spans map[string]*Span
|
||||||
samplers []TraceSampler
|
samplers []TraceSampler
|
||||||
exporters []TraceExporter
|
exporters []TraceExporter
|
||||||
running bool
|
running bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// Trace represents a distributed trace
|
// Trace represents a distributed trace
|
||||||
type Trace struct {
|
type Trace struct {
|
||||||
TraceID string `json:"trace_id"`
|
TraceID string `json:"trace_id"`
|
||||||
Spans []*Span `json:"spans"`
|
Spans []*Span `json:"spans"`
|
||||||
Duration time.Duration `json:"duration"`
|
Duration time.Duration `json:"duration"`
|
||||||
StartTime time.Time `json:"start_time"`
|
StartTime time.Time `json:"start_time"`
|
||||||
EndTime time.Time `json:"end_time"`
|
EndTime time.Time `json:"end_time"`
|
||||||
Status TraceStatus `json:"status"`
|
Status TraceStatus `json:"status"`
|
||||||
Tags map[string]string `json:"tags"`
|
Tags map[string]string `json:"tags"`
|
||||||
Operations []string `json:"operations"`
|
Operations []string `json:"operations"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// Span represents a single span in a trace
|
// Span represents a single span in a trace
|
||||||
type Span struct {
|
type Span struct {
|
||||||
SpanID string `json:"span_id"`
|
SpanID string `json:"span_id"`
|
||||||
TraceID string `json:"trace_id"`
|
TraceID string `json:"trace_id"`
|
||||||
ParentID string `json:"parent_id,omitempty"`
|
ParentID string `json:"parent_id,omitempty"`
|
||||||
Operation string `json:"operation"`
|
Operation string `json:"operation"`
|
||||||
Service string `json:"service"`
|
Service string `json:"service"`
|
||||||
StartTime time.Time `json:"start_time"`
|
StartTime time.Time `json:"start_time"`
|
||||||
EndTime time.Time `json:"end_time"`
|
EndTime time.Time `json:"end_time"`
|
||||||
Duration time.Duration `json:"duration"`
|
Duration time.Duration `json:"duration"`
|
||||||
Status SpanStatus `json:"status"`
|
Status SpanStatus `json:"status"`
|
||||||
Tags map[string]string `json:"tags"`
|
Tags map[string]string `json:"tags"`
|
||||||
Logs []*SpanLog `json:"logs"`
|
Logs []*SpanLog `json:"logs"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// TraceStatus represents the status of a trace
|
// TraceStatus represents the status of a trace
|
||||||
type TraceStatus string
|
type TraceStatus string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
TraceStatusOK TraceStatus = "ok"
|
TraceStatusOK TraceStatus = "ok"
|
||||||
TraceStatusError TraceStatus = "error"
|
TraceStatusError TraceStatus = "error"
|
||||||
TraceStatusTimeout TraceStatus = "timeout"
|
TraceStatusTimeout TraceStatus = "timeout"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -675,18 +675,18 @@ type TraceExporter interface {
|
|||||||
|
|
||||||
// ErrorEvent represents a system error event
|
// ErrorEvent represents a system error event
|
||||||
type ErrorEvent struct {
|
type ErrorEvent struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Level LogLevel `json:"level"`
|
Level LogLevel `json:"level"`
|
||||||
Component string `json:"component"`
|
Component string `json:"component"`
|
||||||
Message string `json:"message"`
|
Message string `json:"message"`
|
||||||
Error string `json:"error"`
|
Error string `json:"error"`
|
||||||
Context map[string]interface{} `json:"context"`
|
Context map[string]interface{} `json:"context"`
|
||||||
TraceID string `json:"trace_id,omitempty"`
|
TraceID string `json:"trace_id,omitempty"`
|
||||||
SpanID string `json:"span_id,omitempty"`
|
SpanID string `json:"span_id,omitempty"`
|
||||||
Count int `json:"count"`
|
Count int `json:"count"`
|
||||||
FirstSeen time.Time `json:"first_seen"`
|
FirstSeen time.Time `json:"first_seen"`
|
||||||
LastSeen time.Time `json:"last_seen"`
|
LastSeen time.Time `json:"last_seen"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewMonitoringSystem creates a comprehensive monitoring system
|
// NewMonitoringSystem creates a comprehensive monitoring system
|
||||||
@@ -722,7 +722,7 @@ func (ms *MonitoringSystem) initializeComponents() error {
|
|||||||
aggregatedStats: &AggregatedStatistics{
|
aggregatedStats: &AggregatedStatistics{
|
||||||
LastUpdated: time.Now(),
|
LastUpdated: time.Now(),
|
||||||
},
|
},
|
||||||
exporters: []MetricsExporter{},
|
exporters: []MetricsExporter{},
|
||||||
lastCollection: time.Now(),
|
lastCollection: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1134,13 +1134,13 @@ func (ms *MonitoringSystem) createDefaultDashboards() {
|
|||||||
|
|
||||||
func (ms *MonitoringSystem) severityWeight(severity AlertSeverity) int {
|
func (ms *MonitoringSystem) severityWeight(severity AlertSeverity) int {
|
||||||
switch severity {
|
switch severity {
|
||||||
case SeverityCritical:
|
case AlertSeverityCritical:
|
||||||
return 4
|
return 4
|
||||||
case SeverityError:
|
case AlertSeverityError:
|
||||||
return 3
|
return 3
|
||||||
case SeverityWarning:
|
case AlertSeverityWarning:
|
||||||
return 2
|
return 2
|
||||||
case SeverityInfo:
|
case AlertSeverityInfo:
|
||||||
return 1
|
return 1
|
||||||
default:
|
default:
|
||||||
return 0
|
return 0
|
||||||
|
|||||||
@@ -9,74 +9,74 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/dht"
|
|
||||||
"chorus/pkg/config"
|
"chorus/pkg/config"
|
||||||
|
"chorus/pkg/dht"
|
||||||
"github.com/libp2p/go-libp2p/core/peer"
|
"github.com/libp2p/go-libp2p/core/peer"
|
||||||
)
|
)
|
||||||
|
|
||||||
// NetworkManagerImpl implements NetworkManager interface for network topology and partition management
|
// NetworkManagerImpl implements NetworkManager interface for network topology and partition management
|
||||||
type NetworkManagerImpl struct {
|
type NetworkManagerImpl struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
dht *dht.DHT
|
dht *dht.DHT
|
||||||
config *config.Config
|
config *config.Config
|
||||||
topology *NetworkTopology
|
topology *NetworkTopology
|
||||||
partitionInfo *PartitionInfo
|
partitionInfo *PartitionInfo
|
||||||
connectivity *ConnectivityMatrix
|
connectivity *ConnectivityMatrix
|
||||||
stats *NetworkStatistics
|
stats *NetworkStatistics
|
||||||
healthChecker *NetworkHealthChecker
|
healthChecker *NetworkHealthChecker
|
||||||
partitionDetector *PartitionDetector
|
partitionDetector *PartitionDetector
|
||||||
recoveryManager *RecoveryManager
|
recoveryManager *RecoveryManager
|
||||||
|
|
||||||
// Configuration
|
// Configuration
|
||||||
healthCheckInterval time.Duration
|
healthCheckInterval time.Duration
|
||||||
partitionCheckInterval time.Duration
|
partitionCheckInterval time.Duration
|
||||||
connectivityTimeout time.Duration
|
connectivityTimeout time.Duration
|
||||||
maxPartitionDuration time.Duration
|
maxPartitionDuration time.Duration
|
||||||
|
|
||||||
// State
|
// State
|
||||||
lastTopologyUpdate time.Time
|
lastTopologyUpdate time.Time
|
||||||
lastPartitionCheck time.Time
|
lastPartitionCheck time.Time
|
||||||
running bool
|
running bool
|
||||||
recoveryInProgress bool
|
recoveryInProgress bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConnectivityMatrix tracks connectivity between all nodes
|
// ConnectivityMatrix tracks connectivity between all nodes
|
||||||
type ConnectivityMatrix struct {
|
type ConnectivityMatrix struct {
|
||||||
Matrix map[string]map[string]*ConnectionInfo `json:"matrix"`
|
Matrix map[string]map[string]*ConnectionInfo `json:"matrix"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConnectionInfo represents connectivity information between two nodes
|
// ConnectionInfo represents connectivity information between two nodes
|
||||||
type ConnectionInfo struct {
|
type ConnectionInfo struct {
|
||||||
Connected bool `json:"connected"`
|
Connected bool `json:"connected"`
|
||||||
Latency time.Duration `json:"latency"`
|
Latency time.Duration `json:"latency"`
|
||||||
PacketLoss float64 `json:"packet_loss"`
|
PacketLoss float64 `json:"packet_loss"`
|
||||||
Bandwidth int64 `json:"bandwidth"`
|
Bandwidth int64 `json:"bandwidth"`
|
||||||
LastChecked time.Time `json:"last_checked"`
|
LastChecked time.Time `json:"last_checked"`
|
||||||
ErrorCount int `json:"error_count"`
|
ErrorCount int `json:"error_count"`
|
||||||
LastError string `json:"last_error,omitempty"`
|
LastError string `json:"last_error,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NetworkHealthChecker performs network health checks
|
// NetworkHealthChecker performs network health checks
|
||||||
type NetworkHealthChecker struct {
|
type NetworkHealthChecker struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
nodeHealth map[string]*NodeHealth
|
nodeHealth map[string]*NodeHealth
|
||||||
healthHistory map[string][]*HealthCheckResult
|
healthHistory map[string][]*NetworkHealthCheckResult
|
||||||
alertThresholds *NetworkAlertThresholds
|
alertThresholds *NetworkAlertThresholds
|
||||||
}
|
}
|
||||||
|
|
||||||
// NodeHealth represents health status of a network node
|
// NodeHealth represents health status of a network node
|
||||||
type NodeHealth struct {
|
type NodeHealth struct {
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
Status NodeStatus `json:"status"`
|
Status NodeStatus `json:"status"`
|
||||||
HealthScore float64 `json:"health_score"`
|
HealthScore float64 `json:"health_score"`
|
||||||
LastSeen time.Time `json:"last_seen"`
|
LastSeen time.Time `json:"last_seen"`
|
||||||
ResponseTime time.Duration `json:"response_time"`
|
ResponseTime time.Duration `json:"response_time"`
|
||||||
PacketLossRate float64 `json:"packet_loss_rate"`
|
PacketLossRate float64 `json:"packet_loss_rate"`
|
||||||
BandwidthUtil float64 `json:"bandwidth_utilization"`
|
BandwidthUtil float64 `json:"bandwidth_utilization"`
|
||||||
Uptime time.Duration `json:"uptime"`
|
Uptime time.Duration `json:"uptime"`
|
||||||
ErrorRate float64 `json:"error_rate"`
|
ErrorRate float64 `json:"error_rate"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NodeStatus represents the status of a network node
|
// NodeStatus represents the status of a network node
|
||||||
@@ -91,23 +91,23 @@ const (
|
|||||||
)
|
)
|
||||||
|
|
||||||
// HealthCheckResult represents the result of a health check
|
// HealthCheckResult represents the result of a health check
|
||||||
type HealthCheckResult struct {
|
type NetworkHealthCheckResult struct {
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Success bool `json:"success"`
|
Success bool `json:"success"`
|
||||||
ResponseTime time.Duration `json:"response_time"`
|
ResponseTime time.Duration `json:"response_time"`
|
||||||
ErrorMessage string `json:"error_message,omitempty"`
|
ErrorMessage string `json:"error_message,omitempty"`
|
||||||
NetworkMetrics *NetworkMetrics `json:"network_metrics"`
|
NetworkMetrics *NetworkMetrics `json:"network_metrics"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NetworkAlertThresholds defines thresholds for network alerts
|
// NetworkAlertThresholds defines thresholds for network alerts
|
||||||
type NetworkAlertThresholds struct {
|
type NetworkAlertThresholds struct {
|
||||||
LatencyWarning time.Duration `json:"latency_warning"`
|
LatencyWarning time.Duration `json:"latency_warning"`
|
||||||
LatencyCritical time.Duration `json:"latency_critical"`
|
LatencyCritical time.Duration `json:"latency_critical"`
|
||||||
PacketLossWarning float64 `json:"packet_loss_warning"`
|
PacketLossWarning float64 `json:"packet_loss_warning"`
|
||||||
PacketLossCritical float64 `json:"packet_loss_critical"`
|
PacketLossCritical float64 `json:"packet_loss_critical"`
|
||||||
HealthScoreWarning float64 `json:"health_score_warning"`
|
HealthScoreWarning float64 `json:"health_score_warning"`
|
||||||
HealthScoreCritical float64 `json:"health_score_critical"`
|
HealthScoreCritical float64 `json:"health_score_critical"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// PartitionDetector detects network partitions
|
// PartitionDetector detects network partitions
|
||||||
@@ -131,14 +131,14 @@ const (
|
|||||||
|
|
||||||
// PartitionEvent represents a partition detection event
|
// PartitionEvent represents a partition detection event
|
||||||
type PartitionEvent struct {
|
type PartitionEvent struct {
|
||||||
EventID string `json:"event_id"`
|
EventID string `json:"event_id"`
|
||||||
DetectedAt time.Time `json:"detected_at"`
|
DetectedAt time.Time `json:"detected_at"`
|
||||||
Algorithm PartitionDetectionAlgorithm `json:"algorithm"`
|
Algorithm PartitionDetectionAlgorithm `json:"algorithm"`
|
||||||
PartitionedNodes []string `json:"partitioned_nodes"`
|
PartitionedNodes []string `json:"partitioned_nodes"`
|
||||||
Confidence float64 `json:"confidence"`
|
Confidence float64 `json:"confidence"`
|
||||||
Duration time.Duration `json:"duration"`
|
Duration time.Duration `json:"duration"`
|
||||||
Resolved bool `json:"resolved"`
|
Resolved bool `json:"resolved"`
|
||||||
ResolvedAt *time.Time `json:"resolved_at,omitempty"`
|
ResolvedAt *time.Time `json:"resolved_at,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// FalsePositiveFilter helps reduce false partition detections
|
// FalsePositiveFilter helps reduce false partition detections
|
||||||
@@ -159,10 +159,10 @@ type PartitionDetectorConfig struct {
|
|||||||
|
|
||||||
// RecoveryManager manages network partition recovery
|
// RecoveryManager manages network partition recovery
|
||||||
type RecoveryManager struct {
|
type RecoveryManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
recoveryStrategies map[RecoveryStrategy]*RecoveryStrategyConfig
|
recoveryStrategies map[RecoveryStrategy]*RecoveryStrategyConfig
|
||||||
activeRecoveries map[string]*RecoveryOperation
|
activeRecoveries map[string]*RecoveryOperation
|
||||||
recoveryHistory []*RecoveryResult
|
recoveryHistory []*RecoveryResult
|
||||||
}
|
}
|
||||||
|
|
||||||
// RecoveryStrategy represents different recovery strategies
|
// RecoveryStrategy represents different recovery strategies
|
||||||
@@ -177,25 +177,25 @@ const (
|
|||||||
|
|
||||||
// RecoveryStrategyConfig configures a recovery strategy
|
// RecoveryStrategyConfig configures a recovery strategy
|
||||||
type RecoveryStrategyConfig struct {
|
type RecoveryStrategyConfig struct {
|
||||||
Strategy RecoveryStrategy `json:"strategy"`
|
Strategy RecoveryStrategy `json:"strategy"`
|
||||||
Timeout time.Duration `json:"timeout"`
|
Timeout time.Duration `json:"timeout"`
|
||||||
RetryAttempts int `json:"retry_attempts"`
|
RetryAttempts int `json:"retry_attempts"`
|
||||||
RetryInterval time.Duration `json:"retry_interval"`
|
RetryInterval time.Duration `json:"retry_interval"`
|
||||||
RequireConsensus bool `json:"require_consensus"`
|
RequireConsensus bool `json:"require_consensus"`
|
||||||
ForcedThreshold time.Duration `json:"forced_threshold"`
|
ForcedThreshold time.Duration `json:"forced_threshold"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RecoveryOperation represents an active recovery operation
|
// RecoveryOperation represents an active recovery operation
|
||||||
type RecoveryOperation struct {
|
type RecoveryOperation struct {
|
||||||
OperationID string `json:"operation_id"`
|
OperationID string `json:"operation_id"`
|
||||||
Strategy RecoveryStrategy `json:"strategy"`
|
Strategy RecoveryStrategy `json:"strategy"`
|
||||||
StartedAt time.Time `json:"started_at"`
|
StartedAt time.Time `json:"started_at"`
|
||||||
TargetNodes []string `json:"target_nodes"`
|
TargetNodes []string `json:"target_nodes"`
|
||||||
Status RecoveryStatus `json:"status"`
|
Status RecoveryStatus `json:"status"`
|
||||||
Progress float64 `json:"progress"`
|
Progress float64 `json:"progress"`
|
||||||
CurrentPhase RecoveryPhase `json:"current_phase"`
|
CurrentPhase RecoveryPhase `json:"current_phase"`
|
||||||
Errors []string `json:"errors"`
|
Errors []string `json:"errors"`
|
||||||
LastUpdate time.Time `json:"last_update"`
|
LastUpdate time.Time `json:"last_update"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RecoveryStatus represents the status of a recovery operation
|
// RecoveryStatus represents the status of a recovery operation
|
||||||
@@ -213,12 +213,12 @@ const (
|
|||||||
type RecoveryPhase string
|
type RecoveryPhase string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
RecoveryPhaseAssessment RecoveryPhase = "assessment"
|
RecoveryPhaseAssessment RecoveryPhase = "assessment"
|
||||||
RecoveryPhasePreparation RecoveryPhase = "preparation"
|
RecoveryPhasePreparation RecoveryPhase = "preparation"
|
||||||
RecoveryPhaseReconnection RecoveryPhase = "reconnection"
|
RecoveryPhaseReconnection RecoveryPhase = "reconnection"
|
||||||
RecoveryPhaseSynchronization RecoveryPhase = "synchronization"
|
RecoveryPhaseSynchronization RecoveryPhase = "synchronization"
|
||||||
RecoveryPhaseValidation RecoveryPhase = "validation"
|
RecoveryPhaseValidation RecoveryPhase = "validation"
|
||||||
RecoveryPhaseCompletion RecoveryPhase = "completion"
|
RecoveryPhaseCompletion RecoveryPhase = "completion"
|
||||||
)
|
)
|
||||||
|
|
||||||
// NewNetworkManagerImpl creates a new network manager implementation
|
// NewNetworkManagerImpl creates a new network manager implementation
|
||||||
@@ -231,13 +231,13 @@ func NewNetworkManagerImpl(dht *dht.DHT, config *config.Config) (*NetworkManager
|
|||||||
}
|
}
|
||||||
|
|
||||||
nm := &NetworkManagerImpl{
|
nm := &NetworkManagerImpl{
|
||||||
dht: dht,
|
dht: dht,
|
||||||
config: config,
|
config: config,
|
||||||
healthCheckInterval: 30 * time.Second,
|
healthCheckInterval: 30 * time.Second,
|
||||||
partitionCheckInterval: 60 * time.Second,
|
partitionCheckInterval: 60 * time.Second,
|
||||||
connectivityTimeout: 10 * time.Second,
|
connectivityTimeout: 10 * time.Second,
|
||||||
maxPartitionDuration: 10 * time.Minute,
|
maxPartitionDuration: 10 * time.Minute,
|
||||||
connectivity: &ConnectivityMatrix{Matrix: make(map[string]map[string]*ConnectionInfo)},
|
connectivity: &ConnectivityMatrix{Matrix: make(map[string]map[string]*ConnectionInfo)},
|
||||||
stats: &NetworkStatistics{
|
stats: &NetworkStatistics{
|
||||||
LastUpdated: time.Now(),
|
LastUpdated: time.Now(),
|
||||||
},
|
},
|
||||||
@@ -255,33 +255,33 @@ func NewNetworkManagerImpl(dht *dht.DHT, config *config.Config) (*NetworkManager
|
|||||||
func (nm *NetworkManagerImpl) initializeComponents() error {
|
func (nm *NetworkManagerImpl) initializeComponents() error {
|
||||||
// Initialize topology
|
// Initialize topology
|
||||||
nm.topology = &NetworkTopology{
|
nm.topology = &NetworkTopology{
|
||||||
TotalNodes: 0,
|
TotalNodes: 0,
|
||||||
Connections: make(map[string][]string),
|
Connections: make(map[string][]string),
|
||||||
Regions: make(map[string][]string),
|
Regions: make(map[string][]string),
|
||||||
AvailabilityZones: make(map[string][]string),
|
AvailabilityZones: make(map[string][]string),
|
||||||
UpdatedAt: time.Now(),
|
UpdatedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Initialize partition info
|
// Initialize partition info
|
||||||
nm.partitionInfo = &PartitionInfo{
|
nm.partitionInfo = &PartitionInfo{
|
||||||
PartitionDetected: false,
|
PartitionDetected: false,
|
||||||
PartitionCount: 1,
|
PartitionCount: 1,
|
||||||
IsolatedNodes: []string{},
|
IsolatedNodes: []string{},
|
||||||
ConnectivityMatrix: make(map[string]map[string]bool),
|
ConnectivityMatrix: make(map[string]map[string]bool),
|
||||||
DetectedAt: time.Now(),
|
DetectedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Initialize health checker
|
// Initialize health checker
|
||||||
nm.healthChecker = &NetworkHealthChecker{
|
nm.healthChecker = &NetworkHealthChecker{
|
||||||
nodeHealth: make(map[string]*NodeHealth),
|
nodeHealth: make(map[string]*NodeHealth),
|
||||||
healthHistory: make(map[string][]*HealthCheckResult),
|
healthHistory: make(map[string][]*NetworkHealthCheckResult),
|
||||||
alertThresholds: &NetworkAlertThresholds{
|
alertThresholds: &NetworkAlertThresholds{
|
||||||
LatencyWarning: 500 * time.Millisecond,
|
LatencyWarning: 500 * time.Millisecond,
|
||||||
LatencyCritical: 2 * time.Second,
|
LatencyCritical: 2 * time.Second,
|
||||||
PacketLossWarning: 0.05, // 5%
|
PacketLossWarning: 0.05, // 5%
|
||||||
PacketLossCritical: 0.15, // 15%
|
PacketLossCritical: 0.15, // 15%
|
||||||
HealthScoreWarning: 0.7,
|
HealthScoreWarning: 0.7,
|
||||||
HealthScoreCritical: 0.4,
|
HealthScoreCritical: 0.4,
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -307,20 +307,20 @@ func (nm *NetworkManagerImpl) initializeComponents() error {
|
|||||||
nm.recoveryManager = &RecoveryManager{
|
nm.recoveryManager = &RecoveryManager{
|
||||||
recoveryStrategies: map[RecoveryStrategy]*RecoveryStrategyConfig{
|
recoveryStrategies: map[RecoveryStrategy]*RecoveryStrategyConfig{
|
||||||
RecoveryStrategyAutomatic: {
|
RecoveryStrategyAutomatic: {
|
||||||
Strategy: RecoveryStrategyAutomatic,
|
Strategy: RecoveryStrategyAutomatic,
|
||||||
Timeout: 5 * time.Minute,
|
Timeout: 5 * time.Minute,
|
||||||
RetryAttempts: 3,
|
RetryAttempts: 3,
|
||||||
RetryInterval: 30 * time.Second,
|
RetryInterval: 30 * time.Second,
|
||||||
RequireConsensus: false,
|
RequireConsensus: false,
|
||||||
ForcedThreshold: 10 * time.Minute,
|
ForcedThreshold: 10 * time.Minute,
|
||||||
},
|
},
|
||||||
RecoveryStrategyGraceful: {
|
RecoveryStrategyGraceful: {
|
||||||
Strategy: RecoveryStrategyGraceful,
|
Strategy: RecoveryStrategyGraceful,
|
||||||
Timeout: 10 * time.Minute,
|
Timeout: 10 * time.Minute,
|
||||||
RetryAttempts: 5,
|
RetryAttempts: 5,
|
||||||
RetryInterval: 60 * time.Second,
|
RetryInterval: 60 * time.Second,
|
||||||
RequireConsensus: true,
|
RequireConsensus: true,
|
||||||
ForcedThreshold: 20 * time.Minute,
|
ForcedThreshold: 20 * time.Minute,
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
activeRecoveries: make(map[string]*RecoveryOperation),
|
activeRecoveries: make(map[string]*RecoveryOperation),
|
||||||
@@ -677,7 +677,7 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
|
|||||||
|
|
||||||
// Store health check history
|
// Store health check history
|
||||||
if _, exists := nm.healthChecker.healthHistory[peer.String()]; !exists {
|
if _, exists := nm.healthChecker.healthHistory[peer.String()]; !exists {
|
||||||
nm.healthChecker.healthHistory[peer.String()] = []*HealthCheckResult{}
|
nm.healthChecker.healthHistory[peer.String()] = []*NetworkHealthCheckResult{}
|
||||||
}
|
}
|
||||||
nm.healthChecker.healthHistory[peer.String()] = append(
|
nm.healthChecker.healthHistory[peer.String()] = append(
|
||||||
nm.healthChecker.healthHistory[peer.String()],
|
nm.healthChecker.healthHistory[peer.String()],
|
||||||
@@ -907,7 +907,7 @@ func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID s
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *HealthCheckResult {
|
func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *NetworkHealthCheckResult {
|
||||||
start := time.Now()
|
start := time.Now()
|
||||||
|
|
||||||
// In a real implementation, this would perform actual health checks
|
// In a real implementation, this would perform actual health checks
|
||||||
@@ -950,12 +950,12 @@ func (nm *NetworkManagerImpl) testConnection(ctx context.Context, peerID string)
|
|||||||
}
|
}
|
||||||
|
|
||||||
return &ConnectionInfo{
|
return &ConnectionInfo{
|
||||||
Connected: connected,
|
Connected: connected,
|
||||||
Latency: latency,
|
Latency: latency,
|
||||||
PacketLoss: 0.0,
|
PacketLoss: 0.0,
|
||||||
Bandwidth: 1000000, // 1 Mbps placeholder
|
Bandwidth: 1000000, // 1 Mbps placeholder
|
||||||
LastChecked: time.Now(),
|
LastChecked: time.Now(),
|
||||||
ErrorCount: 0,
|
ErrorCount: 0,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1024,14 +1024,14 @@ func (nm *NetworkManagerImpl) calculateOverallNetworkHealth() float64 {
|
|||||||
return float64(nm.stats.ConnectedNodes) / float64(nm.stats.TotalNodes)
|
return float64(nm.stats.ConnectedNodes) / float64(nm.stats.TotalNodes)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (nm *NetworkManagerImpl) determineNodeStatus(result *HealthCheckResult) NodeStatus {
|
func (nm *NetworkManagerImpl) determineNodeStatus(result *NetworkHealthCheckResult) NodeStatus {
|
||||||
if result.Success {
|
if result.Success {
|
||||||
return NodeStatusHealthy
|
return NodeStatusHealthy
|
||||||
}
|
}
|
||||||
return NodeStatusUnreachable
|
return NodeStatusUnreachable
|
||||||
}
|
}
|
||||||
|
|
||||||
func (nm *NetworkManagerImpl) calculateHealthScore(result *HealthCheckResult) float64 {
|
func (nm *NetworkManagerImpl) calculateHealthScore(result *NetworkHealthCheckResult) float64 {
|
||||||
if result.Success {
|
if result.Success {
|
||||||
return 1.0
|
return 1.0
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -7,39 +7,39 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/dht"
|
|
||||||
"chorus/pkg/config"
|
"chorus/pkg/config"
|
||||||
|
"chorus/pkg/dht"
|
||||||
"chorus/pkg/ucxl"
|
"chorus/pkg/ucxl"
|
||||||
"github.com/libp2p/go-libp2p/core/peer"
|
"github.com/libp2p/go-libp2p/core/peer"
|
||||||
)
|
)
|
||||||
|
|
||||||
// ReplicationManagerImpl implements ReplicationManager interface
|
// ReplicationManagerImpl implements ReplicationManager interface
|
||||||
type ReplicationManagerImpl struct {
|
type ReplicationManagerImpl struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
dht *dht.DHT
|
dht *dht.DHT
|
||||||
config *config.Config
|
config *config.Config
|
||||||
replicationMap map[string]*ReplicationStatus
|
replicationMap map[string]*ReplicationStatus
|
||||||
repairQueue chan *RepairRequest
|
repairQueue chan *RepairRequest
|
||||||
rebalanceQueue chan *RebalanceRequest
|
rebalanceQueue chan *RebalanceRequest
|
||||||
consistentHash ConsistentHashing
|
consistentHash ConsistentHashing
|
||||||
policy *ReplicationPolicy
|
policy *ReplicationPolicy
|
||||||
stats *ReplicationStatistics
|
stats *ReplicationStatistics
|
||||||
running bool
|
running bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// RepairRequest represents a repair request
|
// RepairRequest represents a repair request
|
||||||
type RepairRequest struct {
|
type RepairRequest struct {
|
||||||
Address ucxl.Address
|
Address ucxl.Address
|
||||||
RequestedBy string
|
RequestedBy string
|
||||||
Priority Priority
|
Priority Priority
|
||||||
RequestTime time.Time
|
RequestTime time.Time
|
||||||
}
|
}
|
||||||
|
|
||||||
// RebalanceRequest represents a rebalance request
|
// RebalanceRequest represents a rebalance request
|
||||||
type RebalanceRequest struct {
|
type RebalanceRequest struct {
|
||||||
Reason string
|
Reason string
|
||||||
RequestedBy string
|
RequestedBy string
|
||||||
RequestTime time.Time
|
RequestTime time.Time
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewReplicationManagerImpl creates a new replication manager implementation
|
// NewReplicationManagerImpl creates a new replication manager implementation
|
||||||
@@ -220,10 +220,10 @@ func (rm *ReplicationManagerImpl) BalanceReplicas(ctx context.Context) (*Rebalan
|
|||||||
start := time.Now()
|
start := time.Now()
|
||||||
|
|
||||||
result := &RebalanceResult{
|
result := &RebalanceResult{
|
||||||
RebalanceTime: 0,
|
RebalanceTime: 0,
|
||||||
RebalanceSuccessful: false,
|
RebalanceSuccessful: false,
|
||||||
Errors: []string{},
|
Errors: []string{},
|
||||||
RebalancedAt: time.Now(),
|
RebalancedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Get current cluster topology
|
// Get current cluster topology
|
||||||
@@ -462,7 +462,7 @@ func (rm *ReplicationManagerImpl) discoverReplicas(ctx context.Context, address
|
|||||||
// For now, we'll simulate some replicas
|
// For now, we'll simulate some replicas
|
||||||
peers := rm.dht.GetConnectedPeers()
|
peers := rm.dht.GetConnectedPeers()
|
||||||
if len(peers) > 0 {
|
if len(peers) > 0 {
|
||||||
status.CurrentReplicas = min(len(peers), rm.policy.DefaultFactor)
|
status.CurrentReplicas = minInt(len(peers), rm.policy.DefaultFactor)
|
||||||
status.HealthyReplicas = status.CurrentReplicas
|
status.HealthyReplicas = status.CurrentReplicas
|
||||||
|
|
||||||
for i, peer := range peers {
|
for i, peer := range peers {
|
||||||
@@ -630,15 +630,15 @@ func (rm *ReplicationManagerImpl) isNodeOverloaded(nodeID string) bool {
|
|||||||
|
|
||||||
// RebalanceMove represents a replica move operation
|
// RebalanceMove represents a replica move operation
|
||||||
type RebalanceMove struct {
|
type RebalanceMove struct {
|
||||||
Address ucxl.Address `json:"address"`
|
Address ucxl.Address `json:"address"`
|
||||||
FromNode string `json:"from_node"`
|
FromNode string `json:"from_node"`
|
||||||
ToNode string `json:"to_node"`
|
ToNode string `json:"to_node"`
|
||||||
Priority Priority `json:"priority"`
|
Priority Priority `json:"priority"`
|
||||||
Reason string `json:"reason"`
|
Reason string `json:"reason"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// Utility functions
|
// Utility functions
|
||||||
func min(a, b int) int {
|
func minInt(a, b int) int {
|
||||||
if a < b {
|
if a < b {
|
||||||
return a
|
return a
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -20,21 +20,21 @@ import (
|
|||||||
|
|
||||||
// SecurityManager handles all security aspects of the distributed system
|
// SecurityManager handles all security aspects of the distributed system
|
||||||
type SecurityManager struct {
|
type SecurityManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
config *config.Config
|
config *config.Config
|
||||||
tlsConfig *TLSConfig
|
tlsConfig *TLSConfig
|
||||||
authManager *AuthenticationManager
|
authManager *AuthenticationManager
|
||||||
authzManager *AuthorizationManager
|
authzManager *AuthorizationManager
|
||||||
auditLogger *SecurityAuditLogger
|
auditLogger *SecurityAuditLogger
|
||||||
nodeAuth *NodeAuthentication
|
nodeAuth *NodeAuthentication
|
||||||
encryption *DistributionEncryption
|
encryption *DistributionEncryption
|
||||||
certificateAuth *CertificateAuthority
|
certificateAuth *CertificateAuthority
|
||||||
|
|
||||||
// Security state
|
// Security state
|
||||||
trustedNodes map[string]*TrustedNode
|
trustedNodes map[string]*TrustedNode
|
||||||
activeSessions map[string]*SecuritySession
|
activeSessions map[string]*SecuritySession
|
||||||
securityPolicies map[string]*SecurityPolicy
|
securityPolicies map[string]*SecurityPolicy
|
||||||
threatDetector *ThreatDetector
|
threatDetector *ThreatDetector
|
||||||
|
|
||||||
// Configuration
|
// Configuration
|
||||||
tlsEnabled bool
|
tlsEnabled bool
|
||||||
@@ -45,28 +45,28 @@ type SecurityManager struct {
|
|||||||
|
|
||||||
// TLSConfig manages TLS configuration for secure communications
|
// TLSConfig manages TLS configuration for secure communications
|
||||||
type TLSConfig struct {
|
type TLSConfig struct {
|
||||||
ServerConfig *tls.Config
|
ServerConfig *tls.Config
|
||||||
ClientConfig *tls.Config
|
ClientConfig *tls.Config
|
||||||
CertificatePath string
|
CertificatePath string
|
||||||
PrivateKeyPath string
|
PrivateKeyPath string
|
||||||
CAPath string
|
CAPath string
|
||||||
MinTLSVersion uint16
|
MinTLSVersion uint16
|
||||||
CipherSuites []uint16
|
CipherSuites []uint16
|
||||||
CurvePreferences []tls.CurveID
|
CurvePreferences []tls.CurveID
|
||||||
ClientAuth tls.ClientAuthType
|
ClientAuth tls.ClientAuthType
|
||||||
VerifyConnection func(tls.ConnectionState) error
|
VerifyConnection func(tls.ConnectionState) error
|
||||||
}
|
}
|
||||||
|
|
||||||
// AuthenticationManager handles node and user authentication
|
// AuthenticationManager handles node and user authentication
|
||||||
type AuthenticationManager struct {
|
type AuthenticationManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
providers map[string]AuthProvider
|
providers map[string]AuthProvider
|
||||||
tokenValidator TokenValidator
|
tokenValidator TokenValidator
|
||||||
sessionManager *SessionManager
|
sessionManager *SessionManager
|
||||||
multiFactorAuth *MultiFactorAuth
|
multiFactorAuth *MultiFactorAuth
|
||||||
credentialStore *CredentialStore
|
credentialStore *CredentialStore
|
||||||
loginAttempts map[string]*LoginAttempts
|
loginAttempts map[string]*LoginAttempts
|
||||||
authPolicies map[string]*AuthPolicy
|
authPolicies map[string]*AuthPolicy
|
||||||
}
|
}
|
||||||
|
|
||||||
// AuthProvider interface for different authentication methods
|
// AuthProvider interface for different authentication methods
|
||||||
@@ -80,14 +80,14 @@ type AuthProvider interface {
|
|||||||
|
|
||||||
// Credentials represents authentication credentials
|
// Credentials represents authentication credentials
|
||||||
type Credentials struct {
|
type Credentials struct {
|
||||||
Type CredentialType `json:"type"`
|
Type CredentialType `json:"type"`
|
||||||
Username string `json:"username,omitempty"`
|
Username string `json:"username,omitempty"`
|
||||||
Password string `json:"password,omitempty"`
|
Password string `json:"password,omitempty"`
|
||||||
Token string `json:"token,omitempty"`
|
Token string `json:"token,omitempty"`
|
||||||
Certificate *x509.Certificate `json:"certificate,omitempty"`
|
Certificate *x509.Certificate `json:"certificate,omitempty"`
|
||||||
Signature []byte `json:"signature,omitempty"`
|
Signature []byte `json:"signature,omitempty"`
|
||||||
Challenge string `json:"challenge,omitempty"`
|
Challenge string `json:"challenge,omitempty"`
|
||||||
Metadata map[string]interface{} `json:"metadata,omitempty"`
|
Metadata map[string]interface{} `json:"metadata,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// CredentialType represents different types of credentials
|
// CredentialType represents different types of credentials
|
||||||
@@ -104,15 +104,15 @@ const (
|
|||||||
|
|
||||||
// AuthResult represents the result of authentication
|
// AuthResult represents the result of authentication
|
||||||
type AuthResult struct {
|
type AuthResult struct {
|
||||||
Success bool `json:"success"`
|
Success bool `json:"success"`
|
||||||
UserID string `json:"user_id"`
|
UserID string `json:"user_id"`
|
||||||
Roles []string `json:"roles"`
|
Roles []string `json:"roles"`
|
||||||
Permissions []string `json:"permissions"`
|
Permissions []string `json:"permissions"`
|
||||||
TokenPair *TokenPair `json:"token_pair"`
|
TokenPair *TokenPair `json:"token_pair"`
|
||||||
SessionID string `json:"session_id"`
|
SessionID string `json:"session_id"`
|
||||||
ExpiresAt time.Time `json:"expires_at"`
|
ExpiresAt time.Time `json:"expires_at"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
FailureReason string `json:"failure_reason,omitempty"`
|
FailureReason string `json:"failure_reason,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// TokenPair represents access and refresh tokens
|
// TokenPair represents access and refresh tokens
|
||||||
@@ -140,13 +140,13 @@ type TokenClaims struct {
|
|||||||
|
|
||||||
// AuthorizationManager handles authorization and access control
|
// AuthorizationManager handles authorization and access control
|
||||||
type AuthorizationManager struct {
|
type AuthorizationManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
policyEngine PolicyEngine
|
policyEngine PolicyEngine
|
||||||
rbacManager *RBACManager
|
rbacManager *RBACManager
|
||||||
aclManager *ACLManager
|
aclManager *ACLManager
|
||||||
resourceManager *ResourceManager
|
resourceManager *ResourceManager
|
||||||
permissionCache *PermissionCache
|
permissionCache *PermissionCache
|
||||||
authzPolicies map[string]*AuthorizationPolicy
|
authzPolicies map[string]*AuthorizationPolicy
|
||||||
}
|
}
|
||||||
|
|
||||||
// PolicyEngine interface for policy evaluation
|
// PolicyEngine interface for policy evaluation
|
||||||
@@ -168,13 +168,13 @@ type AuthorizationRequest struct {
|
|||||||
|
|
||||||
// AuthorizationResult represents the result of authorization
|
// AuthorizationResult represents the result of authorization
|
||||||
type AuthorizationResult struct {
|
type AuthorizationResult struct {
|
||||||
Decision AuthorizationDecision `json:"decision"`
|
Decision AuthorizationDecision `json:"decision"`
|
||||||
Reason string `json:"reason"`
|
Reason string `json:"reason"`
|
||||||
Policies []string `json:"applied_policies"`
|
Policies []string `json:"applied_policies"`
|
||||||
Conditions []string `json:"conditions"`
|
Conditions []string `json:"conditions"`
|
||||||
TTL time.Duration `json:"ttl"`
|
TTL time.Duration `json:"ttl"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
EvaluationTime time.Duration `json:"evaluation_time"`
|
EvaluationTime time.Duration `json:"evaluation_time"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AuthorizationDecision represents authorization decisions
|
// AuthorizationDecision represents authorization decisions
|
||||||
@@ -188,13 +188,13 @@ const (
|
|||||||
|
|
||||||
// SecurityAuditLogger handles security event logging
|
// SecurityAuditLogger handles security event logging
|
||||||
type SecurityAuditLogger struct {
|
type SecurityAuditLogger struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
loggers []SecurityLogger
|
loggers []SecurityLogger
|
||||||
eventBuffer []*SecurityEvent
|
eventBuffer []*SecurityEvent
|
||||||
alertManager *SecurityAlertManager
|
alertManager *SecurityAlertManager
|
||||||
compliance *ComplianceManager
|
compliance *ComplianceManager
|
||||||
retention *AuditRetentionPolicy
|
retention *AuditRetentionPolicy
|
||||||
enabled bool
|
enabled bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// SecurityLogger interface for security event logging
|
// SecurityLogger interface for security event logging
|
||||||
@@ -206,22 +206,22 @@ type SecurityLogger interface {
|
|||||||
|
|
||||||
// SecurityEvent represents a security event
|
// SecurityEvent represents a security event
|
||||||
type SecurityEvent struct {
|
type SecurityEvent struct {
|
||||||
EventID string `json:"event_id"`
|
EventID string `json:"event_id"`
|
||||||
EventType SecurityEventType `json:"event_type"`
|
EventType SecurityEventType `json:"event_type"`
|
||||||
Severity SecuritySeverity `json:"severity"`
|
Severity SecuritySeverity `json:"severity"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
UserID string `json:"user_id,omitempty"`
|
UserID string `json:"user_id,omitempty"`
|
||||||
NodeID string `json:"node_id,omitempty"`
|
NodeID string `json:"node_id,omitempty"`
|
||||||
Resource string `json:"resource,omitempty"`
|
Resource string `json:"resource,omitempty"`
|
||||||
Action string `json:"action,omitempty"`
|
Action string `json:"action,omitempty"`
|
||||||
Result string `json:"result"`
|
Result string `json:"result"`
|
||||||
Message string `json:"message"`
|
Message string `json:"message"`
|
||||||
Details map[string]interface{} `json:"details"`
|
Details map[string]interface{} `json:"details"`
|
||||||
IPAddress string `json:"ip_address,omitempty"`
|
IPAddress string `json:"ip_address,omitempty"`
|
||||||
UserAgent string `json:"user_agent,omitempty"`
|
UserAgent string `json:"user_agent,omitempty"`
|
||||||
SessionID string `json:"session_id,omitempty"`
|
SessionID string `json:"session_id,omitempty"`
|
||||||
RequestID string `json:"request_id,omitempty"`
|
RequestID string `json:"request_id,omitempty"`
|
||||||
Fingerprint string `json:"fingerprint"`
|
Fingerprint string `json:"fingerprint"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// SecurityEventType represents different types of security events
|
// SecurityEventType represents different types of security events
|
||||||
@@ -242,12 +242,12 @@ const (
|
|||||||
type SecuritySeverity string
|
type SecuritySeverity string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
SeverityDebug SecuritySeverity = "debug"
|
SecuritySeverityDebug SecuritySeverity = "debug"
|
||||||
SeverityInfo SecuritySeverity = "info"
|
SecuritySeverityInfo SecuritySeverity = "info"
|
||||||
SeverityWarning SecuritySeverity = "warning"
|
SecuritySeverityWarning SecuritySeverity = "warning"
|
||||||
SeverityError SecuritySeverity = "error"
|
SecuritySeverityError SecuritySeverity = "error"
|
||||||
SeverityCritical SecuritySeverity = "critical"
|
SecuritySeverityCritical SecuritySeverity = "critical"
|
||||||
SeverityAlert SecuritySeverity = "alert"
|
SecuritySeverityAlert SecuritySeverity = "alert"
|
||||||
)
|
)
|
||||||
|
|
||||||
// NodeAuthentication handles node-to-node authentication
|
// NodeAuthentication handles node-to-node authentication
|
||||||
@@ -262,16 +262,16 @@ type NodeAuthentication struct {
|
|||||||
|
|
||||||
// TrustedNode represents a trusted node in the network
|
// TrustedNode represents a trusted node in the network
|
||||||
type TrustedNode struct {
|
type TrustedNode struct {
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
PublicKey []byte `json:"public_key"`
|
PublicKey []byte `json:"public_key"`
|
||||||
Certificate *x509.Certificate `json:"certificate"`
|
Certificate *x509.Certificate `json:"certificate"`
|
||||||
Roles []string `json:"roles"`
|
Roles []string `json:"roles"`
|
||||||
Capabilities []string `json:"capabilities"`
|
Capabilities []string `json:"capabilities"`
|
||||||
TrustLevel TrustLevel `json:"trust_level"`
|
TrustLevel TrustLevel `json:"trust_level"`
|
||||||
LastSeen time.Time `json:"last_seen"`
|
LastSeen time.Time `json:"last_seen"`
|
||||||
VerifiedAt time.Time `json:"verified_at"`
|
VerifiedAt time.Time `json:"verified_at"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
Status NodeStatus `json:"status"`
|
Status NodeStatus `json:"status"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// TrustLevel represents the trust level of a node
|
// TrustLevel represents the trust level of a node
|
||||||
@@ -287,18 +287,18 @@ const (
|
|||||||
|
|
||||||
// SecuritySession represents an active security session
|
// SecuritySession represents an active security session
|
||||||
type SecuritySession struct {
|
type SecuritySession struct {
|
||||||
SessionID string `json:"session_id"`
|
SessionID string `json:"session_id"`
|
||||||
UserID string `json:"user_id"`
|
UserID string `json:"user_id"`
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
Roles []string `json:"roles"`
|
Roles []string `json:"roles"`
|
||||||
Permissions []string `json:"permissions"`
|
Permissions []string `json:"permissions"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
ExpiresAt time.Time `json:"expires_at"`
|
ExpiresAt time.Time `json:"expires_at"`
|
||||||
LastActivity time.Time `json:"last_activity"`
|
LastActivity time.Time `json:"last_activity"`
|
||||||
IPAddress string `json:"ip_address"`
|
IPAddress string `json:"ip_address"`
|
||||||
UserAgent string `json:"user_agent"`
|
UserAgent string `json:"user_agent"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
Status SessionStatus `json:"status"`
|
Status SessionStatus `json:"status"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// SessionStatus represents session status
|
// SessionStatus represents session status
|
||||||
@@ -313,61 +313,61 @@ const (
|
|||||||
|
|
||||||
// ThreatDetector detects security threats and anomalies
|
// ThreatDetector detects security threats and anomalies
|
||||||
type ThreatDetector struct {
|
type ThreatDetector struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
detectionRules []*ThreatDetectionRule
|
detectionRules []*ThreatDetectionRule
|
||||||
behaviorAnalyzer *BehaviorAnalyzer
|
behaviorAnalyzer *BehaviorAnalyzer
|
||||||
anomalyDetector *AnomalyDetector
|
anomalyDetector *AnomalyDetector
|
||||||
threatIntelligence *ThreatIntelligence
|
threatIntelligence *ThreatIntelligence
|
||||||
activeThreats map[string]*ThreatEvent
|
activeThreats map[string]*ThreatEvent
|
||||||
mitigationStrategies map[ThreatType]*MitigationStrategy
|
mitigationStrategies map[ThreatType]*MitigationStrategy
|
||||||
}
|
}
|
||||||
|
|
||||||
// ThreatDetectionRule represents a threat detection rule
|
// ThreatDetectionRule represents a threat detection rule
|
||||||
type ThreatDetectionRule struct {
|
type ThreatDetectionRule struct {
|
||||||
RuleID string `json:"rule_id"`
|
RuleID string `json:"rule_id"`
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
ThreatType ThreatType `json:"threat_type"`
|
ThreatType ThreatType `json:"threat_type"`
|
||||||
Severity SecuritySeverity `json:"severity"`
|
Severity SecuritySeverity `json:"severity"`
|
||||||
Conditions []*ThreatCondition `json:"conditions"`
|
Conditions []*ThreatCondition `json:"conditions"`
|
||||||
Actions []*ThreatAction `json:"actions"`
|
Actions []*ThreatAction `json:"actions"`
|
||||||
Enabled bool `json:"enabled"`
|
Enabled bool `json:"enabled"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
UpdatedAt time.Time `json:"updated_at"`
|
UpdatedAt time.Time `json:"updated_at"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ThreatType represents different types of threats
|
// ThreatType represents different types of threats
|
||||||
type ThreatType string
|
type ThreatType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ThreatTypeBruteForce ThreatType = "brute_force"
|
ThreatTypeBruteForce ThreatType = "brute_force"
|
||||||
ThreatTypeUnauthorized ThreatType = "unauthorized_access"
|
ThreatTypeUnauthorized ThreatType = "unauthorized_access"
|
||||||
ThreatTypeDataExfiltration ThreatType = "data_exfiltration"
|
ThreatTypeDataExfiltration ThreatType = "data_exfiltration"
|
||||||
ThreatTypeDoS ThreatType = "denial_of_service"
|
ThreatTypeDoS ThreatType = "denial_of_service"
|
||||||
ThreatTypePrivilegeEscalation ThreatType = "privilege_escalation"
|
ThreatTypePrivilegeEscalation ThreatType = "privilege_escalation"
|
||||||
ThreatTypeAnomalous ThreatType = "anomalous_behavior"
|
ThreatTypeAnomalous ThreatType = "anomalous_behavior"
|
||||||
ThreatTypeMaliciousCode ThreatType = "malicious_code"
|
ThreatTypeMaliciousCode ThreatType = "malicious_code"
|
||||||
ThreatTypeInsiderThreat ThreatType = "insider_threat"
|
ThreatTypeInsiderThreat ThreatType = "insider_threat"
|
||||||
)
|
)
|
||||||
|
|
||||||
// CertificateAuthority manages certificate generation and validation
|
// CertificateAuthority manages certificate generation and validation
|
||||||
type CertificateAuthority struct {
|
type CertificateAuthority struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
rootCA *x509.Certificate
|
rootCA *x509.Certificate
|
||||||
rootKey interface{}
|
rootKey interface{}
|
||||||
intermediateCA *x509.Certificate
|
intermediateCA *x509.Certificate
|
||||||
intermediateKey interface{}
|
intermediateKey interface{}
|
||||||
certStore *CertificateStore
|
certStore *CertificateStore
|
||||||
crlManager *CRLManager
|
crlManager *CRLManager
|
||||||
ocspResponder *OCSPResponder
|
ocspResponder *OCSPResponder
|
||||||
}
|
}
|
||||||
|
|
||||||
// DistributionEncryption handles encryption for distributed communications
|
// DistributionEncryption handles encryption for distributed communications
|
||||||
type DistributionEncryption struct {
|
type DistributionEncryption struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
keyManager *DistributionKeyManager
|
keyManager *DistributionKeyManager
|
||||||
encryptionSuite *EncryptionSuite
|
encryptionSuite *EncryptionSuite
|
||||||
keyRotationPolicy *KeyRotationPolicy
|
keyRotationPolicy *KeyRotationPolicy
|
||||||
encryptionMetrics *EncryptionMetrics
|
encryptionMetrics *EncryptionMetrics
|
||||||
}
|
}
|
||||||
@@ -379,13 +379,13 @@ func NewSecurityManager(config *config.Config) (*SecurityManager, error) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
sm := &SecurityManager{
|
sm := &SecurityManager{
|
||||||
config: config,
|
config: config,
|
||||||
trustedNodes: make(map[string]*TrustedNode),
|
trustedNodes: make(map[string]*TrustedNode),
|
||||||
activeSessions: make(map[string]*SecuritySession),
|
activeSessions: make(map[string]*SecuritySession),
|
||||||
securityPolicies: make(map[string]*SecurityPolicy),
|
securityPolicies: make(map[string]*SecurityPolicy),
|
||||||
tlsEnabled: true,
|
tlsEnabled: true,
|
||||||
mutualTLSEnabled: true,
|
mutualTLSEnabled: true,
|
||||||
auditingEnabled: true,
|
auditingEnabled: true,
|
||||||
encryptionEnabled: true,
|
encryptionEnabled: true,
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -508,12 +508,12 @@ func (sm *SecurityManager) Authenticate(ctx context.Context, credentials *Creden
|
|||||||
// Log authentication attempt
|
// Log authentication attempt
|
||||||
sm.logSecurityEvent(ctx, &SecurityEvent{
|
sm.logSecurityEvent(ctx, &SecurityEvent{
|
||||||
EventType: EventTypeAuthentication,
|
EventType: EventTypeAuthentication,
|
||||||
Severity: SeverityInfo,
|
Severity: SecuritySeverityInfo,
|
||||||
Action: "authenticate",
|
Action: "authenticate",
|
||||||
Message: "Authentication attempt",
|
Message: "Authentication attempt",
|
||||||
Details: map[string]interface{}{
|
Details: map[string]interface{}{
|
||||||
"credential_type": credentials.Type,
|
"credential_type": credentials.Type,
|
||||||
"username": credentials.Username,
|
"username": credentials.Username,
|
||||||
},
|
},
|
||||||
})
|
})
|
||||||
|
|
||||||
@@ -525,7 +525,7 @@ func (sm *SecurityManager) Authorize(ctx context.Context, request *Authorization
|
|||||||
// Log authorization attempt
|
// Log authorization attempt
|
||||||
sm.logSecurityEvent(ctx, &SecurityEvent{
|
sm.logSecurityEvent(ctx, &SecurityEvent{
|
||||||
EventType: EventTypeAuthorization,
|
EventType: EventTypeAuthorization,
|
||||||
Severity: SeverityInfo,
|
Severity: SecuritySeverityInfo,
|
||||||
UserID: request.UserID,
|
UserID: request.UserID,
|
||||||
Resource: request.Resource,
|
Resource: request.Resource,
|
||||||
Action: request.Action,
|
Action: request.Action,
|
||||||
@@ -554,7 +554,7 @@ func (sm *SecurityManager) ValidateNodeIdentity(ctx context.Context, nodeID stri
|
|||||||
// Log successful validation
|
// Log successful validation
|
||||||
sm.logSecurityEvent(ctx, &SecurityEvent{
|
sm.logSecurityEvent(ctx, &SecurityEvent{
|
||||||
EventType: EventTypeAuthentication,
|
EventType: EventTypeAuthentication,
|
||||||
Severity: SeverityInfo,
|
Severity: SecuritySeverityInfo,
|
||||||
NodeID: nodeID,
|
NodeID: nodeID,
|
||||||
Action: "validate_node_identity",
|
Action: "validate_node_identity",
|
||||||
Result: "success",
|
Result: "success",
|
||||||
@@ -609,7 +609,7 @@ func (sm *SecurityManager) AddTrustedNode(ctx context.Context, node *TrustedNode
|
|||||||
// Log node addition
|
// Log node addition
|
||||||
sm.logSecurityEvent(ctx, &SecurityEvent{
|
sm.logSecurityEvent(ctx, &SecurityEvent{
|
||||||
EventType: EventTypeConfiguration,
|
EventType: EventTypeConfiguration,
|
||||||
Severity: SeverityInfo,
|
Severity: SecuritySeverityInfo,
|
||||||
NodeID: node.NodeID,
|
NodeID: node.NodeID,
|
||||||
Action: "add_trusted_node",
|
Action: "add_trusted_node",
|
||||||
Result: "success",
|
Result: "success",
|
||||||
@@ -660,11 +660,11 @@ func (sm *SecurityManager) generateSelfSignedCertificate() ([]byte, []byte, erro
|
|||||||
StreetAddress: []string{""},
|
StreetAddress: []string{""},
|
||||||
PostalCode: []string{""},
|
PostalCode: []string{""},
|
||||||
},
|
},
|
||||||
NotBefore: time.Now(),
|
NotBefore: time.Now(),
|
||||||
NotAfter: time.Now().Add(365 * 24 * time.Hour),
|
NotAfter: time.Now().Add(365 * 24 * time.Hour),
|
||||||
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
|
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
|
||||||
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
|
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
|
||||||
IPAddresses: []net.IP{net.IPv4(127, 0, 0, 1), net.IPv6loopback},
|
IPAddresses: []net.IP{net.IPv4(127, 0, 0, 1), net.IPv6loopback},
|
||||||
}
|
}
|
||||||
|
|
||||||
// This is a simplified implementation
|
// This is a simplified implementation
|
||||||
@@ -765,8 +765,8 @@ func NewDistributionEncryption(config *config.Config) (*DistributionEncryption,
|
|||||||
|
|
||||||
func NewThreatDetector(config *config.Config) (*ThreatDetector, error) {
|
func NewThreatDetector(config *config.Config) (*ThreatDetector, error) {
|
||||||
return &ThreatDetector{
|
return &ThreatDetector{
|
||||||
detectionRules: []*ThreatDetectionRule{},
|
detectionRules: []*ThreatDetectionRule{},
|
||||||
activeThreats: make(map[string]*ThreatEvent),
|
activeThreats: make(map[string]*ThreatEvent),
|
||||||
mitigationStrategies: make(map[ThreatType]*MitigationStrategy),
|
mitigationStrategies: make(map[ThreatType]*MitigationStrategy),
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -11,8 +11,8 @@ import (
|
|||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// DefaultDirectoryAnalyzer provides comprehensive directory structure analysis
|
// DefaultDirectoryAnalyzer provides comprehensive directory structure analysis
|
||||||
@@ -268,11 +268,11 @@ func NewRelationshipAnalyzer() *RelationshipAnalyzer {
|
|||||||
// AnalyzeStructure analyzes directory organization patterns
|
// AnalyzeStructure analyzes directory organization patterns
|
||||||
func (da *DefaultDirectoryAnalyzer) AnalyzeStructure(ctx context.Context, dirPath string) (*DirectoryStructure, error) {
|
func (da *DefaultDirectoryAnalyzer) AnalyzeStructure(ctx context.Context, dirPath string) (*DirectoryStructure, error) {
|
||||||
structure := &DirectoryStructure{
|
structure := &DirectoryStructure{
|
||||||
Path: dirPath,
|
Path: dirPath,
|
||||||
FileTypes: make(map[string]int),
|
FileTypes: make(map[string]int),
|
||||||
Languages: make(map[string]int),
|
Languages: make(map[string]int),
|
||||||
Dependencies: []string{},
|
Dependencies: []string{},
|
||||||
AnalyzedAt: time.Now(),
|
AnalyzedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Walk the directory tree
|
// Walk the directory tree
|
||||||
@@ -340,9 +340,9 @@ func (da *DefaultDirectoryAnalyzer) DetectConventions(ctx context.Context, dirPa
|
|||||||
OrganizationalPatterns: []*OrganizationalPattern{},
|
OrganizationalPatterns: []*OrganizationalPattern{},
|
||||||
Consistency: 0.0,
|
Consistency: 0.0,
|
||||||
Violations: []*Violation{},
|
Violations: []*Violation{},
|
||||||
Recommendations: []*Recommendation{},
|
Recommendations: []*BasicRecommendation{},
|
||||||
AppliedStandards: []string{},
|
AppliedStandards: []string{},
|
||||||
AnalyzedAt: time.Now(),
|
AnalyzedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Collect all files and directories
|
// Collect all files and directories
|
||||||
@@ -385,39 +385,39 @@ func (da *DefaultDirectoryAnalyzer) IdentifyPurpose(ctx context.Context, structu
|
|||||||
purpose string
|
purpose string
|
||||||
confidence float64
|
confidence float64
|
||||||
}{
|
}{
|
||||||
"src": {"Source code repository", 0.9},
|
"src": {"Source code repository", 0.9},
|
||||||
"source": {"Source code repository", 0.9},
|
"source": {"Source code repository", 0.9},
|
||||||
"lib": {"Library code", 0.8},
|
"lib": {"Library code", 0.8},
|
||||||
"libs": {"Library code", 0.8},
|
"libs": {"Library code", 0.8},
|
||||||
"vendor": {"Third-party dependencies", 0.9},
|
"vendor": {"Third-party dependencies", 0.9},
|
||||||
"node_modules": {"Node.js dependencies", 0.95},
|
"node_modules": {"Node.js dependencies", 0.95},
|
||||||
"build": {"Build artifacts", 0.9},
|
"build": {"Build artifacts", 0.9},
|
||||||
"dist": {"Distribution files", 0.9},
|
"dist": {"Distribution files", 0.9},
|
||||||
"bin": {"Binary executables", 0.9},
|
"bin": {"Binary executables", 0.9},
|
||||||
"test": {"Test code", 0.9},
|
"test": {"Test code", 0.9},
|
||||||
"tests": {"Test code", 0.9},
|
"tests": {"Test code", 0.9},
|
||||||
"docs": {"Documentation", 0.9},
|
"docs": {"Documentation", 0.9},
|
||||||
"doc": {"Documentation", 0.9},
|
"doc": {"Documentation", 0.9},
|
||||||
"config": {"Configuration files", 0.9},
|
"config": {"Configuration files", 0.9},
|
||||||
"configs": {"Configuration files", 0.9},
|
"configs": {"Configuration files", 0.9},
|
||||||
"scripts": {"Utility scripts", 0.8},
|
"scripts": {"Utility scripts", 0.8},
|
||||||
"tools": {"Development tools", 0.8},
|
"tools": {"Development tools", 0.8},
|
||||||
"assets": {"Static assets", 0.8},
|
"assets": {"Static assets", 0.8},
|
||||||
"public": {"Public web assets", 0.8},
|
"public": {"Public web assets", 0.8},
|
||||||
"static": {"Static files", 0.8},
|
"static": {"Static files", 0.8},
|
||||||
"templates": {"Template files", 0.8},
|
"templates": {"Template files", 0.8},
|
||||||
"migrations": {"Database migrations", 0.9},
|
"migrations": {"Database migrations", 0.9},
|
||||||
"models": {"Data models", 0.8},
|
"models": {"Data models", 0.8},
|
||||||
"views": {"View layer", 0.8},
|
"views": {"View layer", 0.8},
|
||||||
"controllers": {"Controller layer", 0.8},
|
"controllers": {"Controller layer", 0.8},
|
||||||
"services": {"Service layer", 0.8},
|
"services": {"Service layer", 0.8},
|
||||||
"components": {"Reusable components", 0.8},
|
"components": {"Reusable components", 0.8},
|
||||||
"modules": {"Modular components", 0.8},
|
"modules": {"Modular components", 0.8},
|
||||||
"packages": {"Package organization", 0.7},
|
"packages": {"Package organization", 0.7},
|
||||||
"internal": {"Internal implementation", 0.8},
|
"internal": {"Internal implementation", 0.8},
|
||||||
"cmd": {"Command-line applications", 0.9},
|
"cmd": {"Command-line applications", 0.9},
|
||||||
"api": {"API implementation", 0.8},
|
"api": {"API implementation", 0.8},
|
||||||
"pkg": {"Go package directory", 0.8},
|
"pkg": {"Go package directory", 0.8},
|
||||||
}
|
}
|
||||||
|
|
||||||
if p, exists := purposes[dirName]; exists {
|
if p, exists := purposes[dirName]; exists {
|
||||||
@@ -459,12 +459,12 @@ func (da *DefaultDirectoryAnalyzer) IdentifyPurpose(ctx context.Context, structu
|
|||||||
// AnalyzeRelationships analyzes relationships between subdirectories
|
// AnalyzeRelationships analyzes relationships between subdirectories
|
||||||
func (da *DefaultDirectoryAnalyzer) AnalyzeRelationships(ctx context.Context, dirPath string) (*RelationshipAnalysis, error) {
|
func (da *DefaultDirectoryAnalyzer) AnalyzeRelationships(ctx context.Context, dirPath string) (*RelationshipAnalysis, error) {
|
||||||
analysis := &RelationshipAnalysis{
|
analysis := &RelationshipAnalysis{
|
||||||
Dependencies: []*DirectoryDependency{},
|
Dependencies: []*DirectoryDependency{},
|
||||||
Relationships: []*DirectoryRelation{},
|
Relationships: []*DirectoryRelation{},
|
||||||
CouplingMetrics: &CouplingMetrics{},
|
CouplingMetrics: &CouplingMetrics{},
|
||||||
ModularityScore: 0.0,
|
ModularityScore: 0.0,
|
||||||
ArchitecturalStyle: "unknown",
|
ArchitecturalStyle: "unknown",
|
||||||
AnalyzedAt: time.Now(),
|
AnalyzedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Find subdirectories
|
// Find subdirectories
|
||||||
@@ -568,20 +568,20 @@ func (da *DefaultDirectoryAnalyzer) GenerateHierarchy(ctx context.Context, rootP
|
|||||||
|
|
||||||
func (da *DefaultDirectoryAnalyzer) mapExtensionToLanguage(ext string) string {
|
func (da *DefaultDirectoryAnalyzer) mapExtensionToLanguage(ext string) string {
|
||||||
langMap := map[string]string{
|
langMap := map[string]string{
|
||||||
".go": "go",
|
".go": "go",
|
||||||
".py": "python",
|
".py": "python",
|
||||||
".js": "javascript",
|
".js": "javascript",
|
||||||
".jsx": "javascript",
|
".jsx": "javascript",
|
||||||
".ts": "typescript",
|
".ts": "typescript",
|
||||||
".tsx": "typescript",
|
".tsx": "typescript",
|
||||||
".java": "java",
|
".java": "java",
|
||||||
".c": "c",
|
".c": "c",
|
||||||
".cpp": "cpp",
|
".cpp": "cpp",
|
||||||
".cs": "csharp",
|
".cs": "csharp",
|
||||||
".php": "php",
|
".php": "php",
|
||||||
".rb": "ruby",
|
".rb": "ruby",
|
||||||
".rs": "rust",
|
".rs": "rust",
|
||||||
".kt": "kotlin",
|
".kt": "kotlin",
|
||||||
".swift": "swift",
|
".swift": "swift",
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -996,7 +996,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeNamingPattern(paths []string, scope s
|
|||||||
Type: "naming",
|
Type: "naming",
|
||||||
Description: fmt.Sprintf("Naming convention for %ss", scope),
|
Description: fmt.Sprintf("Naming convention for %ss", scope),
|
||||||
Confidence: da.calculateNamingConsistency(names, convention),
|
Confidence: da.calculateNamingConsistency(names, convention),
|
||||||
Examples: names[:min(5, len(names))],
|
Examples: names[:minInt(5, len(names))],
|
||||||
},
|
},
|
||||||
Convention: convention,
|
Convention: convention,
|
||||||
Scope: scope,
|
Scope: scope,
|
||||||
@@ -1100,12 +1100,12 @@ func (da *DefaultDirectoryAnalyzer) detectNamingStyle(name string) string {
|
|||||||
return "unknown"
|
return "unknown"
|
||||||
}
|
}
|
||||||
|
|
||||||
func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*Recommendation {
|
func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*BasicRecommendation {
|
||||||
recommendations := []*Recommendation{}
|
recommendations := []*BasicRecommendation{}
|
||||||
|
|
||||||
// Recommend consistency improvements
|
// Recommend consistency improvements
|
||||||
if analysis.Consistency < 0.8 {
|
if analysis.Consistency < 0.8 {
|
||||||
recommendations = append(recommendations, &Recommendation{
|
recommendations = append(recommendations, &BasicRecommendation{
|
||||||
Type: "consistency",
|
Type: "consistency",
|
||||||
Title: "Improve naming consistency",
|
Title: "Improve naming consistency",
|
||||||
Description: "Consider standardizing naming conventions across the project",
|
Description: "Consider standardizing naming conventions across the project",
|
||||||
@@ -1118,7 +1118,7 @@ func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *
|
|||||||
|
|
||||||
// Recommend architectural improvements
|
// Recommend architectural improvements
|
||||||
if len(analysis.OrganizationalPatterns) == 0 {
|
if len(analysis.OrganizationalPatterns) == 0 {
|
||||||
recommendations = append(recommendations, &Recommendation{
|
recommendations = append(recommendations, &BasicRecommendation{
|
||||||
Type: "architecture",
|
Type: "architecture",
|
||||||
Title: "Consider architectural patterns",
|
Title: "Consider architectural patterns",
|
||||||
Description: "Project structure could benefit from established architectural patterns",
|
Description: "Project structure could benefit from established architectural patterns",
|
||||||
@@ -1225,12 +1225,11 @@ func (da *DefaultDirectoryAnalyzer) extractImports(content string, patterns []*r
|
|||||||
|
|
||||||
func (da *DefaultDirectoryAnalyzer) isLocalDependency(importPath, fromDir, toDir string) bool {
|
func (da *DefaultDirectoryAnalyzer) isLocalDependency(importPath, fromDir, toDir string) bool {
|
||||||
// Simple heuristic: check if import path references the target directory
|
// Simple heuristic: check if import path references the target directory
|
||||||
fromBase := filepath.Base(fromDir)
|
|
||||||
toBase := filepath.Base(toDir)
|
toBase := filepath.Base(toDir)
|
||||||
|
|
||||||
return strings.Contains(importPath, toBase) ||
|
return strings.Contains(importPath, toBase) ||
|
||||||
strings.Contains(importPath, "../"+toBase) ||
|
strings.Contains(importPath, "../"+toBase) ||
|
||||||
strings.Contains(importPath, "./"+toBase)
|
strings.Contains(importPath, "./"+toBase)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (da *DefaultDirectoryAnalyzer) analyzeDirectoryRelationships(subdirs []string, dependencies []*DirectoryDependency) []*DirectoryRelation {
|
func (da *DefaultDirectoryAnalyzer) analyzeDirectoryRelationships(subdirs []string, dependencies []*DirectoryDependency) []*DirectoryRelation {
|
||||||
@@ -1399,7 +1398,7 @@ func (da *DefaultDirectoryAnalyzer) walkDirectoryHierarchy(rootPath string, curr
|
|||||||
|
|
||||||
func (da *DefaultDirectoryAnalyzer) generateUCXLAddress(path string) (*ucxl.Address, error) {
|
func (da *DefaultDirectoryAnalyzer) generateUCXLAddress(path string) (*ucxl.Address, error) {
|
||||||
cleanPath := filepath.Clean(path)
|
cleanPath := filepath.Clean(path)
|
||||||
addr, err := ucxl.ParseAddress(fmt.Sprintf("dir://%s", cleanPath))
|
addr, err := ucxl.Parse(fmt.Sprintf("dir://%s", cleanPath))
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
|
return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
|
||||||
}
|
}
|
||||||
@@ -1417,7 +1416,7 @@ func (da *DefaultDirectoryAnalyzer) generateDirectorySummary(structure *Director
|
|||||||
langs = append(langs, fmt.Sprintf("%s (%d)", lang, count))
|
langs = append(langs, fmt.Sprintf("%s (%d)", lang, count))
|
||||||
}
|
}
|
||||||
sort.Strings(langs)
|
sort.Strings(langs)
|
||||||
summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:min(3, len(langs))], ", "))
|
summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:minInt(3, len(langs))], ", "))
|
||||||
}
|
}
|
||||||
|
|
||||||
return summary
|
return summary
|
||||||
@@ -1497,7 +1496,7 @@ func (da *DefaultDirectoryAnalyzer) calculateDirectorySpecificity(structure *Dir
|
|||||||
return specificity
|
return specificity
|
||||||
}
|
}
|
||||||
|
|
||||||
func min(a, b int) int {
|
func minInt(a, b int) int {
|
||||||
if a < b {
|
if a < b {
|
||||||
return a
|
return a
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -2,9 +2,9 @@ package intelligence
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -138,26 +138,26 @@ type RAGIntegration interface {
|
|||||||
|
|
||||||
// ProjectGoal represents a high-level project objective
|
// ProjectGoal represents a high-level project objective
|
||||||
type ProjectGoal struct {
|
type ProjectGoal struct {
|
||||||
ID string `json:"id"` // Unique identifier
|
ID string `json:"id"` // Unique identifier
|
||||||
Name string `json:"name"` // Goal name
|
Name string `json:"name"` // Goal name
|
||||||
Description string `json:"description"` // Detailed description
|
Description string `json:"description"` // Detailed description
|
||||||
Keywords []string `json:"keywords"` // Associated keywords
|
Keywords []string `json:"keywords"` // Associated keywords
|
||||||
Priority int `json:"priority"` // Priority level (1=highest)
|
Priority int `json:"priority"` // Priority level (1=highest)
|
||||||
Phase string `json:"phase"` // Project phase
|
Phase string `json:"phase"` // Project phase
|
||||||
Metrics []string `json:"metrics"` // Success metrics
|
Metrics []string `json:"metrics"` // Success metrics
|
||||||
Owner string `json:"owner"` // Goal owner
|
Owner string `json:"owner"` // Goal owner
|
||||||
Deadline *time.Time `json:"deadline,omitempty"` // Target deadline
|
Deadline *time.Time `json:"deadline,omitempty"` // Target deadline
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleProfile defines context requirements for different roles
|
// RoleProfile defines context requirements for different roles
|
||||||
type RoleProfile struct {
|
type RoleProfile struct {
|
||||||
Role string `json:"role"` // Role identifier
|
Role string `json:"role"` // Role identifier
|
||||||
AccessLevel slurpContext.RoleAccessLevel `json:"access_level"` // Required access level
|
AccessLevel slurpContext.RoleAccessLevel `json:"access_level"` // Required access level
|
||||||
RelevantTags []string `json:"relevant_tags"` // Relevant context tags
|
RelevantTags []string `json:"relevant_tags"` // Relevant context tags
|
||||||
ContextScope []string `json:"context_scope"` // Scope of interest
|
ContextScope []string `json:"context_scope"` // Scope of interest
|
||||||
InsightTypes []string `json:"insight_types"` // Types of insights needed
|
InsightTypes []string `json:"insight_types"` // Types of insights needed
|
||||||
QualityThreshold float64 `json:"quality_threshold"` // Minimum quality threshold
|
QualityThreshold float64 `json:"quality_threshold"` // Minimum quality threshold
|
||||||
Preferences map[string]interface{} `json:"preferences"` // Role-specific preferences
|
Preferences map[string]interface{} `json:"preferences"` // Role-specific preferences
|
||||||
}
|
}
|
||||||
|
|
||||||
// EngineConfig represents configuration for the intelligence engine
|
// EngineConfig represents configuration for the intelligence engine
|
||||||
@@ -168,59 +168,64 @@ type EngineConfig struct {
|
|||||||
MaxFileSize int64 `json:"max_file_size"` // Maximum file size to analyze
|
MaxFileSize int64 `json:"max_file_size"` // Maximum file size to analyze
|
||||||
|
|
||||||
// RAG integration settings
|
// RAG integration settings
|
||||||
RAGEndpoint string `json:"rag_endpoint"` // RAG system endpoint
|
RAGEndpoint string `json:"rag_endpoint"` // RAG system endpoint
|
||||||
RAGTimeout time.Duration `json:"rag_timeout"` // RAG query timeout
|
RAGTimeout time.Duration `json:"rag_timeout"` // RAG query timeout
|
||||||
RAGEnabled bool `json:"rag_enabled"` // Whether RAG is enabled
|
RAGEnabled bool `json:"rag_enabled"` // Whether RAG is enabled
|
||||||
|
EnableRAG bool `json:"enable_rag"` // Legacy toggle for RAG enablement
|
||||||
|
// Feature toggles
|
||||||
|
EnableGoalAlignment bool `json:"enable_goal_alignment"`
|
||||||
|
EnablePatternDetection bool `json:"enable_pattern_detection"`
|
||||||
|
EnableRoleAware bool `json:"enable_role_aware"`
|
||||||
|
|
||||||
// Quality settings
|
// Quality settings
|
||||||
MinConfidenceThreshold float64 `json:"min_confidence_threshold"` // Minimum confidence for results
|
MinConfidenceThreshold float64 `json:"min_confidence_threshold"` // Minimum confidence for results
|
||||||
RequireValidation bool `json:"require_validation"` // Whether validation is required
|
RequireValidation bool `json:"require_validation"` // Whether validation is required
|
||||||
|
|
||||||
// Performance settings
|
// Performance settings
|
||||||
CacheEnabled bool `json:"cache_enabled"` // Whether caching is enabled
|
CacheEnabled bool `json:"cache_enabled"` // Whether caching is enabled
|
||||||
CacheTTL time.Duration `json:"cache_ttl"` // Cache TTL
|
CacheTTL time.Duration `json:"cache_ttl"` // Cache TTL
|
||||||
|
|
||||||
// Role profiles
|
// Role profiles
|
||||||
RoleProfiles map[string]*RoleProfile `json:"role_profiles"` // Role-specific profiles
|
RoleProfiles map[string]*RoleProfile `json:"role_profiles"` // Role-specific profiles
|
||||||
|
|
||||||
// Project goals
|
// Project goals
|
||||||
ProjectGoals []*ProjectGoal `json:"project_goals"` // Active project goals
|
ProjectGoals []*ProjectGoal `json:"project_goals"` // Active project goals
|
||||||
}
|
}
|
||||||
|
|
||||||
// EngineStatistics represents performance statistics for the engine
|
// EngineStatistics represents performance statistics for the engine
|
||||||
type EngineStatistics struct {
|
type EngineStatistics struct {
|
||||||
TotalAnalyses int64 `json:"total_analyses"` // Total analyses performed
|
TotalAnalyses int64 `json:"total_analyses"` // Total analyses performed
|
||||||
SuccessfulAnalyses int64 `json:"successful_analyses"` // Successful analyses
|
SuccessfulAnalyses int64 `json:"successful_analyses"` // Successful analyses
|
||||||
FailedAnalyses int64 `json:"failed_analyses"` // Failed analyses
|
FailedAnalyses int64 `json:"failed_analyses"` // Failed analyses
|
||||||
AverageAnalysisTime time.Duration `json:"average_analysis_time"` // Average analysis time
|
AverageAnalysisTime time.Duration `json:"average_analysis_time"` // Average analysis time
|
||||||
CacheHitRate float64 `json:"cache_hit_rate"` // Cache hit rate
|
CacheHitRate float64 `json:"cache_hit_rate"` // Cache hit rate
|
||||||
RAGQueriesPerformed int64 `json:"rag_queries_performed"` // RAG queries made
|
RAGQueriesPerformed int64 `json:"rag_queries_performed"` // RAG queries made
|
||||||
AverageConfidence float64 `json:"average_confidence"` // Average confidence score
|
AverageConfidence float64 `json:"average_confidence"` // Average confidence score
|
||||||
FilesAnalyzed int64 `json:"files_analyzed"` // Total files analyzed
|
FilesAnalyzed int64 `json:"files_analyzed"` // Total files analyzed
|
||||||
DirectoriesAnalyzed int64 `json:"directories_analyzed"` // Total directories analyzed
|
DirectoriesAnalyzed int64 `json:"directories_analyzed"` // Total directories analyzed
|
||||||
PatternsDetected int64 `json:"patterns_detected"` // Patterns detected
|
PatternsDetected int64 `json:"patterns_detected"` // Patterns detected
|
||||||
LastResetAt time.Time `json:"last_reset_at"` // When stats were last reset
|
LastResetAt time.Time `json:"last_reset_at"` // When stats were last reset
|
||||||
}
|
}
|
||||||
|
|
||||||
// FileAnalysis represents the result of file analysis
|
// FileAnalysis represents the result of file analysis
|
||||||
type FileAnalysis struct {
|
type FileAnalysis struct {
|
||||||
FilePath string `json:"file_path"` // Path to analyzed file
|
FilePath string `json:"file_path"` // Path to analyzed file
|
||||||
Language string `json:"language"` // Detected language
|
Language string `json:"language"` // Detected language
|
||||||
LanguageConf float64 `json:"language_conf"` // Language detection confidence
|
LanguageConf float64 `json:"language_conf"` // Language detection confidence
|
||||||
FileType string `json:"file_type"` // File type classification
|
FileType string `json:"file_type"` // File type classification
|
||||||
Size int64 `json:"size"` // File size in bytes
|
Size int64 `json:"size"` // File size in bytes
|
||||||
LineCount int `json:"line_count"` // Number of lines
|
LineCount int `json:"line_count"` // Number of lines
|
||||||
Complexity float64 `json:"complexity"` // Code complexity score
|
Complexity float64 `json:"complexity"` // Code complexity score
|
||||||
Dependencies []string `json:"dependencies"` // Identified dependencies
|
Dependencies []string `json:"dependencies"` // Identified dependencies
|
||||||
Exports []string `json:"exports"` // Exported symbols/functions
|
Exports []string `json:"exports"` // Exported symbols/functions
|
||||||
Imports []string `json:"imports"` // Import statements
|
Imports []string `json:"imports"` // Import statements
|
||||||
Functions []string `json:"functions"` // Function/method names
|
Functions []string `json:"functions"` // Function/method names
|
||||||
Classes []string `json:"classes"` // Class names
|
Classes []string `json:"classes"` // Class names
|
||||||
Variables []string `json:"variables"` // Variable names
|
Variables []string `json:"variables"` // Variable names
|
||||||
Comments []string `json:"comments"` // Extracted comments
|
Comments []string `json:"comments"` // Extracted comments
|
||||||
TODOs []string `json:"todos"` // TODO comments
|
TODOs []string `json:"todos"` // TODO comments
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
||||||
}
|
}
|
||||||
|
|
||||||
// DefaultIntelligenceEngine provides a complete implementation of the IntelligenceEngine interface
|
// DefaultIntelligenceEngine provides a complete implementation of the IntelligenceEngine interface
|
||||||
@@ -250,6 +255,10 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
|
|||||||
config = DefaultEngineConfig()
|
config = DefaultEngineConfig()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if config.EnableRAG {
|
||||||
|
config.RAGEnabled = true
|
||||||
|
}
|
||||||
|
|
||||||
// Initialize file analyzer
|
// Initialize file analyzer
|
||||||
fileAnalyzer := NewDefaultFileAnalyzer(config)
|
fileAnalyzer := NewDefaultFileAnalyzer(config)
|
||||||
|
|
||||||
@@ -273,13 +282,22 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
|
|||||||
directoryAnalyzer: dirAnalyzer,
|
directoryAnalyzer: dirAnalyzer,
|
||||||
patternDetector: patternDetector,
|
patternDetector: patternDetector,
|
||||||
ragIntegration: ragIntegration,
|
ragIntegration: ragIntegration,
|
||||||
stats: &EngineStatistics{
|
stats: &EngineStatistics{
|
||||||
LastResetAt: time.Now(),
|
LastResetAt: time.Now(),
|
||||||
},
|
},
|
||||||
cache: &sync.Map{},
|
cache: &sync.Map{},
|
||||||
projectGoals: config.ProjectGoals,
|
projectGoals: config.ProjectGoals,
|
||||||
roleProfiles: config.RoleProfiles,
|
roleProfiles: config.RoleProfiles,
|
||||||
}
|
}
|
||||||
|
|
||||||
return engine, nil
|
return engine, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// NewIntelligenceEngine is a convenience wrapper expected by legacy callers.
|
||||||
|
func NewIntelligenceEngine(config *EngineConfig) *DefaultIntelligenceEngine {
|
||||||
|
engine, err := NewDefaultIntelligenceEngine(config)
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
return engine
|
||||||
|
}
|
||||||
|
|||||||
@@ -4,14 +4,13 @@ import (
|
|||||||
"context"
|
"context"
|
||||||
"fmt"
|
"fmt"
|
||||||
"io/ioutil"
|
"io/ioutil"
|
||||||
"os"
|
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
"strings"
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// AnalyzeFile analyzes a single file and generates contextual understanding
|
// AnalyzeFile analyzes a single file and generates contextual understanding
|
||||||
@@ -136,8 +135,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeDirectory(ctx context.Context, dirPat
|
|||||||
}()
|
}()
|
||||||
|
|
||||||
// Analyze directory structure
|
// Analyze directory structure
|
||||||
structure, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath)
|
if _, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath); err != nil {
|
||||||
if err != nil {
|
|
||||||
e.updateStats("directory_analysis", time.Since(start), false)
|
e.updateStats("directory_analysis", time.Since(start), false)
|
||||||
return nil, fmt.Errorf("failed to analyze directory structure: %w", err)
|
return nil, fmt.Errorf("failed to analyze directory structure: %w", err)
|
||||||
}
|
}
|
||||||
@@ -232,7 +230,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeBatch(ctx context.Context, filePaths
|
|||||||
wg.Add(1)
|
wg.Add(1)
|
||||||
go func(path string) {
|
go func(path string) {
|
||||||
defer wg.Done()
|
defer wg.Done()
|
||||||
semaphore <- struct{}{} // Acquire semaphore
|
semaphore <- struct{}{} // Acquire semaphore
|
||||||
defer func() { <-semaphore }() // Release semaphore
|
defer func() { <-semaphore }() // Release semaphore
|
||||||
|
|
||||||
ctxNode, err := e.AnalyzeFile(ctx, path, role)
|
ctxNode, err := e.AnalyzeFile(ctx, path, role)
|
||||||
@@ -430,7 +428,7 @@ func (e *DefaultIntelligenceEngine) readFileContent(filePath string) ([]byte, er
|
|||||||
func (e *DefaultIntelligenceEngine) generateUCXLAddress(filePath string) (*ucxl.Address, error) {
|
func (e *DefaultIntelligenceEngine) generateUCXLAddress(filePath string) (*ucxl.Address, error) {
|
||||||
// Simple implementation - in reality this would be more sophisticated
|
// Simple implementation - in reality this would be more sophisticated
|
||||||
cleanPath := filepath.Clean(filePath)
|
cleanPath := filepath.Clean(filePath)
|
||||||
addr, err := ucxl.ParseAddress(fmt.Sprintf("file://%s", cleanPath))
|
addr, err := ucxl.Parse(fmt.Sprintf("file://%s", cleanPath))
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
|
return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
|
||||||
}
|
}
|
||||||
@@ -640,6 +638,10 @@ func DefaultEngineConfig() *EngineConfig {
|
|||||||
RAGEndpoint: "",
|
RAGEndpoint: "",
|
||||||
RAGTimeout: 10 * time.Second,
|
RAGTimeout: 10 * time.Second,
|
||||||
RAGEnabled: false,
|
RAGEnabled: false,
|
||||||
|
EnableRAG: false,
|
||||||
|
EnableGoalAlignment: false,
|
||||||
|
EnablePatternDetection: false,
|
||||||
|
EnableRoleAware: false,
|
||||||
MinConfidenceThreshold: 0.6,
|
MinConfidenceThreshold: 0.6,
|
||||||
RequireValidation: true,
|
RequireValidation: true,
|
||||||
CacheEnabled: true,
|
CacheEnabled: true,
|
||||||
|
|||||||
@@ -1,3 +1,6 @@
|
|||||||
|
//go:build integration
|
||||||
|
// +build integration
|
||||||
|
|
||||||
package intelligence
|
package intelligence
|
||||||
|
|
||||||
import (
|
import (
|
||||||
@@ -13,12 +16,12 @@ import (
|
|||||||
func TestIntelligenceEngine_Integration(t *testing.T) {
|
func TestIntelligenceEngine_Integration(t *testing.T) {
|
||||||
// Create test configuration
|
// Create test configuration
|
||||||
config := &EngineConfig{
|
config := &EngineConfig{
|
||||||
EnableRAG: false, // Disable RAG for testing
|
EnableRAG: false, // Disable RAG for testing
|
||||||
EnableGoalAlignment: true,
|
EnableGoalAlignment: true,
|
||||||
EnablePatternDetection: true,
|
EnablePatternDetection: true,
|
||||||
EnableRoleAware: true,
|
EnableRoleAware: true,
|
||||||
MaxConcurrentAnalysis: 2,
|
MaxConcurrentAnalysis: 2,
|
||||||
AnalysisTimeout: 30 * time.Second,
|
AnalysisTimeout: 30 * time.Second,
|
||||||
CacheTTL: 5 * time.Minute,
|
CacheTTL: 5 * time.Minute,
|
||||||
MinConfidenceThreshold: 0.5,
|
MinConfidenceThreshold: 0.5,
|
||||||
}
|
}
|
||||||
@@ -29,13 +32,13 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
|
|||||||
|
|
||||||
// Create test context node
|
// Create test context node
|
||||||
testNode := &slurpContext.ContextNode{
|
testNode := &slurpContext.ContextNode{
|
||||||
Path: "/test/example.go",
|
Path: "/test/example.go",
|
||||||
Summary: "A Go service implementing user authentication",
|
Summary: "A Go service implementing user authentication",
|
||||||
Purpose: "Handles user login and authentication for the web application",
|
Purpose: "Handles user login and authentication for the web application",
|
||||||
Technologies: []string{"go", "jwt", "bcrypt"},
|
Technologies: []string{"go", "jwt", "bcrypt"},
|
||||||
Tags: []string{"authentication", "security", "web"},
|
Tags: []string{"authentication", "security", "web"},
|
||||||
CreatedAt: time.Now(),
|
GeneratedAt: time.Now(),
|
||||||
UpdatedAt: time.Now(),
|
UpdatedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Create test project goal
|
// Create test project goal
|
||||||
@@ -47,7 +50,7 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
|
|||||||
Priority: 1,
|
Priority: 1,
|
||||||
Phase: "development",
|
Phase: "development",
|
||||||
Deadline: nil,
|
Deadline: nil,
|
||||||
CreatedAt: time.Now(),
|
GeneratedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
t.Run("AnalyzeFile", func(t *testing.T) {
|
t.Run("AnalyzeFile", func(t *testing.T) {
|
||||||
@@ -220,9 +223,9 @@ func TestPatternDetector_DetectDesignPatterns(t *testing.T) {
|
|||||||
ctx := context.Background()
|
ctx := context.Background()
|
||||||
|
|
||||||
tests := []struct {
|
tests := []struct {
|
||||||
name string
|
name string
|
||||||
filename string
|
filename string
|
||||||
content []byte
|
content []byte
|
||||||
expectedPattern string
|
expectedPattern string
|
||||||
}{
|
}{
|
||||||
{
|
{
|
||||||
@@ -652,7 +655,7 @@ func createTestContextNode(path, summary, purpose string, technologies, tags []s
|
|||||||
Purpose: purpose,
|
Purpose: purpose,
|
||||||
Technologies: technologies,
|
Technologies: technologies,
|
||||||
Tags: tags,
|
Tags: tags,
|
||||||
CreatedAt: time.Now(),
|
GeneratedAt: time.Now(),
|
||||||
UpdatedAt: time.Now(),
|
UpdatedAt: time.Now(),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -665,7 +668,7 @@ func createTestProjectGoal(id, name, description string, keywords []string, prio
|
|||||||
Keywords: keywords,
|
Keywords: keywords,
|
||||||
Priority: priority,
|
Priority: priority,
|
||||||
Phase: phase,
|
Phase: phase,
|
||||||
CreatedAt: time.Now(),
|
GeneratedAt: time.Now(),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -1,7 +1,6 @@
|
|||||||
package intelligence
|
package intelligence
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"bufio"
|
|
||||||
"bytes"
|
"bytes"
|
||||||
"context"
|
"context"
|
||||||
"fmt"
|
"fmt"
|
||||||
@@ -33,12 +32,12 @@ type CodeStructureAnalyzer struct {
|
|||||||
|
|
||||||
// LanguagePatterns contains regex patterns for different language constructs
|
// LanguagePatterns contains regex patterns for different language constructs
|
||||||
type LanguagePatterns struct {
|
type LanguagePatterns struct {
|
||||||
Functions []*regexp.Regexp
|
Functions []*regexp.Regexp
|
||||||
Classes []*regexp.Regexp
|
Classes []*regexp.Regexp
|
||||||
Variables []*regexp.Regexp
|
Variables []*regexp.Regexp
|
||||||
Imports []*regexp.Regexp
|
Imports []*regexp.Regexp
|
||||||
Comments []*regexp.Regexp
|
Comments []*regexp.Regexp
|
||||||
TODOs []*regexp.Regexp
|
TODOs []*regexp.Regexp
|
||||||
}
|
}
|
||||||
|
|
||||||
// MetadataExtractor extracts file system metadata
|
// MetadataExtractor extracts file system metadata
|
||||||
@@ -65,66 +64,66 @@ func NewLanguageDetector() *LanguageDetector {
|
|||||||
|
|
||||||
// Map file extensions to languages
|
// Map file extensions to languages
|
||||||
extensions := map[string]string{
|
extensions := map[string]string{
|
||||||
".go": "go",
|
".go": "go",
|
||||||
".py": "python",
|
".py": "python",
|
||||||
".js": "javascript",
|
".js": "javascript",
|
||||||
".jsx": "javascript",
|
".jsx": "javascript",
|
||||||
".ts": "typescript",
|
".ts": "typescript",
|
||||||
".tsx": "typescript",
|
".tsx": "typescript",
|
||||||
".java": "java",
|
".java": "java",
|
||||||
".c": "c",
|
".c": "c",
|
||||||
".cpp": "cpp",
|
".cpp": "cpp",
|
||||||
".cc": "cpp",
|
".cc": "cpp",
|
||||||
".cxx": "cpp",
|
".cxx": "cpp",
|
||||||
".h": "c",
|
".h": "c",
|
||||||
".hpp": "cpp",
|
".hpp": "cpp",
|
||||||
".cs": "csharp",
|
".cs": "csharp",
|
||||||
".php": "php",
|
".php": "php",
|
||||||
".rb": "ruby",
|
".rb": "ruby",
|
||||||
".rs": "rust",
|
".rs": "rust",
|
||||||
".kt": "kotlin",
|
".kt": "kotlin",
|
||||||
".swift": "swift",
|
".swift": "swift",
|
||||||
".m": "objective-c",
|
".m": "objective-c",
|
||||||
".mm": "objective-c",
|
".mm": "objective-c",
|
||||||
".scala": "scala",
|
".scala": "scala",
|
||||||
".clj": "clojure",
|
".clj": "clojure",
|
||||||
".hs": "haskell",
|
".hs": "haskell",
|
||||||
".ex": "elixir",
|
".ex": "elixir",
|
||||||
".exs": "elixir",
|
".exs": "elixir",
|
||||||
".erl": "erlang",
|
".erl": "erlang",
|
||||||
".lua": "lua",
|
".lua": "lua",
|
||||||
".pl": "perl",
|
".pl": "perl",
|
||||||
".r": "r",
|
".r": "r",
|
||||||
".sh": "shell",
|
".sh": "shell",
|
||||||
".bash": "shell",
|
".bash": "shell",
|
||||||
".zsh": "shell",
|
".zsh": "shell",
|
||||||
".fish": "shell",
|
".fish": "shell",
|
||||||
".sql": "sql",
|
".sql": "sql",
|
||||||
".html": "html",
|
".html": "html",
|
||||||
".htm": "html",
|
".htm": "html",
|
||||||
".css": "css",
|
".css": "css",
|
||||||
".scss": "scss",
|
".scss": "scss",
|
||||||
".sass": "sass",
|
".sass": "sass",
|
||||||
".less": "less",
|
".less": "less",
|
||||||
".xml": "xml",
|
".xml": "xml",
|
||||||
".json": "json",
|
".json": "json",
|
||||||
".yaml": "yaml",
|
".yaml": "yaml",
|
||||||
".yml": "yaml",
|
".yml": "yaml",
|
||||||
".toml": "toml",
|
".toml": "toml",
|
||||||
".ini": "ini",
|
".ini": "ini",
|
||||||
".cfg": "ini",
|
".cfg": "ini",
|
||||||
".conf": "config",
|
".conf": "config",
|
||||||
".md": "markdown",
|
".md": "markdown",
|
||||||
".rst": "rst",
|
".rst": "rst",
|
||||||
".tex": "latex",
|
".tex": "latex",
|
||||||
".proto": "protobuf",
|
".proto": "protobuf",
|
||||||
".tf": "terraform",
|
".tf": "terraform",
|
||||||
".hcl": "hcl",
|
".hcl": "hcl",
|
||||||
".dockerfile": "dockerfile",
|
".dockerfile": "dockerfile",
|
||||||
".dockerignore": "dockerignore",
|
".dockerignore": "dockerignore",
|
||||||
".gitignore": "gitignore",
|
".gitignore": "gitignore",
|
||||||
".vim": "vim",
|
".vim": "vim",
|
||||||
".emacs": "emacs",
|
".emacs": "emacs",
|
||||||
}
|
}
|
||||||
|
|
||||||
for ext, lang := range extensions {
|
for ext, lang := range extensions {
|
||||||
@@ -500,8 +499,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
|
|||||||
|
|
||||||
// Configuration files
|
// Configuration files
|
||||||
if strings.Contains(filenameUpper, "CONFIG") ||
|
if strings.Contains(filenameUpper, "CONFIG") ||
|
||||||
strings.Contains(filenameUpper, "CONF") ||
|
strings.Contains(filenameUpper, "CONF") ||
|
||||||
analysis.FileType == ".ini" || analysis.FileType == ".toml" {
|
analysis.FileType == ".ini" || analysis.FileType == ".toml" {
|
||||||
purpose = "Configuration management"
|
purpose = "Configuration management"
|
||||||
confidence = 0.9
|
confidence = 0.9
|
||||||
return purpose, confidence, nil
|
return purpose, confidence, nil
|
||||||
@@ -509,9 +508,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
|
|||||||
|
|
||||||
// Test files
|
// Test files
|
||||||
if strings.Contains(filenameUpper, "TEST") ||
|
if strings.Contains(filenameUpper, "TEST") ||
|
||||||
strings.Contains(filenameUpper, "SPEC") ||
|
strings.Contains(filenameUpper, "SPEC") ||
|
||||||
strings.HasSuffix(filenameUpper, "_TEST.GO") ||
|
strings.HasSuffix(filenameUpper, "_TEST.GO") ||
|
||||||
strings.HasSuffix(filenameUpper, "_TEST.PY") {
|
strings.HasSuffix(filenameUpper, "_TEST.PY") {
|
||||||
purpose = "Testing and quality assurance"
|
purpose = "Testing and quality assurance"
|
||||||
confidence = 0.9
|
confidence = 0.9
|
||||||
return purpose, confidence, nil
|
return purpose, confidence, nil
|
||||||
@@ -519,8 +518,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
|
|||||||
|
|
||||||
// Documentation files
|
// Documentation files
|
||||||
if analysis.FileType == ".md" || analysis.FileType == ".rst" ||
|
if analysis.FileType == ".md" || analysis.FileType == ".rst" ||
|
||||||
strings.Contains(filenameUpper, "README") ||
|
strings.Contains(filenameUpper, "README") ||
|
||||||
strings.Contains(filenameUpper, "DOC") {
|
strings.Contains(filenameUpper, "DOC") {
|
||||||
purpose = "Documentation and guidance"
|
purpose = "Documentation and guidance"
|
||||||
confidence = 0.9
|
confidence = 0.9
|
||||||
return purpose, confidence, nil
|
return purpose, confidence, nil
|
||||||
@@ -528,8 +527,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
|
|||||||
|
|
||||||
// API files
|
// API files
|
||||||
if strings.Contains(filenameUpper, "API") ||
|
if strings.Contains(filenameUpper, "API") ||
|
||||||
strings.Contains(filenameUpper, "ROUTER") ||
|
strings.Contains(filenameUpper, "ROUTER") ||
|
||||||
strings.Contains(filenameUpper, "HANDLER") {
|
strings.Contains(filenameUpper, "HANDLER") {
|
||||||
purpose = "API endpoint management"
|
purpose = "API endpoint management"
|
||||||
confidence = 0.8
|
confidence = 0.8
|
||||||
return purpose, confidence, nil
|
return purpose, confidence, nil
|
||||||
@@ -537,9 +536,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
|
|||||||
|
|
||||||
// Database files
|
// Database files
|
||||||
if strings.Contains(filenameUpper, "DB") ||
|
if strings.Contains(filenameUpper, "DB") ||
|
||||||
strings.Contains(filenameUpper, "DATABASE") ||
|
strings.Contains(filenameUpper, "DATABASE") ||
|
||||||
strings.Contains(filenameUpper, "MODEL") ||
|
strings.Contains(filenameUpper, "MODEL") ||
|
||||||
strings.Contains(filenameUpper, "SCHEMA") {
|
strings.Contains(filenameUpper, "SCHEMA") {
|
||||||
purpose = "Data storage and management"
|
purpose = "Data storage and management"
|
||||||
confidence = 0.8
|
confidence = 0.8
|
||||||
return purpose, confidence, nil
|
return purpose, confidence, nil
|
||||||
@@ -547,9 +546,9 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
|
|||||||
|
|
||||||
// UI/Frontend files
|
// UI/Frontend files
|
||||||
if analysis.Language == "javascript" || analysis.Language == "typescript" ||
|
if analysis.Language == "javascript" || analysis.Language == "typescript" ||
|
||||||
strings.Contains(filenameUpper, "COMPONENT") ||
|
strings.Contains(filenameUpper, "COMPONENT") ||
|
||||||
strings.Contains(filenameUpper, "VIEW") ||
|
strings.Contains(filenameUpper, "VIEW") ||
|
||||||
strings.Contains(filenameUpper, "UI") {
|
strings.Contains(filenameUpper, "UI") {
|
||||||
purpose = "User interface component"
|
purpose = "User interface component"
|
||||||
confidence = 0.7
|
confidence = 0.7
|
||||||
return purpose, confidence, nil
|
return purpose, confidence, nil
|
||||||
@@ -557,8 +556,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
|
|||||||
|
|
||||||
// Service/Business logic
|
// Service/Business logic
|
||||||
if strings.Contains(filenameUpper, "SERVICE") ||
|
if strings.Contains(filenameUpper, "SERVICE") ||
|
||||||
strings.Contains(filenameUpper, "BUSINESS") ||
|
strings.Contains(filenameUpper, "BUSINESS") ||
|
||||||
strings.Contains(filenameUpper, "LOGIC") {
|
strings.Contains(filenameUpper, "LOGIC") {
|
||||||
purpose = "Business logic implementation"
|
purpose = "Business logic implementation"
|
||||||
confidence = 0.7
|
confidence = 0.7
|
||||||
return purpose, confidence, nil
|
return purpose, confidence, nil
|
||||||
@@ -566,8 +565,8 @@ func (fa *DefaultFileAnalyzer) IdentifyPurpose(ctx context.Context, analysis *Fi
|
|||||||
|
|
||||||
// Utility files
|
// Utility files
|
||||||
if strings.Contains(filenameUpper, "UTIL") ||
|
if strings.Contains(filenameUpper, "UTIL") ||
|
||||||
strings.Contains(filenameUpper, "HELPER") ||
|
strings.Contains(filenameUpper, "HELPER") ||
|
||||||
strings.Contains(filenameUpper, "COMMON") {
|
strings.Contains(filenameUpper, "COMMON") {
|
||||||
purpose = "Utility and helper functions"
|
purpose = "Utility and helper functions"
|
||||||
confidence = 0.7
|
confidence = 0.7
|
||||||
return purpose, confidence, nil
|
return purpose, confidence, nil
|
||||||
@@ -646,20 +645,20 @@ func (fa *DefaultFileAnalyzer) ExtractTechnologies(ctx context.Context, analysis
|
|||||||
|
|
||||||
// Framework detection
|
// Framework detection
|
||||||
frameworks := map[string]string{
|
frameworks := map[string]string{
|
||||||
"react": "React",
|
"react": "React",
|
||||||
"vue": "Vue.js",
|
"vue": "Vue.js",
|
||||||
"angular": "Angular",
|
"angular": "Angular",
|
||||||
"express": "Express.js",
|
"express": "Express.js",
|
||||||
"django": "Django",
|
"django": "Django",
|
||||||
"flask": "Flask",
|
"flask": "Flask",
|
||||||
"spring": "Spring",
|
"spring": "Spring",
|
||||||
"gin": "Gin",
|
"gin": "Gin",
|
||||||
"echo": "Echo",
|
"echo": "Echo",
|
||||||
"fastapi": "FastAPI",
|
"fastapi": "FastAPI",
|
||||||
"bootstrap": "Bootstrap",
|
"bootstrap": "Bootstrap",
|
||||||
"tailwind": "Tailwind CSS",
|
"tailwind": "Tailwind CSS",
|
||||||
"material": "Material UI",
|
"material": "Material UI",
|
||||||
"antd": "Ant Design",
|
"antd": "Ant Design",
|
||||||
}
|
}
|
||||||
|
|
||||||
for pattern, tech := range frameworks {
|
for pattern, tech := range frameworks {
|
||||||
@@ -832,12 +831,12 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
|
|||||||
// Technology mapping based on common imports
|
// Technology mapping based on common imports
|
||||||
techMap := map[string]string{
|
techMap := map[string]string{
|
||||||
// Go
|
// Go
|
||||||
"gin-gonic/gin": "Gin",
|
"gin-gonic/gin": "Gin",
|
||||||
"labstack/echo": "Echo",
|
"labstack/echo": "Echo",
|
||||||
"gorilla/mux": "Gorilla Mux",
|
"gorilla/mux": "Gorilla Mux",
|
||||||
"gorm.io/gorm": "GORM",
|
"gorm.io/gorm": "GORM",
|
||||||
"github.com/redis": "Redis",
|
"github.com/redis": "Redis",
|
||||||
"go.mongodb.org": "MongoDB",
|
"go.mongodb.org": "MongoDB",
|
||||||
|
|
||||||
// Python
|
// Python
|
||||||
"django": "Django",
|
"django": "Django",
|
||||||
@@ -851,13 +850,13 @@ func (fa *DefaultFileAnalyzer) mapImportToTechnology(importPath, language string
|
|||||||
"torch": "PyTorch",
|
"torch": "PyTorch",
|
||||||
|
|
||||||
// JavaScript/TypeScript
|
// JavaScript/TypeScript
|
||||||
"react": "React",
|
"react": "React",
|
||||||
"vue": "Vue.js",
|
"vue": "Vue.js",
|
||||||
"angular": "Angular",
|
"angular": "Angular",
|
||||||
"express": "Express.js",
|
"express": "Express.js",
|
||||||
"axios": "Axios",
|
"axios": "Axios",
|
||||||
"lodash": "Lodash",
|
"lodash": "Lodash",
|
||||||
"moment": "Moment.js",
|
"moment": "Moment.js",
|
||||||
"socket.io": "Socket.IO",
|
"socket.io": "Socket.IO",
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -8,80 +8,79 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/crypto"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
)
|
)
|
||||||
|
|
||||||
// RoleAwareProcessor provides role-based context processing and insight generation
|
// RoleAwareProcessor provides role-based context processing and insight generation
|
||||||
type RoleAwareProcessor struct {
|
type RoleAwareProcessor struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
config *EngineConfig
|
config *EngineConfig
|
||||||
roleManager *RoleManager
|
roleManager *RoleManager
|
||||||
securityFilter *SecurityFilter
|
securityFilter *SecurityFilter
|
||||||
insightGenerator *InsightGenerator
|
insightGenerator *InsightGenerator
|
||||||
accessController *AccessController
|
accessController *AccessController
|
||||||
auditLogger *AuditLogger
|
auditLogger *AuditLogger
|
||||||
permissions *PermissionMatrix
|
permissions *PermissionMatrix
|
||||||
roleProfiles map[string]*RoleProfile
|
roleProfiles map[string]*RoleBlueprint
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleManager manages role definitions and hierarchies
|
// RoleManager manages role definitions and hierarchies
|
||||||
type RoleManager struct {
|
type RoleManager struct {
|
||||||
roles map[string]*Role
|
roles map[string]*Role
|
||||||
hierarchies map[string]*RoleHierarchy
|
hierarchies map[string]*RoleHierarchy
|
||||||
capabilities map[string]*RoleCapabilities
|
capabilities map[string]*RoleCapabilities
|
||||||
restrictions map[string]*RoleRestrictions
|
restrictions map[string]*RoleRestrictions
|
||||||
}
|
}
|
||||||
|
|
||||||
// Role represents an AI agent role with specific permissions and capabilities
|
// Role represents an AI agent role with specific permissions and capabilities
|
||||||
type Role struct {
|
type Role struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
SecurityLevel int `json:"security_level"`
|
SecurityLevel int `json:"security_level"`
|
||||||
Capabilities []string `json:"capabilities"`
|
Capabilities []string `json:"capabilities"`
|
||||||
Restrictions []string `json:"restrictions"`
|
Restrictions []string `json:"restrictions"`
|
||||||
AccessPatterns []string `json:"access_patterns"`
|
AccessPatterns []string `json:"access_patterns"`
|
||||||
ContextFilters []string `json:"context_filters"`
|
ContextFilters []string `json:"context_filters"`
|
||||||
Priority int `json:"priority"`
|
Priority int `json:"priority"`
|
||||||
ParentRoles []string `json:"parent_roles"`
|
ParentRoles []string `json:"parent_roles"`
|
||||||
ChildRoles []string `json:"child_roles"`
|
ChildRoles []string `json:"child_roles"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
UpdatedAt time.Time `json:"updated_at"`
|
UpdatedAt time.Time `json:"updated_at"`
|
||||||
IsActive bool `json:"is_active"`
|
IsActive bool `json:"is_active"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleHierarchy defines role inheritance and relationships
|
// RoleHierarchy defines role inheritance and relationships
|
||||||
type RoleHierarchy struct {
|
type RoleHierarchy struct {
|
||||||
ParentRole string `json:"parent_role"`
|
ParentRole string `json:"parent_role"`
|
||||||
ChildRoles []string `json:"child_roles"`
|
ChildRoles []string `json:"child_roles"`
|
||||||
InheritLevel int `json:"inherit_level"`
|
InheritLevel int `json:"inherit_level"`
|
||||||
OverrideRules []string `json:"override_rules"`
|
OverrideRules []string `json:"override_rules"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleCapabilities defines what a role can do
|
// RoleCapabilities defines what a role can do
|
||||||
type RoleCapabilities struct {
|
type RoleCapabilities struct {
|
||||||
RoleID string `json:"role_id"`
|
RoleID string `json:"role_id"`
|
||||||
ReadAccess []string `json:"read_access"`
|
ReadAccess []string `json:"read_access"`
|
||||||
WriteAccess []string `json:"write_access"`
|
WriteAccess []string `json:"write_access"`
|
||||||
ExecuteAccess []string `json:"execute_access"`
|
ExecuteAccess []string `json:"execute_access"`
|
||||||
AnalysisTypes []string `json:"analysis_types"`
|
AnalysisTypes []string `json:"analysis_types"`
|
||||||
InsightLevels []string `json:"insight_levels"`
|
InsightLevels []string `json:"insight_levels"`
|
||||||
SecurityScopes []string `json:"security_scopes"`
|
SecurityScopes []string `json:"security_scopes"`
|
||||||
DataClassifications []string `json:"data_classifications"`
|
DataClassifications []string `json:"data_classifications"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleRestrictions defines what a role cannot do or access
|
// RoleRestrictions defines what a role cannot do or access
|
||||||
type RoleRestrictions struct {
|
type RoleRestrictions struct {
|
||||||
RoleID string `json:"role_id"`
|
RoleID string `json:"role_id"`
|
||||||
ForbiddenPaths []string `json:"forbidden_paths"`
|
ForbiddenPaths []string `json:"forbidden_paths"`
|
||||||
ForbiddenTypes []string `json:"forbidden_types"`
|
ForbiddenTypes []string `json:"forbidden_types"`
|
||||||
ForbiddenKeywords []string `json:"forbidden_keywords"`
|
ForbiddenKeywords []string `json:"forbidden_keywords"`
|
||||||
TimeRestrictions []string `json:"time_restrictions"`
|
TimeRestrictions []string `json:"time_restrictions"`
|
||||||
RateLimit *RateLimit `json:"rate_limit"`
|
RateLimit *RateLimit `json:"rate_limit"`
|
||||||
MaxContextSize int `json:"max_context_size"`
|
MaxContextSize int `json:"max_context_size"`
|
||||||
MaxInsights int `json:"max_insights"`
|
MaxInsights int `json:"max_insights"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RateLimit defines rate limiting for role operations
|
// RateLimit defines rate limiting for role operations
|
||||||
@@ -111,9 +110,9 @@ type ContentFilter struct {
|
|||||||
|
|
||||||
// AccessMatrix defines access control rules
|
// AccessMatrix defines access control rules
|
||||||
type AccessMatrix struct {
|
type AccessMatrix struct {
|
||||||
Rules map[string]*AccessRule `json:"rules"`
|
Rules map[string]*AccessRule `json:"rules"`
|
||||||
DefaultDeny bool `json:"default_deny"`
|
DefaultDeny bool `json:"default_deny"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AccessRule defines a specific access control rule
|
// AccessRule defines a specific access control rule
|
||||||
@@ -144,14 +143,14 @@ type RoleInsightGenerator interface {
|
|||||||
|
|
||||||
// InsightTemplate defines templates for generating insights
|
// InsightTemplate defines templates for generating insights
|
||||||
type InsightTemplate struct {
|
type InsightTemplate struct {
|
||||||
TemplateID string `json:"template_id"`
|
TemplateID string `json:"template_id"`
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Template string `json:"template"`
|
Template string `json:"template"`
|
||||||
Variables []string `json:"variables"`
|
Variables []string `json:"variables"`
|
||||||
Roles []string `json:"roles"`
|
Roles []string `json:"roles"`
|
||||||
Category string `json:"category"`
|
Category string `json:"category"`
|
||||||
Priority int `json:"priority"`
|
Priority int `json:"priority"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// InsightFilter filters insights based on role permissions
|
// InsightFilter filters insights based on role permissions
|
||||||
@@ -179,39 +178,39 @@ type PermissionMatrix struct {
|
|||||||
|
|
||||||
// RolePermissions defines permissions for a specific role
|
// RolePermissions defines permissions for a specific role
|
||||||
type RolePermissions struct {
|
type RolePermissions struct {
|
||||||
RoleID string `json:"role_id"`
|
RoleID string `json:"role_id"`
|
||||||
ContextAccess *ContextAccessRights `json:"context_access"`
|
ContextAccess *ContextAccessRights `json:"context_access"`
|
||||||
AnalysisAccess *AnalysisAccessRights `json:"analysis_access"`
|
AnalysisAccess *AnalysisAccessRights `json:"analysis_access"`
|
||||||
InsightAccess *InsightAccessRights `json:"insight_access"`
|
InsightAccess *InsightAccessRights `json:"insight_access"`
|
||||||
SystemAccess *SystemAccessRights `json:"system_access"`
|
SystemAccess *SystemAccessRights `json:"system_access"`
|
||||||
CustomAccess map[string]interface{} `json:"custom_access"`
|
CustomAccess map[string]interface{} `json:"custom_access"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ContextAccessRights defines context-related access rights
|
// ContextAccessRights defines context-related access rights
|
||||||
type ContextAccessRights struct {
|
type ContextAccessRights struct {
|
||||||
ReadLevel int `json:"read_level"`
|
ReadLevel int `json:"read_level"`
|
||||||
WriteLevel int `json:"write_level"`
|
WriteLevel int `json:"write_level"`
|
||||||
AllowedTypes []string `json:"allowed_types"`
|
AllowedTypes []string `json:"allowed_types"`
|
||||||
ForbiddenTypes []string `json:"forbidden_types"`
|
ForbiddenTypes []string `json:"forbidden_types"`
|
||||||
PathRestrictions []string `json:"path_restrictions"`
|
PathRestrictions []string `json:"path_restrictions"`
|
||||||
SizeLimit int `json:"size_limit"`
|
SizeLimit int `json:"size_limit"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AnalysisAccessRights defines analysis-related access rights
|
// AnalysisAccessRights defines analysis-related access rights
|
||||||
type AnalysisAccessRights struct {
|
type AnalysisAccessRights struct {
|
||||||
AllowedAnalysisTypes []string `json:"allowed_analysis_types"`
|
AllowedAnalysisTypes []string `json:"allowed_analysis_types"`
|
||||||
MaxComplexity int `json:"max_complexity"`
|
MaxComplexity int `json:"max_complexity"`
|
||||||
TimeoutLimit time.Duration `json:"timeout_limit"`
|
TimeoutLimit time.Duration `json:"timeout_limit"`
|
||||||
ResourceLimit int `json:"resource_limit"`
|
ResourceLimit int `json:"resource_limit"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// InsightAccessRights defines insight-related access rights
|
// InsightAccessRights defines insight-related access rights
|
||||||
type InsightAccessRights struct {
|
type InsightAccessRights struct {
|
||||||
GenerationLevel int `json:"generation_level"`
|
GenerationLevel int `json:"generation_level"`
|
||||||
AccessLevel int `json:"access_level"`
|
AccessLevel int `json:"access_level"`
|
||||||
CategoryFilters []string `json:"category_filters"`
|
CategoryFilters []string `json:"category_filters"`
|
||||||
ConfidenceThreshold float64 `json:"confidence_threshold"`
|
ConfidenceThreshold float64 `json:"confidence_threshold"`
|
||||||
MaxInsights int `json:"max_insights"`
|
MaxInsights int `json:"max_insights"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// SystemAccessRights defines system-level access rights
|
// SystemAccessRights defines system-level access rights
|
||||||
@@ -254,15 +253,15 @@ type AuditLogger struct {
|
|||||||
|
|
||||||
// AuditEntry represents an audit log entry
|
// AuditEntry represents an audit log entry
|
||||||
type AuditEntry struct {
|
type AuditEntry struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
RoleID string `json:"role_id"`
|
RoleID string `json:"role_id"`
|
||||||
Action string `json:"action"`
|
Action string `json:"action"`
|
||||||
Resource string `json:"resource"`
|
Resource string `json:"resource"`
|
||||||
Result string `json:"result"` // success, denied, error
|
Result string `json:"result"` // success, denied, error
|
||||||
Details string `json:"details"`
|
Details string `json:"details"`
|
||||||
Context map[string]interface{} `json:"context"`
|
Context map[string]interface{} `json:"context"`
|
||||||
SecurityLevel int `json:"security_level"`
|
SecurityLevel int `json:"security_level"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AuditConfig defines audit logging configuration
|
// AuditConfig defines audit logging configuration
|
||||||
@@ -276,49 +275,49 @@ type AuditConfig struct {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// RoleProfile contains comprehensive role configuration
|
// RoleProfile contains comprehensive role configuration
|
||||||
type RoleProfile struct {
|
type RoleBlueprint struct {
|
||||||
Role *Role `json:"role"`
|
Role *Role `json:"role"`
|
||||||
Capabilities *RoleCapabilities `json:"capabilities"`
|
Capabilities *RoleCapabilities `json:"capabilities"`
|
||||||
Restrictions *RoleRestrictions `json:"restrictions"`
|
Restrictions *RoleRestrictions `json:"restrictions"`
|
||||||
Permissions *RolePermissions `json:"permissions"`
|
Permissions *RolePermissions `json:"permissions"`
|
||||||
InsightConfig *RoleInsightConfig `json:"insight_config"`
|
InsightConfig *RoleInsightConfig `json:"insight_config"`
|
||||||
SecurityConfig *RoleSecurityConfig `json:"security_config"`
|
SecurityConfig *RoleSecurityConfig `json:"security_config"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleInsightConfig defines insight generation configuration for a role
|
// RoleInsightConfig defines insight generation configuration for a role
|
||||||
type RoleInsightConfig struct {
|
type RoleInsightConfig struct {
|
||||||
EnabledGenerators []string `json:"enabled_generators"`
|
EnabledGenerators []string `json:"enabled_generators"`
|
||||||
MaxInsights int `json:"max_insights"`
|
MaxInsights int `json:"max_insights"`
|
||||||
ConfidenceThreshold float64 `json:"confidence_threshold"`
|
ConfidenceThreshold float64 `json:"confidence_threshold"`
|
||||||
CategoryWeights map[string]float64 `json:"category_weights"`
|
CategoryWeights map[string]float64 `json:"category_weights"`
|
||||||
CustomFilters []string `json:"custom_filters"`
|
CustomFilters []string `json:"custom_filters"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleSecurityConfig defines security configuration for a role
|
// RoleSecurityConfig defines security configuration for a role
|
||||||
type RoleSecurityConfig struct {
|
type RoleSecurityConfig struct {
|
||||||
EncryptionRequired bool `json:"encryption_required"`
|
EncryptionRequired bool `json:"encryption_required"`
|
||||||
AccessLogging bool `json:"access_logging"`
|
AccessLogging bool `json:"access_logging"`
|
||||||
RateLimit *RateLimit `json:"rate_limit"`
|
RateLimit *RateLimit `json:"rate_limit"`
|
||||||
IPWhitelist []string `json:"ip_whitelist"`
|
IPWhitelist []string `json:"ip_whitelist"`
|
||||||
RequiredClaims []string `json:"required_claims"`
|
RequiredClaims []string `json:"required_claims"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// RoleSpecificInsight represents an insight tailored to a specific role
|
// RoleSpecificInsight represents an insight tailored to a specific role
|
||||||
type RoleSpecificInsight struct {
|
type RoleSpecificInsight struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
RoleID string `json:"role_id"`
|
RoleID string `json:"role_id"`
|
||||||
Category string `json:"category"`
|
Category string `json:"category"`
|
||||||
Title string `json:"title"`
|
Title string `json:"title"`
|
||||||
Content string `json:"content"`
|
Content string `json:"content"`
|
||||||
Confidence float64 `json:"confidence"`
|
Confidence float64 `json:"confidence"`
|
||||||
Priority int `json:"priority"`
|
Priority int `json:"priority"`
|
||||||
SecurityLevel int `json:"security_level"`
|
SecurityLevel int `json:"security_level"`
|
||||||
Tags []string `json:"tags"`
|
Tags []string `json:"tags"`
|
||||||
ActionItems []string `json:"action_items"`
|
ActionItems []string `json:"action_items"`
|
||||||
References []string `json:"references"`
|
References []string `json:"references"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
GeneratedAt time.Time `json:"generated_at"`
|
GeneratedAt time.Time `json:"generated_at"`
|
||||||
ExpiresAt *time.Time `json:"expires_at,omitempty"`
|
ExpiresAt *time.Time `json:"expires_at,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewRoleAwareProcessor creates a new role-aware processor
|
// NewRoleAwareProcessor creates a new role-aware processor
|
||||||
@@ -331,7 +330,7 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
|
|||||||
accessController: NewAccessController(),
|
accessController: NewAccessController(),
|
||||||
auditLogger: NewAuditLogger(),
|
auditLogger: NewAuditLogger(),
|
||||||
permissions: NewPermissionMatrix(),
|
permissions: NewPermissionMatrix(),
|
||||||
roleProfiles: make(map[string]*RoleProfile),
|
roleProfiles: make(map[string]*RoleBlueprint),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Initialize default roles
|
// Initialize default roles
|
||||||
@@ -342,10 +341,10 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
|
|||||||
// NewRoleManager creates a role manager with default roles
|
// NewRoleManager creates a role manager with default roles
|
||||||
func NewRoleManager() *RoleManager {
|
func NewRoleManager() *RoleManager {
|
||||||
rm := &RoleManager{
|
rm := &RoleManager{
|
||||||
roles: make(map[string]*Role),
|
roles: make(map[string]*Role),
|
||||||
hierarchies: make(map[string]*RoleHierarchy),
|
hierarchies: make(map[string]*RoleHierarchy),
|
||||||
capabilities: make(map[string]*RoleCapabilities),
|
capabilities: make(map[string]*RoleCapabilities),
|
||||||
restrictions: make(map[string]*RoleRestrictions),
|
restrictions: make(map[string]*RoleRestrictions),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Initialize with default roles
|
// Initialize with default roles
|
||||||
@@ -383,8 +382,11 @@ func (rap *RoleAwareProcessor) ProcessContextForRole(ctx context.Context, node *
|
|||||||
|
|
||||||
// Apply insights to node
|
// Apply insights to node
|
||||||
if len(insights) > 0 {
|
if len(insights) > 0 {
|
||||||
filteredNode.RoleSpecificInsights = insights
|
if filteredNode.Metadata == nil {
|
||||||
filteredNode.ProcessedForRole = roleID
|
filteredNode.Metadata = make(map[string]interface{})
|
||||||
|
}
|
||||||
|
filteredNode.Metadata["role_specific_insights"] = insights
|
||||||
|
filteredNode.Metadata["processed_for_role"] = roleID
|
||||||
}
|
}
|
||||||
|
|
||||||
// Log successful processing
|
// Log successful processing
|
||||||
@@ -448,69 +450,69 @@ func (rap *RoleAwareProcessor) GetRoleCapabilities(roleID string) (*RoleCapabili
|
|||||||
func (rap *RoleAwareProcessor) initializeDefaultRoles() {
|
func (rap *RoleAwareProcessor) initializeDefaultRoles() {
|
||||||
defaultRoles := []*Role{
|
defaultRoles := []*Role{
|
||||||
{
|
{
|
||||||
ID: "architect",
|
ID: "architect",
|
||||||
Name: "System Architect",
|
Name: "System Architect",
|
||||||
Description: "High-level system design and architecture decisions",
|
Description: "High-level system design and architecture decisions",
|
||||||
SecurityLevel: 8,
|
SecurityLevel: 8,
|
||||||
Capabilities: []string{"architecture_design", "high_level_analysis", "strategic_planning"},
|
Capabilities: []string{"architecture_design", "high_level_analysis", "strategic_planning"},
|
||||||
Restrictions: []string{"no_implementation_details", "no_low_level_code"},
|
Restrictions: []string{"no_implementation_details", "no_low_level_code"},
|
||||||
AccessPatterns: []string{"architecture/**", "design/**", "docs/**"},
|
AccessPatterns: []string{"architecture/**", "design/**", "docs/**"},
|
||||||
Priority: 1,
|
Priority: 1,
|
||||||
IsActive: true,
|
IsActive: true,
|
||||||
CreatedAt: time.Now(),
|
CreatedAt: time.Now(),
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
ID: "developer",
|
ID: "developer",
|
||||||
Name: "Software Developer",
|
Name: "Software Developer",
|
||||||
Description: "Code implementation and development tasks",
|
Description: "Code implementation and development tasks",
|
||||||
SecurityLevel: 6,
|
SecurityLevel: 6,
|
||||||
Capabilities: []string{"code_analysis", "implementation", "debugging", "testing"},
|
Capabilities: []string{"code_analysis", "implementation", "debugging", "testing"},
|
||||||
Restrictions: []string{"no_architecture_changes", "no_security_config"},
|
Restrictions: []string{"no_architecture_changes", "no_security_config"},
|
||||||
AccessPatterns: []string{"src/**", "lib/**", "test/**"},
|
AccessPatterns: []string{"src/**", "lib/**", "test/**"},
|
||||||
Priority: 2,
|
Priority: 2,
|
||||||
IsActive: true,
|
IsActive: true,
|
||||||
CreatedAt: time.Now(),
|
CreatedAt: time.Now(),
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
ID: "security_analyst",
|
ID: "security_analyst",
|
||||||
Name: "Security Analyst",
|
Name: "Security Analyst",
|
||||||
Description: "Security analysis and vulnerability assessment",
|
Description: "Security analysis and vulnerability assessment",
|
||||||
SecurityLevel: 9,
|
SecurityLevel: 9,
|
||||||
Capabilities: []string{"security_analysis", "vulnerability_assessment", "compliance_check"},
|
Capabilities: []string{"security_analysis", "vulnerability_assessment", "compliance_check"},
|
||||||
Restrictions: []string{"no_code_modification"},
|
Restrictions: []string{"no_code_modification"},
|
||||||
AccessPatterns: []string{"**/*"},
|
AccessPatterns: []string{"**/*"},
|
||||||
Priority: 1,
|
Priority: 1,
|
||||||
IsActive: true,
|
IsActive: true,
|
||||||
CreatedAt: time.Now(),
|
CreatedAt: time.Now(),
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
ID: "devops_engineer",
|
ID: "devops_engineer",
|
||||||
Name: "DevOps Engineer",
|
Name: "DevOps Engineer",
|
||||||
Description: "Infrastructure and deployment operations",
|
Description: "Infrastructure and deployment operations",
|
||||||
SecurityLevel: 7,
|
SecurityLevel: 7,
|
||||||
Capabilities: []string{"infrastructure_analysis", "deployment", "monitoring", "ci_cd"},
|
Capabilities: []string{"infrastructure_analysis", "deployment", "monitoring", "ci_cd"},
|
||||||
Restrictions: []string{"no_business_logic"},
|
Restrictions: []string{"no_business_logic"},
|
||||||
AccessPatterns: []string{"infra/**", "deploy/**", "config/**", "docker/**"},
|
AccessPatterns: []string{"infra/**", "deploy/**", "config/**", "docker/**"},
|
||||||
Priority: 2,
|
Priority: 2,
|
||||||
IsActive: true,
|
IsActive: true,
|
||||||
CreatedAt: time.Now(),
|
CreatedAt: time.Now(),
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
ID: "qa_engineer",
|
ID: "qa_engineer",
|
||||||
Name: "Quality Assurance Engineer",
|
Name: "Quality Assurance Engineer",
|
||||||
Description: "Quality assurance and testing",
|
Description: "Quality assurance and testing",
|
||||||
SecurityLevel: 5,
|
SecurityLevel: 5,
|
||||||
Capabilities: []string{"quality_analysis", "testing", "test_planning"},
|
Capabilities: []string{"quality_analysis", "testing", "test_planning"},
|
||||||
Restrictions: []string{"no_production_access", "no_code_modification"},
|
Restrictions: []string{"no_production_access", "no_code_modification"},
|
||||||
AccessPatterns: []string{"test/**", "spec/**", "qa/**"},
|
AccessPatterns: []string{"test/**", "spec/**", "qa/**"},
|
||||||
Priority: 3,
|
Priority: 3,
|
||||||
IsActive: true,
|
IsActive: true,
|
||||||
CreatedAt: time.Now(),
|
CreatedAt: time.Now(),
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
for _, role := range defaultRoles {
|
for _, role := range defaultRoles {
|
||||||
rap.roleProfiles[role.ID] = &RoleProfile{
|
rap.roleProfiles[role.ID] = &RoleBlueprint{
|
||||||
Role: role,
|
Role: role,
|
||||||
Capabilities: rap.createDefaultCapabilities(role),
|
Capabilities: rap.createDefaultCapabilities(role),
|
||||||
Restrictions: rap.createDefaultRestrictions(role),
|
Restrictions: rap.createDefaultRestrictions(role),
|
||||||
@@ -615,10 +617,10 @@ func (rap *RoleAwareProcessor) createDefaultPermissions(role *Role) *RolePermiss
|
|||||||
return &RolePermissions{
|
return &RolePermissions{
|
||||||
RoleID: role.ID,
|
RoleID: role.ID,
|
||||||
ContextAccess: &ContextAccessRights{
|
ContextAccess: &ContextAccessRights{
|
||||||
ReadLevel: role.SecurityLevel,
|
ReadLevel: role.SecurityLevel,
|
||||||
WriteLevel: role.SecurityLevel - 2,
|
WriteLevel: role.SecurityLevel - 2,
|
||||||
AllowedTypes: []string{"code", "documentation", "configuration"},
|
AllowedTypes: []string{"code", "documentation", "configuration"},
|
||||||
SizeLimit: 1000000,
|
SizeLimit: 1000000,
|
||||||
},
|
},
|
||||||
AnalysisAccess: &AnalysisAccessRights{
|
AnalysisAccess: &AnalysisAccessRights{
|
||||||
AllowedAnalysisTypes: role.Capabilities,
|
AllowedAnalysisTypes: role.Capabilities,
|
||||||
@@ -627,10 +629,10 @@ func (rap *RoleAwareProcessor) createDefaultPermissions(role *Role) *RolePermiss
|
|||||||
ResourceLimit: 100,
|
ResourceLimit: 100,
|
||||||
},
|
},
|
||||||
InsightAccess: &InsightAccessRights{
|
InsightAccess: &InsightAccessRights{
|
||||||
GenerationLevel: role.SecurityLevel,
|
GenerationLevel: role.SecurityLevel,
|
||||||
AccessLevel: role.SecurityLevel,
|
AccessLevel: role.SecurityLevel,
|
||||||
ConfidenceThreshold: 0.5,
|
ConfidenceThreshold: 0.5,
|
||||||
MaxInsights: 50,
|
MaxInsights: 50,
|
||||||
},
|
},
|
||||||
SystemAccess: &SystemAccessRights{
|
SystemAccess: &SystemAccessRights{
|
||||||
AdminAccess: role.SecurityLevel >= 8,
|
AdminAccess: role.SecurityLevel >= 8,
|
||||||
@@ -664,19 +666,19 @@ func (rap *RoleAwareProcessor) createDefaultInsightConfig(role *Role) *RoleInsig
|
|||||||
case "developer":
|
case "developer":
|
||||||
config.EnabledGenerators = []string{"code_insights", "implementation_suggestions", "bug_detection"}
|
config.EnabledGenerators = []string{"code_insights", "implementation_suggestions", "bug_detection"}
|
||||||
config.CategoryWeights = map[string]float64{
|
config.CategoryWeights = map[string]float64{
|
||||||
"code_quality": 1.0,
|
"code_quality": 1.0,
|
||||||
"implementation": 0.9,
|
"implementation": 0.9,
|
||||||
"bugs": 0.8,
|
"bugs": 0.8,
|
||||||
"performance": 0.6,
|
"performance": 0.6,
|
||||||
}
|
}
|
||||||
|
|
||||||
case "security_analyst":
|
case "security_analyst":
|
||||||
config.EnabledGenerators = []string{"security_insights", "vulnerability_analysis", "compliance_check"}
|
config.EnabledGenerators = []string{"security_insights", "vulnerability_analysis", "compliance_check"}
|
||||||
config.CategoryWeights = map[string]float64{
|
config.CategoryWeights = map[string]float64{
|
||||||
"security": 1.0,
|
"security": 1.0,
|
||||||
"vulnerabilities": 1.0,
|
"vulnerabilities": 1.0,
|
||||||
"compliance": 0.9,
|
"compliance": 0.9,
|
||||||
"privacy": 0.8,
|
"privacy": 0.8,
|
||||||
}
|
}
|
||||||
config.MaxInsights = 200
|
config.MaxInsights = 200
|
||||||
|
|
||||||
@@ -751,7 +753,7 @@ func NewSecurityFilter() *SecurityFilter {
|
|||||||
"top_secret": 10,
|
"top_secret": 10,
|
||||||
},
|
},
|
||||||
contentFilters: make(map[string]*ContentFilter),
|
contentFilters: make(map[string]*ContentFilter),
|
||||||
accessMatrix: &AccessMatrix{
|
accessMatrix: &AccessMatrix{
|
||||||
Rules: make(map[string]*AccessRule),
|
Rules: make(map[string]*AccessRule),
|
||||||
DefaultDeny: true,
|
DefaultDeny: true,
|
||||||
LastUpdated: time.Now(),
|
LastUpdated: time.Now(),
|
||||||
@@ -1174,6 +1176,7 @@ func (al *AuditLogger) GetAuditLog(limit int) []*AuditEntry {
|
|||||||
// These would be fully implemented with sophisticated logic in production
|
// These would be fully implemented with sophisticated logic in production
|
||||||
|
|
||||||
type ArchitectInsightGenerator struct{}
|
type ArchitectInsightGenerator struct{}
|
||||||
|
|
||||||
func NewArchitectInsightGenerator() *ArchitectInsightGenerator { return &ArchitectInsightGenerator{} }
|
func NewArchitectInsightGenerator() *ArchitectInsightGenerator { return &ArchitectInsightGenerator{} }
|
||||||
func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
||||||
return []*RoleSpecificInsight{
|
return []*RoleSpecificInsight{
|
||||||
@@ -1191,10 +1194,15 @@ func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node
|
|||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
func (aig *ArchitectInsightGenerator) GetSupportedRoles() []string { return []string{"architect"} }
|
func (aig *ArchitectInsightGenerator) GetSupportedRoles() []string { return []string{"architect"} }
|
||||||
func (aig *ArchitectInsightGenerator) GetInsightTypes() []string { return []string{"architecture", "design", "patterns"} }
|
func (aig *ArchitectInsightGenerator) GetInsightTypes() []string {
|
||||||
func (aig *ArchitectInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
|
return []string{"architecture", "design", "patterns"}
|
||||||
|
}
|
||||||
|
func (aig *ArchitectInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
type DeveloperInsightGenerator struct{}
|
type DeveloperInsightGenerator struct{}
|
||||||
|
|
||||||
func NewDeveloperInsightGenerator() *DeveloperInsightGenerator { return &DeveloperInsightGenerator{} }
|
func NewDeveloperInsightGenerator() *DeveloperInsightGenerator { return &DeveloperInsightGenerator{} }
|
||||||
func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
||||||
return []*RoleSpecificInsight{
|
return []*RoleSpecificInsight{
|
||||||
@@ -1212,10 +1220,15 @@ func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node
|
|||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
func (dig *DeveloperInsightGenerator) GetSupportedRoles() []string { return []string{"developer"} }
|
func (dig *DeveloperInsightGenerator) GetSupportedRoles() []string { return []string{"developer"} }
|
||||||
func (dig *DeveloperInsightGenerator) GetInsightTypes() []string { return []string{"code_quality", "implementation", "bugs"} }
|
func (dig *DeveloperInsightGenerator) GetInsightTypes() []string {
|
||||||
func (dig *DeveloperInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
|
return []string{"code_quality", "implementation", "bugs"}
|
||||||
|
}
|
||||||
|
func (dig *DeveloperInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
type SecurityInsightGenerator struct{}
|
type SecurityInsightGenerator struct{}
|
||||||
|
|
||||||
func NewSecurityInsightGenerator() *SecurityInsightGenerator { return &SecurityInsightGenerator{} }
|
func NewSecurityInsightGenerator() *SecurityInsightGenerator { return &SecurityInsightGenerator{} }
|
||||||
func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
||||||
return []*RoleSpecificInsight{
|
return []*RoleSpecificInsight{
|
||||||
@@ -1232,11 +1245,18 @@ func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node
|
|||||||
},
|
},
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
func (sig *SecurityInsightGenerator) GetSupportedRoles() []string { return []string{"security_analyst"} }
|
func (sig *SecurityInsightGenerator) GetSupportedRoles() []string {
|
||||||
func (sig *SecurityInsightGenerator) GetInsightTypes() []string { return []string{"security", "vulnerability", "compliance"} }
|
return []string{"security_analyst"}
|
||||||
func (sig *SecurityInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
|
}
|
||||||
|
func (sig *SecurityInsightGenerator) GetInsightTypes() []string {
|
||||||
|
return []string{"security", "vulnerability", "compliance"}
|
||||||
|
}
|
||||||
|
func (sig *SecurityInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
type DevOpsInsightGenerator struct{}
|
type DevOpsInsightGenerator struct{}
|
||||||
|
|
||||||
func NewDevOpsInsightGenerator() *DevOpsInsightGenerator { return &DevOpsInsightGenerator{} }
|
func NewDevOpsInsightGenerator() *DevOpsInsightGenerator { return &DevOpsInsightGenerator{} }
|
||||||
func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
||||||
return []*RoleSpecificInsight{
|
return []*RoleSpecificInsight{
|
||||||
@@ -1254,10 +1274,15 @@ func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *
|
|||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
func (doig *DevOpsInsightGenerator) GetSupportedRoles() []string { return []string{"devops_engineer"} }
|
func (doig *DevOpsInsightGenerator) GetSupportedRoles() []string { return []string{"devops_engineer"} }
|
||||||
func (doig *DevOpsInsightGenerator) GetInsightTypes() []string { return []string{"infrastructure", "deployment", "monitoring"} }
|
func (doig *DevOpsInsightGenerator) GetInsightTypes() []string {
|
||||||
func (doig *DevOpsInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
|
return []string{"infrastructure", "deployment", "monitoring"}
|
||||||
|
}
|
||||||
|
func (doig *DevOpsInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
type QAInsightGenerator struct{}
|
type QAInsightGenerator struct{}
|
||||||
|
|
||||||
func NewQAInsightGenerator() *QAInsightGenerator { return &QAInsightGenerator{} }
|
func NewQAInsightGenerator() *QAInsightGenerator { return &QAInsightGenerator{} }
|
||||||
func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
|
||||||
return []*RoleSpecificInsight{
|
return []*RoleSpecificInsight{
|
||||||
@@ -1275,5 +1300,9 @@ func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slur
|
|||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
func (qaig *QAInsightGenerator) GetSupportedRoles() []string { return []string{"qa_engineer"} }
|
func (qaig *QAInsightGenerator) GetSupportedRoles() []string { return []string{"qa_engineer"} }
|
||||||
func (qaig *QAInsightGenerator) GetInsightTypes() []string { return []string{"quality", "testing", "validation"} }
|
func (qaig *QAInsightGenerator) GetInsightTypes() []string {
|
||||||
func (qaig *QAInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
|
return []string{"quality", "testing", "validation"}
|
||||||
|
}
|
||||||
|
func (qaig *QAInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|||||||
@@ -6,236 +6,236 @@ import (
|
|||||||
|
|
||||||
// FileMetadata represents metadata extracted from file system
|
// FileMetadata represents metadata extracted from file system
|
||||||
type FileMetadata struct {
|
type FileMetadata struct {
|
||||||
Path string `json:"path"` // File path
|
Path string `json:"path"` // File path
|
||||||
Size int64 `json:"size"` // File size in bytes
|
Size int64 `json:"size"` // File size in bytes
|
||||||
ModTime time.Time `json:"mod_time"` // Last modification time
|
ModTime time.Time `json:"mod_time"` // Last modification time
|
||||||
Mode uint32 `json:"mode"` // File mode
|
Mode uint32 `json:"mode"` // File mode
|
||||||
IsDir bool `json:"is_dir"` // Whether it's a directory
|
IsDir bool `json:"is_dir"` // Whether it's a directory
|
||||||
Extension string `json:"extension"` // File extension
|
Extension string `json:"extension"` // File extension
|
||||||
MimeType string `json:"mime_type"` // MIME type
|
MimeType string `json:"mime_type"` // MIME type
|
||||||
Hash string `json:"hash"` // Content hash
|
Hash string `json:"hash"` // Content hash
|
||||||
Permissions string `json:"permissions"` // File permissions
|
Permissions string `json:"permissions"` // File permissions
|
||||||
}
|
}
|
||||||
|
|
||||||
// StructureAnalysis represents analysis of code structure
|
// StructureAnalysis represents analysis of code structure
|
||||||
type StructureAnalysis struct {
|
type StructureAnalysis struct {
|
||||||
Architecture string `json:"architecture"` // Architectural pattern
|
Architecture string `json:"architecture"` // Architectural pattern
|
||||||
Patterns []string `json:"patterns"` // Design patterns used
|
Patterns []string `json:"patterns"` // Design patterns used
|
||||||
Components []*Component `json:"components"` // Code components
|
Components []*Component `json:"components"` // Code components
|
||||||
Relationships []*Relationship `json:"relationships"` // Component relationships
|
Relationships []*Relationship `json:"relationships"` // Component relationships
|
||||||
Complexity *ComplexityMetrics `json:"complexity"` // Complexity metrics
|
Complexity *ComplexityMetrics `json:"complexity"` // Complexity metrics
|
||||||
QualityMetrics *QualityMetrics `json:"quality_metrics"` // Code quality metrics
|
QualityMetrics *QualityMetrics `json:"quality_metrics"` // Code quality metrics
|
||||||
TestCoverage float64 `json:"test_coverage"` // Test coverage percentage
|
TestCoverage float64 `json:"test_coverage"` // Test coverage percentage
|
||||||
Documentation *DocMetrics `json:"documentation"` // Documentation metrics
|
Documentation *DocMetrics `json:"documentation"` // Documentation metrics
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
||||||
}
|
}
|
||||||
|
|
||||||
// Component represents a code component
|
// Component represents a code component
|
||||||
type Component struct {
|
type Component struct {
|
||||||
Name string `json:"name"` // Component name
|
Name string `json:"name"` // Component name
|
||||||
Type string `json:"type"` // Component type (class, function, etc.)
|
Type string `json:"type"` // Component type (class, function, etc.)
|
||||||
Purpose string `json:"purpose"` // Component purpose
|
Purpose string `json:"purpose"` // Component purpose
|
||||||
Visibility string `json:"visibility"` // Visibility (public, private, etc.)
|
Visibility string `json:"visibility"` // Visibility (public, private, etc.)
|
||||||
Lines int `json:"lines"` // Lines of code
|
Lines int `json:"lines"` // Lines of code
|
||||||
Complexity int `json:"complexity"` // Cyclomatic complexity
|
Complexity int `json:"complexity"` // Cyclomatic complexity
|
||||||
Dependencies []string `json:"dependencies"` // Dependencies
|
Dependencies []string `json:"dependencies"` // Dependencies
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// Relationship represents a relationship between components
|
// Relationship represents a relationship between components
|
||||||
type Relationship struct {
|
type Relationship struct {
|
||||||
From string `json:"from"` // Source component
|
From string `json:"from"` // Source component
|
||||||
To string `json:"to"` // Target component
|
To string `json:"to"` // Target component
|
||||||
Type string `json:"type"` // Relationship type
|
Type string `json:"type"` // Relationship type
|
||||||
Strength float64 `json:"strength"` // Relationship strength (0-1)
|
Strength float64 `json:"strength"` // Relationship strength (0-1)
|
||||||
Direction string `json:"direction"` // Direction (unidirectional, bidirectional)
|
Direction string `json:"direction"` // Direction (unidirectional, bidirectional)
|
||||||
Description string `json:"description"` // Relationship description
|
Description string `json:"description"` // Relationship description
|
||||||
}
|
}
|
||||||
|
|
||||||
// ComplexityMetrics represents code complexity metrics
|
// ComplexityMetrics represents code complexity metrics
|
||||||
type ComplexityMetrics struct {
|
type ComplexityMetrics struct {
|
||||||
Cyclomatic float64 `json:"cyclomatic"` // Cyclomatic complexity
|
Cyclomatic float64 `json:"cyclomatic"` // Cyclomatic complexity
|
||||||
Cognitive float64 `json:"cognitive"` // Cognitive complexity
|
Cognitive float64 `json:"cognitive"` // Cognitive complexity
|
||||||
Halstead float64 `json:"halstead"` // Halstead complexity
|
Halstead float64 `json:"halstead"` // Halstead complexity
|
||||||
Maintainability float64 `json:"maintainability"` // Maintainability index
|
Maintainability float64 `json:"maintainability"` // Maintainability index
|
||||||
TechnicalDebt float64 `json:"technical_debt"` // Technical debt estimate
|
TechnicalDebt float64 `json:"technical_debt"` // Technical debt estimate
|
||||||
}
|
}
|
||||||
|
|
||||||
// QualityMetrics represents code quality metrics
|
// QualityMetrics represents code quality metrics
|
||||||
type QualityMetrics struct {
|
type QualityMetrics struct {
|
||||||
Readability float64 `json:"readability"` // Readability score
|
Readability float64 `json:"readability"` // Readability score
|
||||||
Testability float64 `json:"testability"` // Testability score
|
Testability float64 `json:"testability"` // Testability score
|
||||||
Reusability float64 `json:"reusability"` // Reusability score
|
Reusability float64 `json:"reusability"` // Reusability score
|
||||||
Reliability float64 `json:"reliability"` // Reliability score
|
Reliability float64 `json:"reliability"` // Reliability score
|
||||||
Security float64 `json:"security"` // Security score
|
Security float64 `json:"security"` // Security score
|
||||||
Performance float64 `json:"performance"` // Performance score
|
Performance float64 `json:"performance"` // Performance score
|
||||||
Duplication float64 `json:"duplication"` // Code duplication percentage
|
Duplication float64 `json:"duplication"` // Code duplication percentage
|
||||||
Consistency float64 `json:"consistency"` // Code consistency score
|
Consistency float64 `json:"consistency"` // Code consistency score
|
||||||
}
|
}
|
||||||
|
|
||||||
// DocMetrics represents documentation metrics
|
// DocMetrics represents documentation metrics
|
||||||
type DocMetrics struct {
|
type DocMetrics struct {
|
||||||
Coverage float64 `json:"coverage"` // Documentation coverage
|
Coverage float64 `json:"coverage"` // Documentation coverage
|
||||||
Quality float64 `json:"quality"` // Documentation quality
|
Quality float64 `json:"quality"` // Documentation quality
|
||||||
CommentRatio float64 `json:"comment_ratio"` // Comment to code ratio
|
CommentRatio float64 `json:"comment_ratio"` // Comment to code ratio
|
||||||
APIDocCoverage float64 `json:"api_doc_coverage"` // API documentation coverage
|
APIDocCoverage float64 `json:"api_doc_coverage"` // API documentation coverage
|
||||||
ExampleCount int `json:"example_count"` // Number of examples
|
ExampleCount int `json:"example_count"` // Number of examples
|
||||||
TODOCount int `json:"todo_count"` // Number of TODO comments
|
TODOCount int `json:"todo_count"` // Number of TODO comments
|
||||||
FIXMECount int `json:"fixme_count"` // Number of FIXME comments
|
FIXMECount int `json:"fixme_count"` // Number of FIXME comments
|
||||||
}
|
}
|
||||||
|
|
||||||
// DirectoryStructure represents analysis of directory organization
|
// DirectoryStructure represents analysis of directory organization
|
||||||
type DirectoryStructure struct {
|
type DirectoryStructure struct {
|
||||||
Path string `json:"path"` // Directory path
|
Path string `json:"path"` // Directory path
|
||||||
FileCount int `json:"file_count"` // Number of files
|
FileCount int `json:"file_count"` // Number of files
|
||||||
DirectoryCount int `json:"directory_count"` // Number of subdirectories
|
DirectoryCount int `json:"directory_count"` // Number of subdirectories
|
||||||
TotalSize int64 `json:"total_size"` // Total size in bytes
|
TotalSize int64 `json:"total_size"` // Total size in bytes
|
||||||
FileTypes map[string]int `json:"file_types"` // File type distribution
|
FileTypes map[string]int `json:"file_types"` // File type distribution
|
||||||
Languages map[string]int `json:"languages"` // Language distribution
|
Languages map[string]int `json:"languages"` // Language distribution
|
||||||
Organization *OrganizationInfo `json:"organization"` // Organization information
|
Organization *OrganizationInfo `json:"organization"` // Organization information
|
||||||
Conventions *ConventionInfo `json:"conventions"` // Convention information
|
Conventions *ConventionInfo `json:"conventions"` // Convention information
|
||||||
Dependencies []string `json:"dependencies"` // Directory dependencies
|
Dependencies []string `json:"dependencies"` // Directory dependencies
|
||||||
Purpose string `json:"purpose"` // Directory purpose
|
Purpose string `json:"purpose"` // Directory purpose
|
||||||
Architecture string `json:"architecture"` // Architectural pattern
|
Architecture string `json:"architecture"` // Architectural pattern
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
||||||
}
|
}
|
||||||
|
|
||||||
// OrganizationInfo represents directory organization information
|
// OrganizationInfo represents directory organization information
|
||||||
type OrganizationInfo struct {
|
type OrganizationInfo struct {
|
||||||
Pattern string `json:"pattern"` // Organization pattern
|
Pattern string `json:"pattern"` // Organization pattern
|
||||||
Consistency float64 `json:"consistency"` // Organization consistency
|
Consistency float64 `json:"consistency"` // Organization consistency
|
||||||
Depth int `json:"depth"` // Directory depth
|
Depth int `json:"depth"` // Directory depth
|
||||||
FanOut int `json:"fan_out"` // Average fan-out
|
FanOut int `json:"fan_out"` // Average fan-out
|
||||||
Modularity float64 `json:"modularity"` // Modularity score
|
Modularity float64 `json:"modularity"` // Modularity score
|
||||||
Cohesion float64 `json:"cohesion"` // Cohesion score
|
Cohesion float64 `json:"cohesion"` // Cohesion score
|
||||||
Coupling float64 `json:"coupling"` // Coupling score
|
Coupling float64 `json:"coupling"` // Coupling score
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConventionInfo represents naming and organizational conventions
|
// ConventionInfo represents naming and organizational conventions
|
||||||
type ConventionInfo struct {
|
type ConventionInfo struct {
|
||||||
NamingStyle string `json:"naming_style"` // Naming convention style
|
NamingStyle string `json:"naming_style"` // Naming convention style
|
||||||
FileNaming string `json:"file_naming"` // File naming pattern
|
FileNaming string `json:"file_naming"` // File naming pattern
|
||||||
DirectoryNaming string `json:"directory_naming"` // Directory naming pattern
|
DirectoryNaming string `json:"directory_naming"` // Directory naming pattern
|
||||||
Consistency float64 `json:"consistency"` // Convention consistency
|
Consistency float64 `json:"consistency"` // Convention consistency
|
||||||
Violations []*Violation `json:"violations"` // Convention violations
|
Violations []*Violation `json:"violations"` // Convention violations
|
||||||
Standards []string `json:"standards"` // Applied standards
|
Standards []string `json:"standards"` // Applied standards
|
||||||
}
|
}
|
||||||
|
|
||||||
// Violation represents a convention violation
|
// Violation represents a convention violation
|
||||||
type Violation struct {
|
type Violation struct {
|
||||||
Type string `json:"type"` // Violation type
|
Type string `json:"type"` // Violation type
|
||||||
Path string `json:"path"` // Violating path
|
Path string `json:"path"` // Violating path
|
||||||
Expected string `json:"expected"` // Expected format
|
Expected string `json:"expected"` // Expected format
|
||||||
Actual string `json:"actual"` // Actual format
|
Actual string `json:"actual"` // Actual format
|
||||||
Severity string `json:"severity"` // Violation severity
|
Severity string `json:"severity"` // Violation severity
|
||||||
Suggestion string `json:"suggestion"` // Suggested fix
|
Suggestion string `json:"suggestion"` // Suggested fix
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConventionAnalysis represents analysis of naming and organizational conventions
|
// ConventionAnalysis represents analysis of naming and organizational conventions
|
||||||
type ConventionAnalysis struct {
|
type ConventionAnalysis struct {
|
||||||
NamingPatterns []*NamingPattern `json:"naming_patterns"` // Detected naming patterns
|
NamingPatterns []*NamingPattern `json:"naming_patterns"` // Detected naming patterns
|
||||||
OrganizationalPatterns []*OrganizationalPattern `json:"organizational_patterns"` // Organizational patterns
|
OrganizationalPatterns []*OrganizationalPattern `json:"organizational_patterns"` // Organizational patterns
|
||||||
Consistency float64 `json:"consistency"` // Overall consistency score
|
Consistency float64 `json:"consistency"` // Overall consistency score
|
||||||
Violations []*Violation `json:"violations"` // Convention violations
|
Violations []*Violation `json:"violations"` // Convention violations
|
||||||
Recommendations []*Recommendation `json:"recommendations"` // Improvement recommendations
|
Recommendations []*BasicRecommendation `json:"recommendations"` // Improvement recommendations
|
||||||
AppliedStandards []string `json:"applied_standards"` // Applied coding standards
|
AppliedStandards []string `json:"applied_standards"` // Applied coding standards
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
||||||
}
|
}
|
||||||
|
|
||||||
// RelationshipAnalysis represents analysis of directory relationships
|
// RelationshipAnalysis represents analysis of directory relationships
|
||||||
type RelationshipAnalysis struct {
|
type RelationshipAnalysis struct {
|
||||||
Dependencies []*DirectoryDependency `json:"dependencies"` // Directory dependencies
|
Dependencies []*DirectoryDependency `json:"dependencies"` // Directory dependencies
|
||||||
Relationships []*DirectoryRelation `json:"relationships"` // Directory relationships
|
Relationships []*DirectoryRelation `json:"relationships"` // Directory relationships
|
||||||
CouplingMetrics *CouplingMetrics `json:"coupling_metrics"` // Coupling metrics
|
CouplingMetrics *CouplingMetrics `json:"coupling_metrics"` // Coupling metrics
|
||||||
ModularityScore float64 `json:"modularity_score"` // Modularity score
|
ModularityScore float64 `json:"modularity_score"` // Modularity score
|
||||||
ArchitecturalStyle string `json:"architectural_style"` // Architectural style
|
ArchitecturalStyle string `json:"architectural_style"` // Architectural style
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
||||||
}
|
}
|
||||||
|
|
||||||
// DirectoryDependency represents a dependency between directories
|
// DirectoryDependency represents a dependency between directories
|
||||||
type DirectoryDependency struct {
|
type DirectoryDependency struct {
|
||||||
From string `json:"from"` // Source directory
|
From string `json:"from"` // Source directory
|
||||||
To string `json:"to"` // Target directory
|
To string `json:"to"` // Target directory
|
||||||
Type string `json:"type"` // Dependency type
|
Type string `json:"type"` // Dependency type
|
||||||
Strength float64 `json:"strength"` // Dependency strength
|
Strength float64 `json:"strength"` // Dependency strength
|
||||||
Reason string `json:"reason"` // Reason for dependency
|
Reason string `json:"reason"` // Reason for dependency
|
||||||
FileCount int `json:"file_count"` // Number of files involved
|
FileCount int `json:"file_count"` // Number of files involved
|
||||||
}
|
}
|
||||||
|
|
||||||
// DirectoryRelation represents a relationship between directories
|
// DirectoryRelation represents a relationship between directories
|
||||||
type DirectoryRelation struct {
|
type DirectoryRelation struct {
|
||||||
Directory1 string `json:"directory1"` // First directory
|
Directory1 string `json:"directory1"` // First directory
|
||||||
Directory2 string `json:"directory2"` // Second directory
|
Directory2 string `json:"directory2"` // Second directory
|
||||||
Type string `json:"type"` // Relation type
|
Type string `json:"type"` // Relation type
|
||||||
Strength float64 `json:"strength"` // Relation strength
|
Strength float64 `json:"strength"` // Relation strength
|
||||||
Description string `json:"description"` // Relation description
|
Description string `json:"description"` // Relation description
|
||||||
Bidirectional bool `json:"bidirectional"` // Whether relation is bidirectional
|
Bidirectional bool `json:"bidirectional"` // Whether relation is bidirectional
|
||||||
}
|
}
|
||||||
|
|
||||||
// CouplingMetrics represents coupling metrics between directories
|
// CouplingMetrics represents coupling metrics between directories
|
||||||
type CouplingMetrics struct {
|
type CouplingMetrics struct {
|
||||||
AfferentCoupling float64 `json:"afferent_coupling"` // Afferent coupling
|
AfferentCoupling float64 `json:"afferent_coupling"` // Afferent coupling
|
||||||
EfferentCoupling float64 `json:"efferent_coupling"` // Efferent coupling
|
EfferentCoupling float64 `json:"efferent_coupling"` // Efferent coupling
|
||||||
Instability float64 `json:"instability"` // Instability metric
|
Instability float64 `json:"instability"` // Instability metric
|
||||||
Abstractness float64 `json:"abstractness"` // Abstractness metric
|
Abstractness float64 `json:"abstractness"` // Abstractness metric
|
||||||
DistanceFromMain float64 `json:"distance_from_main"` // Distance from main sequence
|
DistanceFromMain float64 `json:"distance_from_main"` // Distance from main sequence
|
||||||
}
|
}
|
||||||
|
|
||||||
// Pattern represents a detected pattern in code or organization
|
// Pattern represents a detected pattern in code or organization
|
||||||
type Pattern struct {
|
type Pattern struct {
|
||||||
ID string `json:"id"` // Pattern identifier
|
ID string `json:"id"` // Pattern identifier
|
||||||
Name string `json:"name"` // Pattern name
|
Name string `json:"name"` // Pattern name
|
||||||
Type string `json:"type"` // Pattern type
|
Type string `json:"type"` // Pattern type
|
||||||
Description string `json:"description"` // Pattern description
|
Description string `json:"description"` // Pattern description
|
||||||
Confidence float64 `json:"confidence"` // Detection confidence
|
Confidence float64 `json:"confidence"` // Detection confidence
|
||||||
Frequency int `json:"frequency"` // Pattern frequency
|
Frequency int `json:"frequency"` // Pattern frequency
|
||||||
Examples []string `json:"examples"` // Example instances
|
Examples []string `json:"examples"` // Example instances
|
||||||
Criteria map[string]interface{} `json:"criteria"` // Pattern criteria
|
Criteria map[string]interface{} `json:"criteria"` // Pattern criteria
|
||||||
Benefits []string `json:"benefits"` // Pattern benefits
|
Benefits []string `json:"benefits"` // Pattern benefits
|
||||||
Drawbacks []string `json:"drawbacks"` // Pattern drawbacks
|
Drawbacks []string `json:"drawbacks"` // Pattern drawbacks
|
||||||
ApplicableRoles []string `json:"applicable_roles"` // Roles that benefit from this pattern
|
ApplicableRoles []string `json:"applicable_roles"` // Roles that benefit from this pattern
|
||||||
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
|
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
|
||||||
}
|
}
|
||||||
|
|
||||||
// CodePattern represents a code-specific pattern
|
// CodePattern represents a code-specific pattern
|
||||||
type CodePattern struct {
|
type CodePattern struct {
|
||||||
Pattern // Embedded base pattern
|
Pattern // Embedded base pattern
|
||||||
Language string `json:"language"` // Programming language
|
Language string `json:"language"` // Programming language
|
||||||
Framework string `json:"framework"` // Framework context
|
Framework string `json:"framework"` // Framework context
|
||||||
Complexity float64 `json:"complexity"` // Pattern complexity
|
Complexity float64 `json:"complexity"` // Pattern complexity
|
||||||
Usage *UsagePattern `json:"usage"` // Usage pattern
|
Usage *UsagePattern `json:"usage"` // Usage pattern
|
||||||
Performance *PerformanceInfo `json:"performance"` // Performance characteristics
|
Performance *PerformanceInfo `json:"performance"` // Performance characteristics
|
||||||
}
|
}
|
||||||
|
|
||||||
// NamingPattern represents a naming convention pattern
|
// NamingPattern represents a naming convention pattern
|
||||||
type NamingPattern struct {
|
type NamingPattern struct {
|
||||||
Pattern // Embedded base pattern
|
Pattern // Embedded base pattern
|
||||||
Convention string `json:"convention"` // Naming convention
|
Convention string `json:"convention"` // Naming convention
|
||||||
Scope string `json:"scope"` // Pattern scope
|
Scope string `json:"scope"` // Pattern scope
|
||||||
Regex string `json:"regex"` // Regex pattern
|
Regex string `json:"regex"` // Regex pattern
|
||||||
CaseStyle string `json:"case_style"` // Case style (camelCase, snake_case, etc.)
|
CaseStyle string `json:"case_style"` // Case style (camelCase, snake_case, etc.)
|
||||||
Prefix string `json:"prefix"` // Common prefix
|
Prefix string `json:"prefix"` // Common prefix
|
||||||
Suffix string `json:"suffix"` // Common suffix
|
Suffix string `json:"suffix"` // Common suffix
|
||||||
}
|
}
|
||||||
|
|
||||||
// OrganizationalPattern represents an organizational pattern
|
// OrganizationalPattern represents an organizational pattern
|
||||||
type OrganizationalPattern struct {
|
type OrganizationalPattern struct {
|
||||||
Pattern // Embedded base pattern
|
Pattern // Embedded base pattern
|
||||||
Structure string `json:"structure"` // Organizational structure
|
Structure string `json:"structure"` // Organizational structure
|
||||||
Depth int `json:"depth"` // Typical depth
|
Depth int `json:"depth"` // Typical depth
|
||||||
FanOut int `json:"fan_out"` // Typical fan-out
|
FanOut int `json:"fan_out"` // Typical fan-out
|
||||||
Modularity float64 `json:"modularity"` // Modularity characteristics
|
Modularity float64 `json:"modularity"` // Modularity characteristics
|
||||||
Scalability string `json:"scalability"` // Scalability characteristics
|
Scalability string `json:"scalability"` // Scalability characteristics
|
||||||
}
|
}
|
||||||
|
|
||||||
// UsagePattern represents how a pattern is typically used
|
// UsagePattern represents how a pattern is typically used
|
||||||
type UsagePattern struct {
|
type UsagePattern struct {
|
||||||
Frequency string `json:"frequency"` // Usage frequency
|
Frequency string `json:"frequency"` // Usage frequency
|
||||||
Context []string `json:"context"` // Usage contexts
|
Context []string `json:"context"` // Usage contexts
|
||||||
Prerequisites []string `json:"prerequisites"` // Prerequisites
|
Prerequisites []string `json:"prerequisites"` // Prerequisites
|
||||||
Alternatives []string `json:"alternatives"` // Alternative patterns
|
Alternatives []string `json:"alternatives"` // Alternative patterns
|
||||||
Compatibility map[string]string `json:"compatibility"` // Compatibility with other patterns
|
Compatibility map[string]string `json:"compatibility"` // Compatibility with other patterns
|
||||||
}
|
}
|
||||||
|
|
||||||
// PerformanceInfo represents performance characteristics of a pattern
|
// PerformanceInfo represents performance characteristics of a pattern
|
||||||
@@ -249,12 +249,12 @@ type PerformanceInfo struct {
|
|||||||
|
|
||||||
// PatternMatch represents a match between context and a pattern
|
// PatternMatch represents a match between context and a pattern
|
||||||
type PatternMatch struct {
|
type PatternMatch struct {
|
||||||
PatternID string `json:"pattern_id"` // Pattern identifier
|
PatternID string `json:"pattern_id"` // Pattern identifier
|
||||||
MatchScore float64 `json:"match_score"` // Match score (0-1)
|
MatchScore float64 `json:"match_score"` // Match score (0-1)
|
||||||
Confidence float64 `json:"confidence"` // Match confidence
|
Confidence float64 `json:"confidence"` // Match confidence
|
||||||
MatchedFields []string `json:"matched_fields"` // Fields that matched
|
MatchedFields []string `json:"matched_fields"` // Fields that matched
|
||||||
Explanation string `json:"explanation"` // Match explanation
|
Explanation string `json:"explanation"` // Match explanation
|
||||||
Suggestions []string `json:"suggestions"` // Improvement suggestions
|
Suggestions []string `json:"suggestions"` // Improvement suggestions
|
||||||
}
|
}
|
||||||
|
|
||||||
// ValidationResult represents context validation results
|
// ValidationResult represents context validation results
|
||||||
@@ -269,12 +269,12 @@ type ValidationResult struct {
|
|||||||
|
|
||||||
// ValidationIssue represents a validation issue
|
// ValidationIssue represents a validation issue
|
||||||
type ValidationIssue struct {
|
type ValidationIssue struct {
|
||||||
Type string `json:"type"` // Issue type
|
Type string `json:"type"` // Issue type
|
||||||
Severity string `json:"severity"` // Issue severity
|
Severity string `json:"severity"` // Issue severity
|
||||||
Message string `json:"message"` // Issue message
|
Message string `json:"message"` // Issue message
|
||||||
Field string `json:"field"` // Affected field
|
Field string `json:"field"` // Affected field
|
||||||
Suggestion string `json:"suggestion"` // Suggested fix
|
Suggestion string `json:"suggestion"` // Suggested fix
|
||||||
Impact float64 `json:"impact"` // Impact score
|
Impact float64 `json:"impact"` // Impact score
|
||||||
}
|
}
|
||||||
|
|
||||||
// Suggestion represents an improvement suggestion
|
// Suggestion represents an improvement suggestion
|
||||||
@@ -289,61 +289,61 @@ type Suggestion struct {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Recommendation represents an improvement recommendation
|
// Recommendation represents an improvement recommendation
|
||||||
type Recommendation struct {
|
type BasicRecommendation struct {
|
||||||
Type string `json:"type"` // Recommendation type
|
Type string `json:"type"` // Recommendation type
|
||||||
Title string `json:"title"` // Recommendation title
|
Title string `json:"title"` // Recommendation title
|
||||||
Description string `json:"description"` // Detailed description
|
Description string `json:"description"` // Detailed description
|
||||||
Priority int `json:"priority"` // Priority level
|
Priority int `json:"priority"` // Priority level
|
||||||
Effort string `json:"effort"` // Effort required
|
Effort string `json:"effort"` // Effort required
|
||||||
Impact string `json:"impact"` // Expected impact
|
Impact string `json:"impact"` // Expected impact
|
||||||
Steps []string `json:"steps"` // Implementation steps
|
Steps []string `json:"steps"` // Implementation steps
|
||||||
Resources []string `json:"resources"` // Required resources
|
Resources []string `json:"resources"` // Required resources
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// RAGResponse represents a response from the RAG system
|
// RAGResponse represents a response from the RAG system
|
||||||
type RAGResponse struct {
|
type RAGResponse struct {
|
||||||
Query string `json:"query"` // Original query
|
Query string `json:"query"` // Original query
|
||||||
Answer string `json:"answer"` // Generated answer
|
Answer string `json:"answer"` // Generated answer
|
||||||
Sources []*RAGSource `json:"sources"` // Source documents
|
Sources []*RAGSource `json:"sources"` // Source documents
|
||||||
Confidence float64 `json:"confidence"` // Response confidence
|
Confidence float64 `json:"confidence"` // Response confidence
|
||||||
Context map[string]interface{} `json:"context"` // Additional context
|
Context map[string]interface{} `json:"context"` // Additional context
|
||||||
ProcessedAt time.Time `json:"processed_at"` // When processed
|
ProcessedAt time.Time `json:"processed_at"` // When processed
|
||||||
}
|
}
|
||||||
|
|
||||||
// RAGSource represents a source document from RAG system
|
// RAGSource represents a source document from RAG system
|
||||||
type RAGSource struct {
|
type RAGSource struct {
|
||||||
ID string `json:"id"` // Source identifier
|
ID string `json:"id"` // Source identifier
|
||||||
Title string `json:"title"` // Source title
|
Title string `json:"title"` // Source title
|
||||||
Content string `json:"content"` // Source content excerpt
|
Content string `json:"content"` // Source content excerpt
|
||||||
Score float64 `json:"score"` // Relevance score
|
Score float64 `json:"score"` // Relevance score
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Source metadata
|
Metadata map[string]interface{} `json:"metadata"` // Source metadata
|
||||||
URL string `json:"url"` // Source URL if available
|
URL string `json:"url"` // Source URL if available
|
||||||
}
|
}
|
||||||
|
|
||||||
// RAGResult represents a result from RAG similarity search
|
// RAGResult represents a result from RAG similarity search
|
||||||
type RAGResult struct {
|
type RAGResult struct {
|
||||||
ID string `json:"id"` // Result identifier
|
ID string `json:"id"` // Result identifier
|
||||||
Content string `json:"content"` // Content
|
Content string `json:"content"` // Content
|
||||||
Score float64 `json:"score"` // Similarity score
|
Score float64 `json:"score"` // Similarity score
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Result metadata
|
Metadata map[string]interface{} `json:"metadata"` // Result metadata
|
||||||
Highlights []string `json:"highlights"` // Content highlights
|
Highlights []string `json:"highlights"` // Content highlights
|
||||||
}
|
}
|
||||||
|
|
||||||
// RAGUpdate represents an update to the RAG index
|
// RAGUpdate represents an update to the RAG index
|
||||||
type RAGUpdate struct {
|
type RAGUpdate struct {
|
||||||
ID string `json:"id"` // Document identifier
|
ID string `json:"id"` // Document identifier
|
||||||
Content string `json:"content"` // Document content
|
Content string `json:"content"` // Document content
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Document metadata
|
Metadata map[string]interface{} `json:"metadata"` // Document metadata
|
||||||
Operation string `json:"operation"` // Operation type (add, update, delete)
|
Operation string `json:"operation"` // Operation type (add, update, delete)
|
||||||
}
|
}
|
||||||
|
|
||||||
// RAGStatistics represents RAG system statistics
|
// RAGStatistics represents RAG system statistics
|
||||||
type RAGStatistics struct {
|
type RAGStatistics struct {
|
||||||
TotalDocuments int64 `json:"total_documents"` // Total indexed documents
|
TotalDocuments int64 `json:"total_documents"` // Total indexed documents
|
||||||
TotalQueries int64 `json:"total_queries"` // Total queries processed
|
TotalQueries int64 `json:"total_queries"` // Total queries processed
|
||||||
AverageQueryTime time.Duration `json:"average_query_time"` // Average query time
|
AverageQueryTime time.Duration `json:"average_query_time"` // Average query time
|
||||||
IndexSize int64 `json:"index_size"` // Index size in bytes
|
IndexSize int64 `json:"index_size"` // Index size in bytes
|
||||||
LastIndexUpdate time.Time `json:"last_index_update"` // When index was last updated
|
LastIndexUpdate time.Time `json:"last_index_update"` // When index was last updated
|
||||||
ErrorRate float64 `json:"error_rate"` // Error rate
|
ErrorRate float64 `json:"error_rate"` // Error rate
|
||||||
}
|
}
|
||||||
@@ -282,25 +282,25 @@ func (cau *ContentAnalysisUtils) DetectTechnologies(content, filename string) []
|
|||||||
|
|
||||||
// Language detection
|
// Language detection
|
||||||
languageMap := map[string][]string{
|
languageMap := map[string][]string{
|
||||||
".go": {"go", "golang"},
|
".go": {"go", "golang"},
|
||||||
".py": {"python"},
|
".py": {"python"},
|
||||||
".js": {"javascript", "node.js"},
|
".js": {"javascript", "node.js"},
|
||||||
".jsx": {"javascript", "react", "jsx"},
|
".jsx": {"javascript", "react", "jsx"},
|
||||||
".ts": {"typescript"},
|
".ts": {"typescript"},
|
||||||
".tsx": {"typescript", "react", "jsx"},
|
".tsx": {"typescript", "react", "jsx"},
|
||||||
".java": {"java"},
|
".java": {"java"},
|
||||||
".kt": {"kotlin"},
|
".kt": {"kotlin"},
|
||||||
".rs": {"rust"},
|
".rs": {"rust"},
|
||||||
".cpp": {"c++"},
|
".cpp": {"c++"},
|
||||||
".c": {"c"},
|
".c": {"c"},
|
||||||
".cs": {"c#", ".net"},
|
".cs": {"c#", ".net"},
|
||||||
".php": {"php"},
|
".php": {"php"},
|
||||||
".rb": {"ruby"},
|
".rb": {"ruby"},
|
||||||
".swift": {"swift"},
|
".swift": {"swift"},
|
||||||
".scala": {"scala"},
|
".scala": {"scala"},
|
||||||
".clj": {"clojure"},
|
".clj": {"clojure"},
|
||||||
".hs": {"haskell"},
|
".hs": {"haskell"},
|
||||||
".ml": {"ocaml"},
|
".ml": {"ocaml"},
|
||||||
}
|
}
|
||||||
|
|
||||||
if langs, exists := languageMap[ext]; exists {
|
if langs, exists := languageMap[ext]; exists {
|
||||||
@@ -309,34 +309,34 @@ func (cau *ContentAnalysisUtils) DetectTechnologies(content, filename string) []
|
|||||||
|
|
||||||
// Framework and library detection
|
// Framework and library detection
|
||||||
frameworkPatterns := map[string][]string{
|
frameworkPatterns := map[string][]string{
|
||||||
"react": {"import.*react", "from [\"']react[\"']", "<.*/>", "jsx"},
|
"react": {"import.*react", "from [\"']react[\"']", "<.*/>", "jsx"},
|
||||||
"vue": {"import.*vue", "from [\"']vue[\"']", "<template>", "vue"},
|
"vue": {"import.*vue", "from [\"']vue[\"']", "<template>", "vue"},
|
||||||
"angular": {"import.*@angular", "from [\"']@angular", "ngmodule", "component"},
|
"angular": {"import.*@angular", "from [\"']@angular", "ngmodule", "component"},
|
||||||
"express": {"import.*express", "require.*express", "app.get", "app.post"},
|
"express": {"import.*express", "require.*express", "app.get", "app.post"},
|
||||||
"django": {"from django", "import django", "django.db", "models.model"},
|
"django": {"from django", "import django", "django.db", "models.model"},
|
||||||
"flask": {"from flask", "import flask", "@app.route", "flask.request"},
|
"flask": {"from flask", "import flask", "@app.route", "flask.request"},
|
||||||
"spring": {"@springboot", "@controller", "@service", "@repository"},
|
"spring": {"@springboot", "@controller", "@service", "@repository"},
|
||||||
"hibernate": {"@entity", "@table", "@column", "hibernate"},
|
"hibernate": {"@entity", "@table", "@column", "hibernate"},
|
||||||
"jquery": {"$\\(", "jquery"},
|
"jquery": {"$\\(", "jquery"},
|
||||||
"bootstrap": {"bootstrap", "btn-", "col-", "row"},
|
"bootstrap": {"bootstrap", "btn-", "col-", "row"},
|
||||||
"docker": {"dockerfile", "docker-compose", "from.*:", "run.*"},
|
"docker": {"dockerfile", "docker-compose", "from.*:", "run.*"},
|
||||||
"kubernetes": {"apiversion:", "kind:", "metadata:", "spec:"},
|
"kubernetes": {"apiversion:", "kind:", "metadata:", "spec:"},
|
||||||
"terraform": {"\\.tf$", "resource \"", "provider \"", "terraform"},
|
"terraform": {"\\.tf$", "resource \"", "provider \"", "terraform"},
|
||||||
"ansible": {"\\.yml$", "hosts:", "tasks:", "playbook"},
|
"ansible": {"\\.yml$", "hosts:", "tasks:", "playbook"},
|
||||||
"jenkins": {"jenkinsfile", "pipeline", "stage", "steps"},
|
"jenkins": {"jenkinsfile", "pipeline", "stage", "steps"},
|
||||||
"git": {"\\.git", "git add", "git commit", "git push"},
|
"git": {"\\.git", "git add", "git commit", "git push"},
|
||||||
"mysql": {"mysql", "select.*from", "insert into", "create table"},
|
"mysql": {"mysql", "select.*from", "insert into", "create table"},
|
||||||
"postgresql": {"postgresql", "postgres", "psql"},
|
"postgresql": {"postgresql", "postgres", "psql"},
|
||||||
"mongodb": {"mongodb", "mongo", "find\\(", "insert\\("},
|
"mongodb": {"mongodb", "mongo", "find\\(", "insert\\("},
|
||||||
"redis": {"redis", "set.*", "get.*", "rpush"},
|
"redis": {"redis", "set.*", "get.*", "rpush"},
|
||||||
"elasticsearch": {"elasticsearch", "elastic", "query.*", "search.*"},
|
"elasticsearch": {"elasticsearch", "elastic", "query.*", "search.*"},
|
||||||
"graphql": {"graphql", "query.*{", "mutation.*{", "subscription.*{"},
|
"graphql": {"graphql", "query.*{", "mutation.*{", "subscription.*{"},
|
||||||
"grpc": {"grpc", "proto", "service.*rpc", "\\.proto$"},
|
"grpc": {"grpc", "proto", "service.*rpc", "\\.proto$"},
|
||||||
"websocket": {"websocket", "ws://", "wss://", "socket.io"},
|
"websocket": {"websocket", "ws://", "wss://", "socket.io"},
|
||||||
"jwt": {"jwt", "jsonwebtoken", "bearer.*token"},
|
"jwt": {"jwt", "jsonwebtoken", "bearer.*token"},
|
||||||
"oauth": {"oauth", "oauth2", "client_id", "client_secret"},
|
"oauth": {"oauth", "oauth2", "client_id", "client_secret"},
|
||||||
"ssl": {"ssl", "tls", "https", "certificate"},
|
"ssl": {"ssl", "tls", "https", "certificate"},
|
||||||
"encryption": {"encrypt", "decrypt", "bcrypt", "sha256"},
|
"encryption": {"encrypt", "decrypt", "bcrypt", "sha256"},
|
||||||
}
|
}
|
||||||
|
|
||||||
for tech, patterns := range frameworkPatterns {
|
for tech, patterns := range frameworkPatterns {
|
||||||
@@ -741,30 +741,58 @@ func CloneContextNode(node *slurpContext.ContextNode) *slurpContext.ContextNode
|
|||||||
}
|
}
|
||||||
|
|
||||||
clone := &slurpContext.ContextNode{
|
clone := &slurpContext.ContextNode{
|
||||||
Path: node.Path,
|
Path: node.Path,
|
||||||
Summary: node.Summary,
|
UCXLAddress: node.UCXLAddress,
|
||||||
Purpose: node.Purpose,
|
Summary: node.Summary,
|
||||||
Technologies: make([]string, len(node.Technologies)),
|
Purpose: node.Purpose,
|
||||||
Tags: make([]string, len(node.Tags)),
|
Technologies: make([]string, len(node.Technologies)),
|
||||||
Insights: make([]string, len(node.Insights)),
|
Tags: make([]string, len(node.Tags)),
|
||||||
CreatedAt: node.CreatedAt,
|
Insights: make([]string, len(node.Insights)),
|
||||||
UpdatedAt: node.UpdatedAt,
|
OverridesParent: node.OverridesParent,
|
||||||
ContextSpecificity: node.ContextSpecificity,
|
ContextSpecificity: node.ContextSpecificity,
|
||||||
RAGConfidence: node.RAGConfidence,
|
AppliesToChildren: node.AppliesToChildren,
|
||||||
ProcessedForRole: node.ProcessedForRole,
|
AppliesTo: node.AppliesTo,
|
||||||
|
GeneratedAt: node.GeneratedAt,
|
||||||
|
UpdatedAt: node.UpdatedAt,
|
||||||
|
CreatedBy: node.CreatedBy,
|
||||||
|
WhoUpdated: node.WhoUpdated,
|
||||||
|
RAGConfidence: node.RAGConfidence,
|
||||||
|
EncryptedFor: make([]string, len(node.EncryptedFor)),
|
||||||
|
AccessLevel: node.AccessLevel,
|
||||||
}
|
}
|
||||||
|
|
||||||
copy(clone.Technologies, node.Technologies)
|
copy(clone.Technologies, node.Technologies)
|
||||||
copy(clone.Tags, node.Tags)
|
copy(clone.Tags, node.Tags)
|
||||||
copy(clone.Insights, node.Insights)
|
copy(clone.Insights, node.Insights)
|
||||||
|
copy(clone.EncryptedFor, node.EncryptedFor)
|
||||||
|
|
||||||
if node.RoleSpecificInsights != nil {
|
if node.Parent != nil {
|
||||||
clone.RoleSpecificInsights = make([]*RoleSpecificInsight, len(node.RoleSpecificInsights))
|
parent := *node.Parent
|
||||||
copy(clone.RoleSpecificInsights, node.RoleSpecificInsights)
|
clone.Parent = &parent
|
||||||
|
}
|
||||||
|
if len(node.Children) > 0 {
|
||||||
|
clone.Children = make([]string, len(node.Children))
|
||||||
|
copy(clone.Children, node.Children)
|
||||||
|
}
|
||||||
|
if node.Language != nil {
|
||||||
|
language := *node.Language
|
||||||
|
clone.Language = &language
|
||||||
|
}
|
||||||
|
if node.Size != nil {
|
||||||
|
sz := *node.Size
|
||||||
|
clone.Size = &sz
|
||||||
|
}
|
||||||
|
if node.LastModified != nil {
|
||||||
|
lm := *node.LastModified
|
||||||
|
clone.LastModified = &lm
|
||||||
|
}
|
||||||
|
if node.ContentHash != nil {
|
||||||
|
hash := *node.ContentHash
|
||||||
|
clone.ContentHash = &hash
|
||||||
}
|
}
|
||||||
|
|
||||||
if node.Metadata != nil {
|
if node.Metadata != nil {
|
||||||
clone.Metadata = make(map[string]interface{})
|
clone.Metadata = make(map[string]interface{}, len(node.Metadata))
|
||||||
for k, v := range node.Metadata {
|
for k, v := range node.Metadata {
|
||||||
clone.Metadata[k] = v
|
clone.Metadata[k] = v
|
||||||
}
|
}
|
||||||
@@ -799,9 +827,11 @@ func MergeContextNodes(nodes ...*slurpContext.ContextNode) *slurpContext.Context
|
|||||||
// Merge insights
|
// Merge insights
|
||||||
merged.Insights = mergeStringSlices(merged.Insights, node.Insights)
|
merged.Insights = mergeStringSlices(merged.Insights, node.Insights)
|
||||||
|
|
||||||
// Use most recent timestamps
|
// Use most relevant timestamps
|
||||||
if node.CreatedAt.Before(merged.CreatedAt) {
|
if merged.GeneratedAt.IsZero() {
|
||||||
merged.CreatedAt = node.CreatedAt
|
merged.GeneratedAt = node.GeneratedAt
|
||||||
|
} else if !node.GeneratedAt.IsZero() && node.GeneratedAt.Before(merged.GeneratedAt) {
|
||||||
|
merged.GeneratedAt = node.GeneratedAt
|
||||||
}
|
}
|
||||||
if node.UpdatedAt.After(merged.UpdatedAt) {
|
if node.UpdatedAt.After(merged.UpdatedAt) {
|
||||||
merged.UpdatedAt = node.UpdatedAt
|
merged.UpdatedAt = node.UpdatedAt
|
||||||
|
|||||||
@@ -2,6 +2,9 @@ package slurp
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"chorus/pkg/crypto"
|
||||||
)
|
)
|
||||||
|
|
||||||
// Core interfaces for the SLURP contextual intelligence system.
|
// Core interfaces for the SLURP contextual intelligence system.
|
||||||
@@ -144,13 +147,13 @@ type TemporalGraph interface {
|
|||||||
// CreateInitialContext creates the first version of context.
|
// CreateInitialContext creates the first version of context.
|
||||||
// Establishes the starting point for temporal evolution tracking.
|
// Establishes the starting point for temporal evolution tracking.
|
||||||
CreateInitialContext(ctx context.Context, ucxlAddress string,
|
CreateInitialContext(ctx context.Context, ucxlAddress string,
|
||||||
contextData *ContextNode, creator string) (*TemporalNode, error)
|
contextData *ContextNode, creator string) (*TemporalNode, error)
|
||||||
|
|
||||||
// EvolveContext creates a new temporal version due to a decision.
|
// EvolveContext creates a new temporal version due to a decision.
|
||||||
// Records the decision that caused the change and updates the graph.
|
// Records the decision that caused the change and updates the graph.
|
||||||
EvolveContext(ctx context.Context, ucxlAddress string,
|
EvolveContext(ctx context.Context, ucxlAddress string,
|
||||||
newContext *ContextNode, reason ChangeReason,
|
newContext *ContextNode, reason ChangeReason,
|
||||||
decision *DecisionMetadata) (*TemporalNode, error)
|
decision *DecisionMetadata) (*TemporalNode, error)
|
||||||
|
|
||||||
// GetLatestVersion gets the most recent temporal node.
|
// GetLatestVersion gets the most recent temporal node.
|
||||||
GetLatestVersion(ctx context.Context, ucxlAddress string) (*TemporalNode, error)
|
GetLatestVersion(ctx context.Context, ucxlAddress string) (*TemporalNode, error)
|
||||||
@@ -158,7 +161,7 @@ type TemporalGraph interface {
|
|||||||
// GetVersionAtDecision gets context as it was at a specific decision point.
|
// GetVersionAtDecision gets context as it was at a specific decision point.
|
||||||
// Navigation based on decision hops, not chronological time.
|
// Navigation based on decision hops, not chronological time.
|
||||||
GetVersionAtDecision(ctx context.Context, ucxlAddress string,
|
GetVersionAtDecision(ctx context.Context, ucxlAddress string,
|
||||||
decisionHop int) (*TemporalNode, error)
|
decisionHop int) (*TemporalNode, error)
|
||||||
|
|
||||||
// GetEvolutionHistory gets complete evolution history.
|
// GetEvolutionHistory gets complete evolution history.
|
||||||
// Returns all temporal versions ordered by decision sequence.
|
// Returns all temporal versions ordered by decision sequence.
|
||||||
@@ -177,7 +180,7 @@ type TemporalGraph interface {
|
|||||||
// FindRelatedDecisions finds decisions within N decision hops.
|
// FindRelatedDecisions finds decisions within N decision hops.
|
||||||
// Explores the decision graph by conceptual distance, not time.
|
// Explores the decision graph by conceptual distance, not time.
|
||||||
FindRelatedDecisions(ctx context.Context, ucxlAddress string,
|
FindRelatedDecisions(ctx context.Context, ucxlAddress string,
|
||||||
maxHops int) ([]*DecisionPath, error)
|
maxHops int) ([]*DecisionPath, error)
|
||||||
|
|
||||||
// FindDecisionPath finds shortest decision path between addresses.
|
// FindDecisionPath finds shortest decision path between addresses.
|
||||||
// Returns the path of decisions connecting two contexts.
|
// Returns the path of decisions connecting two contexts.
|
||||||
@@ -205,12 +208,12 @@ type DecisionNavigator interface {
|
|||||||
// NavigateDecisionHops navigates by decision distance, not time.
|
// NavigateDecisionHops navigates by decision distance, not time.
|
||||||
// Moves through the decision graph by the specified number of hops.
|
// Moves through the decision graph by the specified number of hops.
|
||||||
NavigateDecisionHops(ctx context.Context, ucxlAddress string,
|
NavigateDecisionHops(ctx context.Context, ucxlAddress string,
|
||||||
hops int, direction NavigationDirection) (*TemporalNode, error)
|
hops int, direction NavigationDirection) (*TemporalNode, error)
|
||||||
|
|
||||||
// GetDecisionTimeline gets timeline ordered by decision sequence.
|
// GetDecisionTimeline gets timeline ordered by decision sequence.
|
||||||
// Returns decisions in the order they were made, not chronological order.
|
// Returns decisions in the order they were made, not chronological order.
|
||||||
GetDecisionTimeline(ctx context.Context, ucxlAddress string,
|
GetDecisionTimeline(ctx context.Context, ucxlAddress string,
|
||||||
includeRelated bool, maxHops int) (*DecisionTimeline, error)
|
includeRelated bool, maxHops int) (*DecisionTimeline, error)
|
||||||
|
|
||||||
// FindStaleContexts finds contexts that may be outdated.
|
// FindStaleContexts finds contexts that may be outdated.
|
||||||
// Identifies contexts that haven't been updated despite related changes.
|
// Identifies contexts that haven't been updated despite related changes.
|
||||||
@@ -235,7 +238,7 @@ type DistributedStorage interface {
|
|||||||
// Store stores context data in the DHT with encryption.
|
// Store stores context data in the DHT with encryption.
|
||||||
// Data is encrypted based on access level and role requirements.
|
// Data is encrypted based on access level and role requirements.
|
||||||
Store(ctx context.Context, key string, data interface{},
|
Store(ctx context.Context, key string, data interface{},
|
||||||
accessLevel crypto.AccessLevel) error
|
accessLevel crypto.AccessLevel) error
|
||||||
|
|
||||||
// Retrieve retrieves and decrypts context data.
|
// Retrieve retrieves and decrypts context data.
|
||||||
// Automatically handles decryption based on current role permissions.
|
// Automatically handles decryption based on current role permissions.
|
||||||
@@ -281,7 +284,7 @@ type EncryptedStorage interface {
|
|||||||
// StoreEncrypted stores data encrypted for specific roles.
|
// StoreEncrypted stores data encrypted for specific roles.
|
||||||
// Supports multi-role encryption for shared access.
|
// Supports multi-role encryption for shared access.
|
||||||
StoreEncrypted(ctx context.Context, key string, data interface{},
|
StoreEncrypted(ctx context.Context, key string, data interface{},
|
||||||
roles []string) error
|
roles []string) error
|
||||||
|
|
||||||
// RetrieveDecrypted retrieves and decrypts data using current role.
|
// RetrieveDecrypted retrieves and decrypts data using current role.
|
||||||
// Automatically selects appropriate decryption key.
|
// Automatically selects appropriate decryption key.
|
||||||
@@ -318,12 +321,12 @@ type ContextGenerator interface {
|
|||||||
// GenerateContext generates context for a path (requires admin role).
|
// GenerateContext generates context for a path (requires admin role).
|
||||||
// Analyzes content, structure, and patterns to create comprehensive context.
|
// Analyzes content, structure, and patterns to create comprehensive context.
|
||||||
GenerateContext(ctx context.Context, path string,
|
GenerateContext(ctx context.Context, path string,
|
||||||
options *GenerationOptions) (*ContextNode, error)
|
options *GenerationOptions) (*ContextNode, error)
|
||||||
|
|
||||||
// RegenerateHierarchy regenerates entire hierarchy (admin-only).
|
// RegenerateHierarchy regenerates entire hierarchy (admin-only).
|
||||||
// Rebuilds context hierarchy from scratch with improved analysis.
|
// Rebuilds context hierarchy from scratch with improved analysis.
|
||||||
RegenerateHierarchy(ctx context.Context, rootPath string,
|
RegenerateHierarchy(ctx context.Context, rootPath string,
|
||||||
options *GenerationOptions) (*HierarchyStats, error)
|
options *GenerationOptions) (*HierarchyStats, error)
|
||||||
|
|
||||||
// ValidateGeneration validates generated context quality.
|
// ValidateGeneration validates generated context quality.
|
||||||
// Ensures generated context meets quality and consistency standards.
|
// Ensures generated context meets quality and consistency standards.
|
||||||
@@ -336,12 +339,12 @@ type ContextGenerator interface {
|
|||||||
// GenerateBatch generates context for multiple paths efficiently.
|
// GenerateBatch generates context for multiple paths efficiently.
|
||||||
// Optimized for bulk generation operations.
|
// Optimized for bulk generation operations.
|
||||||
GenerateBatch(ctx context.Context, paths []string,
|
GenerateBatch(ctx context.Context, paths []string,
|
||||||
options *GenerationOptions) (map[string]*ContextNode, error)
|
options *GenerationOptions) (map[string]*ContextNode, error)
|
||||||
|
|
||||||
// ScheduleGeneration schedules background context generation.
|
// ScheduleGeneration schedules background context generation.
|
||||||
// Queues generation tasks for processing during low-activity periods.
|
// Queues generation tasks for processing during low-activity periods.
|
||||||
ScheduleGeneration(ctx context.Context, paths []string,
|
ScheduleGeneration(ctx context.Context, paths []string,
|
||||||
options *GenerationOptions, priority int) error
|
options *GenerationOptions, priority int) error
|
||||||
|
|
||||||
// GetGenerationStatus gets status of background generation tasks.
|
// GetGenerationStatus gets status of background generation tasks.
|
||||||
GetGenerationStatus(ctx context.Context) (*GenerationStatus, error)
|
GetGenerationStatus(ctx context.Context) (*GenerationStatus, error)
|
||||||
@@ -447,7 +450,7 @@ type QueryEngine interface {
|
|||||||
// TemporalQuery performs temporal-aware queries.
|
// TemporalQuery performs temporal-aware queries.
|
||||||
// Queries context as it existed at specific decision points.
|
// Queries context as it existed at specific decision points.
|
||||||
TemporalQuery(ctx context.Context, query *SearchQuery,
|
TemporalQuery(ctx context.Context, query *SearchQuery,
|
||||||
temporal *TemporalFilter) ([]*SearchResult, error)
|
temporal *TemporalFilter) ([]*SearchResult, error)
|
||||||
|
|
||||||
// FuzzySearch performs fuzzy text search.
|
// FuzzySearch performs fuzzy text search.
|
||||||
// Handles typos and approximate matching.
|
// Handles typos and approximate matching.
|
||||||
@@ -497,83 +500,81 @@ type HealthChecker interface {
|
|||||||
|
|
||||||
// Additional types needed by interfaces
|
// Additional types needed by interfaces
|
||||||
|
|
||||||
import "time"
|
|
||||||
|
|
||||||
type StorageStats struct {
|
type StorageStats struct {
|
||||||
TotalKeys int64 `json:"total_keys"`
|
TotalKeys int64 `json:"total_keys"`
|
||||||
TotalSize int64 `json:"total_size"`
|
TotalSize int64 `json:"total_size"`
|
||||||
IndexSize int64 `json:"index_size"`
|
IndexSize int64 `json:"index_size"`
|
||||||
CacheSize int64 `json:"cache_size"`
|
CacheSize int64 `json:"cache_size"`
|
||||||
ReplicationStatus string `json:"replication_status"`
|
ReplicationStatus string `json:"replication_status"`
|
||||||
LastSync time.Time `json:"last_sync"`
|
LastSync time.Time `json:"last_sync"`
|
||||||
SyncErrors int64 `json:"sync_errors"`
|
SyncErrors int64 `json:"sync_errors"`
|
||||||
AvailableSpace int64 `json:"available_space"`
|
AvailableSpace int64 `json:"available_space"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type GenerationStatus struct {
|
type GenerationStatus struct {
|
||||||
ActiveTasks int `json:"active_tasks"`
|
ActiveTasks int `json:"active_tasks"`
|
||||||
QueuedTasks int `json:"queued_tasks"`
|
QueuedTasks int `json:"queued_tasks"`
|
||||||
CompletedTasks int `json:"completed_tasks"`
|
CompletedTasks int `json:"completed_tasks"`
|
||||||
FailedTasks int `json:"failed_tasks"`
|
FailedTasks int `json:"failed_tasks"`
|
||||||
EstimatedCompletion time.Time `json:"estimated_completion"`
|
EstimatedCompletion time.Time `json:"estimated_completion"`
|
||||||
CurrentTask *GenerationTask `json:"current_task,omitempty"`
|
CurrentTask *GenerationTask `json:"current_task,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type GenerationTask struct {
|
type GenerationTask struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Path string `json:"path"`
|
Path string `json:"path"`
|
||||||
Status string `json:"status"`
|
Status string `json:"status"`
|
||||||
Progress float64 `json:"progress"`
|
Progress float64 `json:"progress"`
|
||||||
StartedAt time.Time `json:"started_at"`
|
StartedAt time.Time `json:"started_at"`
|
||||||
EstimatedCompletion time.Time `json:"estimated_completion"`
|
EstimatedCompletion time.Time `json:"estimated_completion"`
|
||||||
Error string `json:"error,omitempty"`
|
Error string `json:"error,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type TrendAnalysis struct {
|
type TrendAnalysis struct {
|
||||||
TimeRange time.Duration `json:"time_range"`
|
TimeRange time.Duration `json:"time_range"`
|
||||||
TotalChanges int `json:"total_changes"`
|
TotalChanges int `json:"total_changes"`
|
||||||
ChangeVelocity float64 `json:"change_velocity"`
|
ChangeVelocity float64 `json:"change_velocity"`
|
||||||
DominantReasons []ChangeReason `json:"dominant_reasons"`
|
DominantReasons []ChangeReason `json:"dominant_reasons"`
|
||||||
QualityTrend string `json:"quality_trend"`
|
QualityTrend string `json:"quality_trend"`
|
||||||
ConfidenceTrend string `json:"confidence_trend"`
|
ConfidenceTrend string `json:"confidence_trend"`
|
||||||
MostActiveAreas []string `json:"most_active_areas"`
|
MostActiveAreas []string `json:"most_active_areas"`
|
||||||
EmergingPatterns []*Pattern `json:"emerging_patterns"`
|
EmergingPatterns []*Pattern `json:"emerging_patterns"`
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"`
|
AnalyzedAt time.Time `json:"analyzed_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type ComparisonResult struct {
|
type ComparisonResult struct {
|
||||||
SimilarityScore float64 `json:"similarity_score"`
|
SimilarityScore float64 `json:"similarity_score"`
|
||||||
Differences []*Difference `json:"differences"`
|
Differences []*Difference `json:"differences"`
|
||||||
CommonElements []string `json:"common_elements"`
|
CommonElements []string `json:"common_elements"`
|
||||||
Recommendations []*Suggestion `json:"recommendations"`
|
Recommendations []*Suggestion `json:"recommendations"`
|
||||||
ComparedAt time.Time `json:"compared_at"`
|
ComparedAt time.Time `json:"compared_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type Difference struct {
|
type Difference struct {
|
||||||
Field string `json:"field"`
|
Field string `json:"field"`
|
||||||
Value1 interface{} `json:"value1"`
|
Value1 interface{} `json:"value1"`
|
||||||
Value2 interface{} `json:"value2"`
|
Value2 interface{} `json:"value2"`
|
||||||
DifferenceType string `json:"difference_type"`
|
DifferenceType string `json:"difference_type"`
|
||||||
Significance float64 `json:"significance"`
|
Significance float64 `json:"significance"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type ConsistencyIssue struct {
|
type ConsistencyIssue struct {
|
||||||
Type string `json:"type"`
|
Type string `json:"type"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
AffectedNodes []string `json:"affected_nodes"`
|
AffectedNodes []string `json:"affected_nodes"`
|
||||||
Severity string `json:"severity"`
|
Severity string `json:"severity"`
|
||||||
Suggestion string `json:"suggestion"`
|
Suggestion string `json:"suggestion"`
|
||||||
DetectedAt time.Time `json:"detected_at"`
|
DetectedAt time.Time `json:"detected_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type QueryStats struct {
|
type QueryStats struct {
|
||||||
TotalQueries int64 `json:"total_queries"`
|
TotalQueries int64 `json:"total_queries"`
|
||||||
AverageQueryTime time.Duration `json:"average_query_time"`
|
AverageQueryTime time.Duration `json:"average_query_time"`
|
||||||
CacheHitRate float64 `json:"cache_hit_rate"`
|
CacheHitRate float64 `json:"cache_hit_rate"`
|
||||||
IndexUsage map[string]int64 `json:"index_usage"`
|
IndexUsage map[string]int64 `json:"index_usage"`
|
||||||
PopularQueries []string `json:"popular_queries"`
|
PopularQueries []string `json:"popular_queries"`
|
||||||
SlowQueries []string `json:"slow_queries"`
|
SlowQueries []string `json:"slow_queries"`
|
||||||
ErrorRate float64 `json:"error_rate"`
|
ErrorRate float64 `json:"error_rate"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type CacheStats struct {
|
type CacheStats struct {
|
||||||
@@ -588,17 +589,17 @@ type CacheStats struct {
|
|||||||
}
|
}
|
||||||
|
|
||||||
type HealthStatus struct {
|
type HealthStatus struct {
|
||||||
Overall string `json:"overall"`
|
Overall string `json:"overall"`
|
||||||
Components map[string]*ComponentHealth `json:"components"`
|
Components map[string]*ComponentHealth `json:"components"`
|
||||||
CheckedAt time.Time `json:"checked_at"`
|
CheckedAt time.Time `json:"checked_at"`
|
||||||
Version string `json:"version"`
|
Version string `json:"version"`
|
||||||
Uptime time.Duration `json:"uptime"`
|
Uptime time.Duration `json:"uptime"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type ComponentHealth struct {
|
type ComponentHealth struct {
|
||||||
Status string `json:"status"`
|
Status string `json:"status"`
|
||||||
Message string `json:"message,omitempty"`
|
Message string `json:"message,omitempty"`
|
||||||
LastCheck time.Time `json:"last_check"`
|
LastCheck time.Time `json:"last_check"`
|
||||||
ResponseTime time.Duration `json:"response_time"`
|
ResponseTime time.Duration `json:"response_time"`
|
||||||
Metadata map[string]interface{} `json:"metadata,omitempty"`
|
Metadata map[string]interface{} `json:"metadata,omitempty"`
|
||||||
}
|
}
|
||||||
@@ -631,7 +631,7 @@ func (s *SLURP) GetTemporalEvolution(ctx context.Context, ucxlAddress string) ([
|
|||||||
return nil, fmt.Errorf("invalid UCXL address: %w", err)
|
return nil, fmt.Errorf("invalid UCXL address: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
return s.temporalGraph.GetEvolutionHistory(ctx, *parsed)
|
return s.temporalGraph.GetEvolutionHistory(ctx, parsed.String())
|
||||||
}
|
}
|
||||||
|
|
||||||
// NavigateDecisionHops navigates through the decision graph by hop distance.
|
// NavigateDecisionHops navigates through the decision graph by hop distance.
|
||||||
@@ -654,7 +654,7 @@ func (s *SLURP) NavigateDecisionHops(ctx context.Context, ucxlAddress string, ho
|
|||||||
}
|
}
|
||||||
|
|
||||||
if navigator, ok := s.temporalGraph.(DecisionNavigator); ok {
|
if navigator, ok := s.temporalGraph.(DecisionNavigator); ok {
|
||||||
return navigator.NavigateDecisionHops(ctx, *parsed, hops, direction)
|
return navigator.NavigateDecisionHops(ctx, parsed.String(), hops, direction)
|
||||||
}
|
}
|
||||||
|
|
||||||
return nil, fmt.Errorf("decision navigation not supported by temporal graph")
|
return nil, fmt.Errorf("decision navigation not supported by temporal graph")
|
||||||
@@ -1348,26 +1348,42 @@ func (s *SLURP) handleEvent(event *SLURPEvent) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// validateSLURPConfig validates SLURP configuration for consistency and correctness
|
// validateSLURPConfig normalises runtime tunables sourced from configuration.
|
||||||
func validateSLURPConfig(config *SLURPConfig) error {
|
func validateSLURPConfig(cfg *config.SlurpConfig) error {
|
||||||
if config.ContextResolution.MaxHierarchyDepth < 1 {
|
if cfg == nil {
|
||||||
return fmt.Errorf("max_hierarchy_depth must be at least 1")
|
return fmt.Errorf("slurp config is nil")
|
||||||
}
|
}
|
||||||
|
|
||||||
if config.ContextResolution.MinConfidenceThreshold < 0 || config.ContextResolution.MinConfidenceThreshold > 1 {
|
if cfg.Timeout <= 0 {
|
||||||
return fmt.Errorf("min_confidence_threshold must be between 0 and 1")
|
cfg.Timeout = 15 * time.Second
|
||||||
}
|
}
|
||||||
|
|
||||||
if config.TemporalAnalysis.MaxDecisionHops < 1 {
|
if cfg.RetryCount < 0 {
|
||||||
return fmt.Errorf("max_decision_hops must be at least 1")
|
cfg.RetryCount = 0
|
||||||
}
|
}
|
||||||
|
|
||||||
if config.TemporalAnalysis.StalenessThreshold < 0 || config.TemporalAnalysis.StalenessThreshold > 1 {
|
if cfg.RetryDelay <= 0 && cfg.RetryCount > 0 {
|
||||||
return fmt.Errorf("staleness_threshold must be between 0 and 1")
|
cfg.RetryDelay = 2 * time.Second
|
||||||
}
|
}
|
||||||
|
|
||||||
if config.Performance.MaxConcurrentResolutions < 1 {
|
if cfg.Performance.MaxConcurrentResolutions <= 0 {
|
||||||
return fmt.Errorf("max_concurrent_resolutions must be at least 1")
|
cfg.Performance.MaxConcurrentResolutions = 1
|
||||||
|
}
|
||||||
|
|
||||||
|
if cfg.Performance.MetricsCollectionInterval <= 0 {
|
||||||
|
cfg.Performance.MetricsCollectionInterval = time.Minute
|
||||||
|
}
|
||||||
|
|
||||||
|
if cfg.TemporalAnalysis.MaxDecisionHops <= 0 {
|
||||||
|
cfg.TemporalAnalysis.MaxDecisionHops = 1
|
||||||
|
}
|
||||||
|
|
||||||
|
if cfg.TemporalAnalysis.StalenessCheckInterval <= 0 {
|
||||||
|
cfg.TemporalAnalysis.StalenessCheckInterval = 5 * time.Minute
|
||||||
|
}
|
||||||
|
|
||||||
|
if cfg.TemporalAnalysis.StalenessThreshold < 0 || cfg.TemporalAnalysis.StalenessThreshold > 1 {
|
||||||
|
cfg.TemporalAnalysis.StalenessThreshold = 0.2
|
||||||
}
|
}
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
|
|||||||
@@ -164,6 +164,8 @@ func (bm *BackupManagerImpl) CreateBackup(
|
|||||||
Incremental: config.Incremental,
|
Incremental: config.Incremental,
|
||||||
ParentBackupID: config.ParentBackupID,
|
ParentBackupID: config.ParentBackupID,
|
||||||
Status: BackupStatusInProgress,
|
Status: BackupStatusInProgress,
|
||||||
|
Progress: 0,
|
||||||
|
ErrorMessage: "",
|
||||||
CreatedAt: time.Now(),
|
CreatedAt: time.Now(),
|
||||||
RetentionUntil: time.Now().Add(config.Retention),
|
RetentionUntil: time.Now().Add(config.Retention),
|
||||||
}
|
}
|
||||||
@@ -707,6 +709,7 @@ func (bm *BackupManagerImpl) validateFile(filePath string) error {
|
|||||||
func (bm *BackupManagerImpl) failBackup(job *BackupJob, backupInfo *BackupInfo, err error) {
|
func (bm *BackupManagerImpl) failBackup(job *BackupJob, backupInfo *BackupInfo, err error) {
|
||||||
bm.mu.Lock()
|
bm.mu.Lock()
|
||||||
backupInfo.Status = BackupStatusFailed
|
backupInfo.Status = BackupStatusFailed
|
||||||
|
backupInfo.Progress = 0
|
||||||
backupInfo.ErrorMessage = err.Error()
|
backupInfo.ErrorMessage = err.Error()
|
||||||
job.Error = err
|
job.Error = err
|
||||||
bm.mu.Unlock()
|
bm.mu.Unlock()
|
||||||
|
|||||||
@@ -3,18 +3,19 @@ package storage
|
|||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// BatchOperationsImpl provides efficient batch operations for context storage
|
// BatchOperationsImpl provides efficient batch operations for context storage
|
||||||
type BatchOperationsImpl struct {
|
type BatchOperationsImpl struct {
|
||||||
contextStore *ContextStoreImpl
|
contextStore *ContextStoreImpl
|
||||||
batchSize int
|
batchSize int
|
||||||
maxConcurrency int
|
maxConcurrency int
|
||||||
operationTimeout time.Duration
|
operationTimeout time.Duration
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -22,8 +23,8 @@ type BatchOperationsImpl struct {
|
|||||||
func NewBatchOperations(contextStore *ContextStoreImpl, batchSize, maxConcurrency int, timeout time.Duration) *BatchOperationsImpl {
|
func NewBatchOperations(contextStore *ContextStoreImpl, batchSize, maxConcurrency int, timeout time.Duration) *BatchOperationsImpl {
|
||||||
return &BatchOperationsImpl{
|
return &BatchOperationsImpl{
|
||||||
contextStore: contextStore,
|
contextStore: contextStore,
|
||||||
batchSize: batchSize,
|
batchSize: batchSize,
|
||||||
maxConcurrency: maxConcurrency,
|
maxConcurrency: maxConcurrency,
|
||||||
operationTimeout: timeout,
|
operationTimeout: timeout,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -4,7 +4,6 @@ import (
|
|||||||
"context"
|
"context"
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
"fmt"
|
"fmt"
|
||||||
"regexp"
|
|
||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
@@ -13,13 +12,13 @@ import (
|
|||||||
|
|
||||||
// CacheManagerImpl implements the CacheManager interface using Redis
|
// CacheManagerImpl implements the CacheManager interface using Redis
|
||||||
type CacheManagerImpl struct {
|
type CacheManagerImpl struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
client *redis.Client
|
client *redis.Client
|
||||||
stats *CacheStatistics
|
stats *CacheStatistics
|
||||||
policy *CachePolicy
|
policy *CachePolicy
|
||||||
prefix string
|
prefix string
|
||||||
nodeID string
|
nodeID string
|
||||||
warmupKeys map[string]bool
|
warmupKeys map[string]bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewCacheManager creates a new cache manager with Redis backend
|
// NewCacheManager creates a new cache manager with Redis backend
|
||||||
@@ -68,13 +67,13 @@ func NewCacheManager(redisAddr, nodeID string, policy *CachePolicy) (*CacheManag
|
|||||||
// DefaultCachePolicy returns default caching policy
|
// DefaultCachePolicy returns default caching policy
|
||||||
func DefaultCachePolicy() *CachePolicy {
|
func DefaultCachePolicy() *CachePolicy {
|
||||||
return &CachePolicy{
|
return &CachePolicy{
|
||||||
TTL: 24 * time.Hour,
|
TTL: 24 * time.Hour,
|
||||||
MaxSize: 1024 * 1024 * 1024, // 1GB
|
MaxSize: 1024 * 1024 * 1024, // 1GB
|
||||||
EvictionPolicy: "LRU",
|
EvictionPolicy: "LRU",
|
||||||
RefreshThreshold: 0.8, // Refresh when 80% of TTL elapsed
|
RefreshThreshold: 0.8, // Refresh when 80% of TTL elapsed
|
||||||
WarmupEnabled: true,
|
WarmupEnabled: true,
|
||||||
CompressEntries: true,
|
CompressEntries: true,
|
||||||
MaxEntrySize: 10 * 1024 * 1024, // 10MB
|
MaxEntrySize: 10 * 1024 * 1024, // 10MB
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -314,17 +313,17 @@ func (cm *CacheManagerImpl) SetCachePolicy(policy *CachePolicy) error {
|
|||||||
|
|
||||||
// CacheEntry represents a cached data entry with metadata
|
// CacheEntry represents a cached data entry with metadata
|
||||||
type CacheEntry struct {
|
type CacheEntry struct {
|
||||||
Key string `json:"key"`
|
Key string `json:"key"`
|
||||||
Data []byte `json:"data"`
|
Data []byte `json:"data"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
ExpiresAt time.Time `json:"expires_at"`
|
ExpiresAt time.Time `json:"expires_at"`
|
||||||
TTL time.Duration `json:"ttl"`
|
TTL time.Duration `json:"ttl"`
|
||||||
AccessCount int64 `json:"access_count"`
|
AccessCount int64 `json:"access_count"`
|
||||||
LastAccessedAt time.Time `json:"last_accessed_at"`
|
LastAccessedAt time.Time `json:"last_accessed_at"`
|
||||||
Compressed bool `json:"compressed"`
|
Compressed bool `json:"compressed"`
|
||||||
OriginalSize int64 `json:"original_size"`
|
OriginalSize int64 `json:"original_size"`
|
||||||
CompressedSize int64 `json:"compressed_size"`
|
CompressedSize int64 `json:"compressed_size"`
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// Helper methods
|
// Helper methods
|
||||||
|
|||||||
@@ -3,10 +3,8 @@ package storage
|
|||||||
import (
|
import (
|
||||||
"bytes"
|
"bytes"
|
||||||
"context"
|
"context"
|
||||||
"os"
|
|
||||||
"strings"
|
"strings"
|
||||||
"testing"
|
"testing"
|
||||||
"time"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
func TestLocalStorageCompression(t *testing.T) {
|
func TestLocalStorageCompression(t *testing.T) {
|
||||||
|
|||||||
@@ -2,71 +2,68 @@ package storage
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"encoding/json"
|
|
||||||
"fmt"
|
"fmt"
|
||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/crypto"
|
|
||||||
"chorus/pkg/dht"
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// ContextStoreImpl is the main implementation of the ContextStore interface
|
// ContextStoreImpl is the main implementation of the ContextStore interface
|
||||||
// It coordinates between local storage, distributed storage, encryption, caching, and indexing
|
// It coordinates between local storage, distributed storage, encryption, caching, and indexing
|
||||||
type ContextStoreImpl struct {
|
type ContextStoreImpl struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
localStorage LocalStorage
|
localStorage LocalStorage
|
||||||
distributedStorage DistributedStorage
|
distributedStorage DistributedStorage
|
||||||
encryptedStorage EncryptedStorage
|
encryptedStorage EncryptedStorage
|
||||||
cacheManager CacheManager
|
cacheManager CacheManager
|
||||||
indexManager IndexManager
|
indexManager IndexManager
|
||||||
backupManager BackupManager
|
backupManager BackupManager
|
||||||
eventNotifier EventNotifier
|
eventNotifier EventNotifier
|
||||||
|
|
||||||
// Configuration
|
// Configuration
|
||||||
nodeID string
|
nodeID string
|
||||||
options *ContextStoreOptions
|
options *ContextStoreOptions
|
||||||
|
|
||||||
// Statistics and monitoring
|
// Statistics and monitoring
|
||||||
statistics *StorageStatistics
|
statistics *StorageStatistics
|
||||||
metricsCollector *MetricsCollector
|
metricsCollector *MetricsCollector
|
||||||
|
|
||||||
// Background processes
|
// Background processes
|
||||||
stopCh chan struct{}
|
stopCh chan struct{}
|
||||||
syncTicker *time.Ticker
|
syncTicker *time.Ticker
|
||||||
compactionTicker *time.Ticker
|
compactionTicker *time.Ticker
|
||||||
cleanupTicker *time.Ticker
|
cleanupTicker *time.Ticker
|
||||||
}
|
}
|
||||||
|
|
||||||
// ContextStoreOptions configures the context store behavior
|
// ContextStoreOptions configures the context store behavior
|
||||||
type ContextStoreOptions struct {
|
type ContextStoreOptions struct {
|
||||||
// Storage configuration
|
// Storage configuration
|
||||||
PreferLocal bool `json:"prefer_local"`
|
PreferLocal bool `json:"prefer_local"`
|
||||||
AutoReplicate bool `json:"auto_replicate"`
|
AutoReplicate bool `json:"auto_replicate"`
|
||||||
DefaultReplicas int `json:"default_replicas"`
|
DefaultReplicas int `json:"default_replicas"`
|
||||||
EncryptionEnabled bool `json:"encryption_enabled"`
|
EncryptionEnabled bool `json:"encryption_enabled"`
|
||||||
CompressionEnabled bool `json:"compression_enabled"`
|
CompressionEnabled bool `json:"compression_enabled"`
|
||||||
|
|
||||||
// Caching configuration
|
// Caching configuration
|
||||||
CachingEnabled bool `json:"caching_enabled"`
|
CachingEnabled bool `json:"caching_enabled"`
|
||||||
CacheTTL time.Duration `json:"cache_ttl"`
|
CacheTTL time.Duration `json:"cache_ttl"`
|
||||||
CacheSize int64 `json:"cache_size"`
|
CacheSize int64 `json:"cache_size"`
|
||||||
|
|
||||||
// Indexing configuration
|
// Indexing configuration
|
||||||
IndexingEnabled bool `json:"indexing_enabled"`
|
IndexingEnabled bool `json:"indexing_enabled"`
|
||||||
IndexRefreshInterval time.Duration `json:"index_refresh_interval"`
|
IndexRefreshInterval time.Duration `json:"index_refresh_interval"`
|
||||||
|
|
||||||
// Background processes
|
// Background processes
|
||||||
SyncInterval time.Duration `json:"sync_interval"`
|
SyncInterval time.Duration `json:"sync_interval"`
|
||||||
CompactionInterval time.Duration `json:"compaction_interval"`
|
CompactionInterval time.Duration `json:"compaction_interval"`
|
||||||
CleanupInterval time.Duration `json:"cleanup_interval"`
|
CleanupInterval time.Duration `json:"cleanup_interval"`
|
||||||
|
|
||||||
// Performance tuning
|
// Performance tuning
|
||||||
BatchSize int `json:"batch_size"`
|
BatchSize int `json:"batch_size"`
|
||||||
MaxConcurrentOps int `json:"max_concurrent_ops"`
|
MaxConcurrentOps int `json:"max_concurrent_ops"`
|
||||||
OperationTimeout time.Duration `json:"operation_timeout"`
|
OperationTimeout time.Duration `json:"operation_timeout"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// MetricsCollector collects and aggregates storage metrics
|
// MetricsCollector collects and aggregates storage metrics
|
||||||
@@ -87,16 +84,16 @@ func DefaultContextStoreOptions() *ContextStoreOptions {
|
|||||||
EncryptionEnabled: true,
|
EncryptionEnabled: true,
|
||||||
CompressionEnabled: true,
|
CompressionEnabled: true,
|
||||||
CachingEnabled: true,
|
CachingEnabled: true,
|
||||||
CacheTTL: 24 * time.Hour,
|
CacheTTL: 24 * time.Hour,
|
||||||
CacheSize: 1024 * 1024 * 1024, // 1GB
|
CacheSize: 1024 * 1024 * 1024, // 1GB
|
||||||
IndexingEnabled: true,
|
IndexingEnabled: true,
|
||||||
IndexRefreshInterval: 5 * time.Minute,
|
IndexRefreshInterval: 5 * time.Minute,
|
||||||
SyncInterval: 10 * time.Minute,
|
SyncInterval: 10 * time.Minute,
|
||||||
CompactionInterval: 24 * time.Hour,
|
CompactionInterval: 24 * time.Hour,
|
||||||
CleanupInterval: 1 * time.Hour,
|
CleanupInterval: 1 * time.Hour,
|
||||||
BatchSize: 100,
|
BatchSize: 100,
|
||||||
MaxConcurrentOps: 10,
|
MaxConcurrentOps: 10,
|
||||||
OperationTimeout: 30 * time.Second,
|
OperationTimeout: 30 * time.Second,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -124,8 +121,8 @@ func NewContextStore(
|
|||||||
indexManager: indexManager,
|
indexManager: indexManager,
|
||||||
backupManager: backupManager,
|
backupManager: backupManager,
|
||||||
eventNotifier: eventNotifier,
|
eventNotifier: eventNotifier,
|
||||||
nodeID: nodeID,
|
nodeID: nodeID,
|
||||||
options: options,
|
options: options,
|
||||||
statistics: &StorageStatistics{
|
statistics: &StorageStatistics{
|
||||||
LastSyncTime: time.Now(),
|
LastSyncTime: time.Now(),
|
||||||
},
|
},
|
||||||
@@ -174,11 +171,11 @@ func (cs *ContextStoreImpl) StoreContext(
|
|||||||
} else {
|
} else {
|
||||||
// Store unencrypted
|
// Store unencrypted
|
||||||
storeOptions := &StoreOptions{
|
storeOptions := &StoreOptions{
|
||||||
Encrypt: false,
|
Encrypt: false,
|
||||||
Replicate: cs.options.AutoReplicate,
|
Replicate: cs.options.AutoReplicate,
|
||||||
Index: cs.options.IndexingEnabled,
|
Index: cs.options.IndexingEnabled,
|
||||||
Cache: cs.options.CachingEnabled,
|
Cache: cs.options.CachingEnabled,
|
||||||
Compress: cs.options.CompressionEnabled,
|
Compress: cs.options.CompressionEnabled,
|
||||||
}
|
}
|
||||||
storeErr = cs.localStorage.Store(ctx, storageKey, node, storeOptions)
|
storeErr = cs.localStorage.Store(ctx, storageKey, node, storeOptions)
|
||||||
}
|
}
|
||||||
@@ -216,8 +213,8 @@ func (cs *ContextStoreImpl) StoreContext(
|
|||||||
distOptions := &DistributedStoreOptions{
|
distOptions := &DistributedStoreOptions{
|
||||||
ReplicationFactor: cs.options.DefaultReplicas,
|
ReplicationFactor: cs.options.DefaultReplicas,
|
||||||
ConsistencyLevel: ConsistencyQuorum,
|
ConsistencyLevel: ConsistencyQuorum,
|
||||||
Timeout: cs.options.OperationTimeout,
|
Timeout: cs.options.OperationTimeout,
|
||||||
SyncMode: SyncAsync,
|
SyncMode: SyncAsync,
|
||||||
}
|
}
|
||||||
|
|
||||||
if err := cs.distributedStorage.Store(replicateCtx, storageKey, node, distOptions); err != nil {
|
if err := cs.distributedStorage.Store(replicateCtx, storageKey, node, distOptions); err != nil {
|
||||||
@@ -729,7 +726,7 @@ func (cs *ContextStoreImpl) Sync(ctx context.Context) error {
|
|||||||
Type: EventSynced,
|
Type: EventSynced,
|
||||||
Timestamp: time.Now(),
|
Timestamp: time.Now(),
|
||||||
Metadata: map[string]interface{}{
|
Metadata: map[string]interface{}{
|
||||||
"node_id": cs.nodeID,
|
"node_id": cs.nodeID,
|
||||||
"sync_time": time.Since(start),
|
"sync_time": time.Since(start),
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -8,69 +8,68 @@ import (
|
|||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/dht"
|
"chorus/pkg/dht"
|
||||||
"chorus/pkg/types"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
// DistributedStorageImpl implements the DistributedStorage interface
|
// DistributedStorageImpl implements the DistributedStorage interface
|
||||||
type DistributedStorageImpl struct {
|
type DistributedStorageImpl struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
dht dht.DHT
|
dht dht.DHT
|
||||||
nodeID string
|
nodeID string
|
||||||
metrics *DistributedStorageStats
|
metrics *DistributedStorageStats
|
||||||
replicas map[string][]string // key -> replica node IDs
|
replicas map[string][]string // key -> replica node IDs
|
||||||
heartbeat *HeartbeatManager
|
heartbeat *HeartbeatManager
|
||||||
consensus *ConsensusManager
|
consensus *ConsensusManager
|
||||||
options *DistributedStorageOptions
|
options *DistributedStorageOptions
|
||||||
}
|
}
|
||||||
|
|
||||||
// HeartbeatManager manages node heartbeats and health
|
// HeartbeatManager manages node heartbeats and health
|
||||||
type HeartbeatManager struct {
|
type HeartbeatManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
nodes map[string]*NodeHealth
|
nodes map[string]*NodeHealth
|
||||||
heartbeatInterval time.Duration
|
heartbeatInterval time.Duration
|
||||||
timeoutThreshold time.Duration
|
timeoutThreshold time.Duration
|
||||||
stopCh chan struct{}
|
stopCh chan struct{}
|
||||||
}
|
}
|
||||||
|
|
||||||
// NodeHealth tracks the health of a distributed storage node
|
// NodeHealth tracks the health of a distributed storage node
|
||||||
type NodeHealth struct {
|
type NodeHealth struct {
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
LastSeen time.Time `json:"last_seen"`
|
LastSeen time.Time `json:"last_seen"`
|
||||||
Latency time.Duration `json:"latency"`
|
Latency time.Duration `json:"latency"`
|
||||||
IsActive bool `json:"is_active"`
|
IsActive bool `json:"is_active"`
|
||||||
FailureCount int `json:"failure_count"`
|
FailureCount int `json:"failure_count"`
|
||||||
Load float64 `json:"load"`
|
Load float64 `json:"load"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConsensusManager handles consensus operations for distributed storage
|
// ConsensusManager handles consensus operations for distributed storage
|
||||||
type ConsensusManager struct {
|
type ConsensusManager struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
pendingOps map[string]*ConsensusOperation
|
pendingOps map[string]*ConsensusOperation
|
||||||
votingTimeout time.Duration
|
votingTimeout time.Duration
|
||||||
quorumSize int
|
quorumSize int
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConsensusOperation represents a distributed operation requiring consensus
|
// ConsensusOperation represents a distributed operation requiring consensus
|
||||||
type ConsensusOperation struct {
|
type ConsensusOperation struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Type string `json:"type"`
|
Type string `json:"type"`
|
||||||
Key string `json:"key"`
|
Key string `json:"key"`
|
||||||
Data interface{} `json:"data"`
|
Data interface{} `json:"data"`
|
||||||
Initiator string `json:"initiator"`
|
Initiator string `json:"initiator"`
|
||||||
Votes map[string]bool `json:"votes"`
|
Votes map[string]bool `json:"votes"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
Status ConsensusStatus `json:"status"`
|
Status ConsensusStatus `json:"status"`
|
||||||
Callback func(bool, error) `json:"-"`
|
Callback func(bool, error) `json:"-"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ConsensusStatus represents the status of a consensus operation
|
// ConsensusStatus represents the status of a consensus operation
|
||||||
type ConsensusStatus string
|
type ConsensusStatus string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ConsensusPending ConsensusStatus = "pending"
|
ConsensusPending ConsensusStatus = "pending"
|
||||||
ConsensusApproved ConsensusStatus = "approved"
|
ConsensusApproved ConsensusStatus = "approved"
|
||||||
ConsensusRejected ConsensusStatus = "rejected"
|
ConsensusRejected ConsensusStatus = "rejected"
|
||||||
ConsensusTimeout ConsensusStatus = "timeout"
|
ConsensusTimeout ConsensusStatus = "timeout"
|
||||||
)
|
)
|
||||||
|
|
||||||
// NewDistributedStorage creates a new distributed storage implementation
|
// NewDistributedStorage creates a new distributed storage implementation
|
||||||
@@ -83,9 +82,9 @@ func NewDistributedStorage(
|
|||||||
options = &DistributedStoreOptions{
|
options = &DistributedStoreOptions{
|
||||||
ReplicationFactor: 3,
|
ReplicationFactor: 3,
|
||||||
ConsistencyLevel: ConsistencyQuorum,
|
ConsistencyLevel: ConsistencyQuorum,
|
||||||
Timeout: 30 * time.Second,
|
Timeout: 30 * time.Second,
|
||||||
PreferLocal: true,
|
PreferLocal: true,
|
||||||
SyncMode: SyncAsync,
|
SyncMode: SyncAsync,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -98,10 +97,10 @@ func NewDistributedStorage(
|
|||||||
LastRebalance: time.Now(),
|
LastRebalance: time.Now(),
|
||||||
},
|
},
|
||||||
heartbeat: &HeartbeatManager{
|
heartbeat: &HeartbeatManager{
|
||||||
nodes: make(map[string]*NodeHealth),
|
nodes: make(map[string]*NodeHealth),
|
||||||
heartbeatInterval: 30 * time.Second,
|
heartbeatInterval: 30 * time.Second,
|
||||||
timeoutThreshold: 90 * time.Second,
|
timeoutThreshold: 90 * time.Second,
|
||||||
stopCh: make(chan struct{}),
|
stopCh: make(chan struct{}),
|
||||||
},
|
},
|
||||||
consensus: &ConsensusManager{
|
consensus: &ConsensusManager{
|
||||||
pendingOps: make(map[string]*ConsensusOperation),
|
pendingOps: make(map[string]*ConsensusOperation),
|
||||||
@@ -125,8 +124,6 @@ func (ds *DistributedStorageImpl) Store(
|
|||||||
data interface{},
|
data interface{},
|
||||||
options *DistributedStoreOptions,
|
options *DistributedStoreOptions,
|
||||||
) error {
|
) error {
|
||||||
start := time.Now()
|
|
||||||
|
|
||||||
if options == nil {
|
if options == nil {
|
||||||
options = ds.options
|
options = ds.options
|
||||||
}
|
}
|
||||||
@@ -179,7 +176,7 @@ func (ds *DistributedStorageImpl) Retrieve(
|
|||||||
|
|
||||||
// Try local first if prefer local is enabled
|
// Try local first if prefer local is enabled
|
||||||
if ds.options.PreferLocal {
|
if ds.options.PreferLocal {
|
||||||
if localData, err := ds.dht.Get(key); err == nil {
|
if localData, err := ds.dht.GetValue(ctx, key); err == nil {
|
||||||
return ds.deserializeEntry(localData)
|
return ds.deserializeEntry(localData)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -226,25 +223,9 @@ func (ds *DistributedStorageImpl) Exists(
|
|||||||
ctx context.Context,
|
ctx context.Context,
|
||||||
key string,
|
key string,
|
||||||
) (bool, error) {
|
) (bool, error) {
|
||||||
// Try local first
|
if _, err := ds.dht.GetValue(ctx, key); err == nil {
|
||||||
if ds.options.PreferLocal {
|
return true, nil
|
||||||
if exists, err := ds.dht.Exists(key); err == nil {
|
|
||||||
return exists, nil
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Check replicas
|
|
||||||
replicas, err := ds.getReplicationNodes(key)
|
|
||||||
if err != nil {
|
|
||||||
return false, fmt.Errorf("failed to get replication nodes: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, nodeID := range replicas {
|
|
||||||
if exists, err := ds.checkExistsOnNode(ctx, nodeID, key); err == nil && exists {
|
|
||||||
return true, nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return false, nil
|
return false, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -306,10 +287,7 @@ func (ds *DistributedStorageImpl) FindReplicas(
|
|||||||
|
|
||||||
// Sync synchronizes with other DHT nodes
|
// Sync synchronizes with other DHT nodes
|
||||||
func (ds *DistributedStorageImpl) Sync(ctx context.Context) error {
|
func (ds *DistributedStorageImpl) Sync(ctx context.Context) error {
|
||||||
start := time.Now()
|
ds.metrics.LastRebalance = time.Now()
|
||||||
defer func() {
|
|
||||||
ds.metrics.LastRebalance = time.Now()
|
|
||||||
}()
|
|
||||||
|
|
||||||
// Get list of active nodes
|
// Get list of active nodes
|
||||||
activeNodes := ds.heartbeat.getActiveNodes()
|
activeNodes := ds.heartbeat.getActiveNodes()
|
||||||
@@ -346,7 +324,7 @@ func (ds *DistributedStorageImpl) GetDistributedStats() (*DistributedStorageStat
|
|||||||
healthyReplicas := int64(0)
|
healthyReplicas := int64(0)
|
||||||
underReplicated := int64(0)
|
underReplicated := int64(0)
|
||||||
|
|
||||||
for key, replicas := range ds.replicas {
|
for _, replicas := range ds.replicas {
|
||||||
totalReplicas += int64(len(replicas))
|
totalReplicas += int64(len(replicas))
|
||||||
healthy := 0
|
healthy := 0
|
||||||
for _, nodeID := range replicas {
|
for _, nodeID := range replicas {
|
||||||
@@ -371,14 +349,14 @@ func (ds *DistributedStorageImpl) GetDistributedStats() (*DistributedStorageStat
|
|||||||
|
|
||||||
// DistributedEntry represents a distributed storage entry
|
// DistributedEntry represents a distributed storage entry
|
||||||
type DistributedEntry struct {
|
type DistributedEntry struct {
|
||||||
Key string `json:"key"`
|
Key string `json:"key"`
|
||||||
Data []byte `json:"data"`
|
Data []byte `json:"data"`
|
||||||
ReplicationFactor int `json:"replication_factor"`
|
ReplicationFactor int `json:"replication_factor"`
|
||||||
ConsistencyLevel ConsistencyLevel `json:"consistency_level"`
|
ConsistencyLevel ConsistencyLevel `json:"consistency_level"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
UpdatedAt time.Time `json:"updated_at"`
|
UpdatedAt time.Time `json:"updated_at"`
|
||||||
Version int64 `json:"version"`
|
Version int64 `json:"version"`
|
||||||
Checksum string `json:"checksum"`
|
Checksum string `json:"checksum"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// Helper methods implementation
|
// Helper methods implementation
|
||||||
@@ -405,13 +383,13 @@ func (ds *DistributedStorageImpl) selectReplicationNodes(key string, replication
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *DistributedEntry, nodes []string) error {
|
func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *DistributedEntry, nodes []string) error {
|
||||||
// Store asynchronously on all nodes
|
// Store asynchronously on all nodes for SEC-SLURP-1.1a replication policy
|
||||||
errCh := make(chan error, len(nodes))
|
errCh := make(chan error, len(nodes))
|
||||||
|
|
||||||
for _, nodeID := range nodes {
|
for _, nodeID := range nodes {
|
||||||
go func(node string) {
|
go func(node string) {
|
||||||
err := ds.storeOnNode(ctx, node, entry)
|
err := ds.storeOnNode(ctx, node, entry)
|
||||||
errorCh <- err
|
errCh <- err
|
||||||
}(nodeID)
|
}(nodeID)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -445,13 +423,13 @@ func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *Dist
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *DistributedEntry, nodes []string) error {
|
func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *DistributedEntry, nodes []string) error {
|
||||||
// Store synchronously on all nodes
|
// Store synchronously on all nodes per SEC-SLURP-1.1a durability target
|
||||||
errCh := make(chan error, len(nodes))
|
errCh := make(chan error, len(nodes))
|
||||||
|
|
||||||
for _, nodeID := range nodes {
|
for _, nodeID := range nodes {
|
||||||
go func(node string) {
|
go func(node string) {
|
||||||
err := ds.storeOnNode(ctx, node, entry)
|
err := ds.storeOnNode(ctx, node, entry)
|
||||||
errorCh <- err
|
errCh <- err
|
||||||
}(nodeID)
|
}(nodeID)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -476,14 +454,14 @@ func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *Distri
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (ds *DistributedStorageImpl) storeQuorum(ctx context.Context, entry *DistributedEntry, nodes []string) error {
|
func (ds *DistributedStorageImpl) storeQuorum(ctx context.Context, entry *DistributedEntry, nodes []string) error {
|
||||||
// Store on quorum of nodes
|
// Store on quorum of nodes per SEC-SLURP-1.1a availability guardrail
|
||||||
quorumSize := (len(nodes) / 2) + 1
|
quorumSize := (len(nodes) / 2) + 1
|
||||||
errCh := make(chan error, len(nodes))
|
errCh := make(chan error, len(nodes))
|
||||||
|
|
||||||
for _, nodeID := range nodes {
|
for _, nodeID := range nodes {
|
||||||
go func(node string) {
|
go func(node string) {
|
||||||
err := ds.storeOnNode(ctx, node, entry)
|
err := ds.storeOnNode(ctx, node, entry)
|
||||||
errorCh <- err
|
errCh <- err
|
||||||
}(nodeID)
|
}(nodeID)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -9,7 +9,6 @@ import (
|
|||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/crypto"
|
"chorus/pkg/crypto"
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -19,25 +18,25 @@ type EncryptedStorageImpl struct {
|
|||||||
crypto crypto.RoleCrypto
|
crypto crypto.RoleCrypto
|
||||||
localStorage LocalStorage
|
localStorage LocalStorage
|
||||||
keyManager crypto.KeyManager
|
keyManager crypto.KeyManager
|
||||||
accessControl crypto.AccessController
|
accessControl crypto.StorageAccessController
|
||||||
auditLogger crypto.AuditLogger
|
auditLogger crypto.StorageAuditLogger
|
||||||
metrics *EncryptionMetrics
|
metrics *EncryptionMetrics
|
||||||
}
|
}
|
||||||
|
|
||||||
// EncryptionMetrics tracks encryption-related metrics
|
// EncryptionMetrics tracks encryption-related metrics
|
||||||
type EncryptionMetrics struct {
|
type EncryptionMetrics struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
EncryptOperations int64
|
EncryptOperations int64
|
||||||
DecryptOperations int64
|
DecryptOperations int64
|
||||||
KeyRotations int64
|
KeyRotations int64
|
||||||
AccessDenials int64
|
AccessDenials int64
|
||||||
EncryptionErrors int64
|
EncryptionErrors int64
|
||||||
DecryptionErrors int64
|
DecryptionErrors int64
|
||||||
LastKeyRotation time.Time
|
LastKeyRotation time.Time
|
||||||
AverageEncryptTime time.Duration
|
AverageEncryptTime time.Duration
|
||||||
AverageDecryptTime time.Duration
|
AverageDecryptTime time.Duration
|
||||||
ActiveEncryptionKeys int
|
ActiveEncryptionKeys int
|
||||||
ExpiredKeys int
|
ExpiredKeys int
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewEncryptedStorage creates a new encrypted storage implementation
|
// NewEncryptedStorage creates a new encrypted storage implementation
|
||||||
@@ -45,8 +44,8 @@ func NewEncryptedStorage(
|
|||||||
crypto crypto.RoleCrypto,
|
crypto crypto.RoleCrypto,
|
||||||
localStorage LocalStorage,
|
localStorage LocalStorage,
|
||||||
keyManager crypto.KeyManager,
|
keyManager crypto.KeyManager,
|
||||||
accessControl crypto.AccessController,
|
accessControl crypto.StorageAccessController,
|
||||||
auditLogger crypto.AuditLogger,
|
auditLogger crypto.StorageAuditLogger,
|
||||||
) *EncryptedStorageImpl {
|
) *EncryptedStorageImpl {
|
||||||
return &EncryptedStorageImpl{
|
return &EncryptedStorageImpl{
|
||||||
crypto: crypto,
|
crypto: crypto,
|
||||||
@@ -286,12 +285,11 @@ func (es *EncryptedStorageImpl) GetAccessRoles(
|
|||||||
return roles, nil
|
return roles, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// RotateKeys rotates encryption keys
|
// RotateKeys rotates encryption keys in line with SEC-SLURP-1.1 retention constraints
|
||||||
func (es *EncryptedStorageImpl) RotateKeys(
|
func (es *EncryptedStorageImpl) RotateKeys(
|
||||||
ctx context.Context,
|
ctx context.Context,
|
||||||
maxAge time.Duration,
|
maxAge time.Duration,
|
||||||
) error {
|
) error {
|
||||||
start := time.Now()
|
|
||||||
defer func() {
|
defer func() {
|
||||||
es.metrics.mu.Lock()
|
es.metrics.mu.Lock()
|
||||||
es.metrics.KeyRotations++
|
es.metrics.KeyRotations++
|
||||||
|
|||||||
@@ -9,22 +9,23 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
"github.com/blevesearch/bleve/v2"
|
"github.com/blevesearch/bleve/v2"
|
||||||
"github.com/blevesearch/bleve/v2/analysis/analyzer/standard"
|
"github.com/blevesearch/bleve/v2/analysis/analyzer/standard"
|
||||||
"github.com/blevesearch/bleve/v2/analysis/lang/en"
|
"github.com/blevesearch/bleve/v2/analysis/lang/en"
|
||||||
"github.com/blevesearch/bleve/v2/mapping"
|
"github.com/blevesearch/bleve/v2/mapping"
|
||||||
"chorus/pkg/ucxl"
|
"github.com/blevesearch/bleve/v2/search/query"
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
// IndexManagerImpl implements the IndexManager interface using Bleve
|
// IndexManagerImpl implements the IndexManager interface using Bleve
|
||||||
type IndexManagerImpl struct {
|
type IndexManagerImpl struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
indexes map[string]bleve.Index
|
indexes map[string]bleve.Index
|
||||||
stats map[string]*IndexStatistics
|
stats map[string]*IndexStatistics
|
||||||
basePath string
|
basePath string
|
||||||
nodeID string
|
nodeID string
|
||||||
options *IndexManagerOptions
|
options *IndexManagerOptions
|
||||||
}
|
}
|
||||||
|
|
||||||
// IndexManagerOptions configures index manager behavior
|
// IndexManagerOptions configures index manager behavior
|
||||||
@@ -60,11 +61,11 @@ func NewIndexManager(basePath, nodeID string, options *IndexManagerOptions) (*In
|
|||||||
}
|
}
|
||||||
|
|
||||||
im := &IndexManagerImpl{
|
im := &IndexManagerImpl{
|
||||||
indexes: make(map[string]bleve.Index),
|
indexes: make(map[string]bleve.Index),
|
||||||
stats: make(map[string]*IndexStatistics),
|
stats: make(map[string]*IndexStatistics),
|
||||||
basePath: basePath,
|
basePath: basePath,
|
||||||
nodeID: nodeID,
|
nodeID: nodeID,
|
||||||
options: options,
|
options: options,
|
||||||
}
|
}
|
||||||
|
|
||||||
// Start background optimization if enabled
|
// Start background optimization if enabled
|
||||||
@@ -432,31 +433,31 @@ func (im *IndexManagerImpl) createIndexDocument(data interface{}) (map[string]in
|
|||||||
return doc, nil
|
return doc, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.SearchRequest, error) {
|
func (im *IndexManagerImpl) buildSearchRequest(searchQuery *SearchQuery) (*bleve.SearchRequest, error) {
|
||||||
// Build Bleve search request from our search query
|
// Build Bleve search request from our search query (SEC-SLURP-1.1 search path)
|
||||||
var bleveQuery bleve.Query
|
var bleveQuery query.Query
|
||||||
|
|
||||||
if query.Query == "" {
|
if searchQuery.Query == "" {
|
||||||
// Match all query
|
// Match all query
|
||||||
bleveQuery = bleve.NewMatchAllQuery()
|
bleveQuery = bleve.NewMatchAllQuery()
|
||||||
} else {
|
} else {
|
||||||
// Text search query
|
// Text search query
|
||||||
if query.FuzzyMatch {
|
if searchQuery.FuzzyMatch {
|
||||||
// Use fuzzy query
|
// Use fuzzy query
|
||||||
bleveQuery = bleve.NewFuzzyQuery(query.Query)
|
bleveQuery = bleve.NewFuzzyQuery(searchQuery.Query)
|
||||||
} else {
|
} else {
|
||||||
// Use match query for better scoring
|
// Use match query for better scoring
|
||||||
bleveQuery = bleve.NewMatchQuery(query.Query)
|
bleveQuery = bleve.NewMatchQuery(searchQuery.Query)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// Add filters
|
// Add filters
|
||||||
var conjuncts []bleve.Query
|
var conjuncts []query.Query
|
||||||
conjuncts = append(conjuncts, bleveQuery)
|
conjuncts = append(conjuncts, bleveQuery)
|
||||||
|
|
||||||
// Technology filters
|
// Technology filters
|
||||||
if len(query.Technologies) > 0 {
|
if len(searchQuery.Technologies) > 0 {
|
||||||
for _, tech := range query.Technologies {
|
for _, tech := range searchQuery.Technologies {
|
||||||
techQuery := bleve.NewTermQuery(tech)
|
techQuery := bleve.NewTermQuery(tech)
|
||||||
techQuery.SetField("technologies_facet")
|
techQuery.SetField("technologies_facet")
|
||||||
conjuncts = append(conjuncts, techQuery)
|
conjuncts = append(conjuncts, techQuery)
|
||||||
@@ -464,8 +465,8 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Tag filters
|
// Tag filters
|
||||||
if len(query.Tags) > 0 {
|
if len(searchQuery.Tags) > 0 {
|
||||||
for _, tag := range query.Tags {
|
for _, tag := range searchQuery.Tags {
|
||||||
tagQuery := bleve.NewTermQuery(tag)
|
tagQuery := bleve.NewTermQuery(tag)
|
||||||
tagQuery.SetField("tags_facet")
|
tagQuery.SetField("tags_facet")
|
||||||
conjuncts = append(conjuncts, tagQuery)
|
conjuncts = append(conjuncts, tagQuery)
|
||||||
@@ -481,18 +482,18 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
|
|||||||
searchRequest := bleve.NewSearchRequest(bleveQuery)
|
searchRequest := bleve.NewSearchRequest(bleveQuery)
|
||||||
|
|
||||||
// Set result options
|
// Set result options
|
||||||
if query.Limit > 0 && query.Limit <= im.options.MaxResults {
|
if searchQuery.Limit > 0 && searchQuery.Limit <= im.options.MaxResults {
|
||||||
searchRequest.Size = query.Limit
|
searchRequest.Size = searchQuery.Limit
|
||||||
} else {
|
} else {
|
||||||
searchRequest.Size = im.options.MaxResults
|
searchRequest.Size = im.options.MaxResults
|
||||||
}
|
}
|
||||||
|
|
||||||
if query.Offset > 0 {
|
if searchQuery.Offset > 0 {
|
||||||
searchRequest.From = query.Offset
|
searchRequest.From = searchQuery.Offset
|
||||||
}
|
}
|
||||||
|
|
||||||
// Enable highlighting if requested
|
// Enable highlighting if requested
|
||||||
if query.HighlightTerms && im.options.EnableHighlighting {
|
if searchQuery.HighlightTerms && im.options.EnableHighlighting {
|
||||||
searchRequest.Highlight = bleve.NewHighlight()
|
searchRequest.Highlight = bleve.NewHighlight()
|
||||||
searchRequest.Highlight.AddField("content")
|
searchRequest.Highlight.AddField("content")
|
||||||
searchRequest.Highlight.AddField("summary")
|
searchRequest.Highlight.AddField("summary")
|
||||||
@@ -500,9 +501,9 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Add facets if requested
|
// Add facets if requested
|
||||||
if len(query.Facets) > 0 && im.options.EnableFaceting {
|
if len(searchQuery.Facets) > 0 && im.options.EnableFaceting {
|
||||||
searchRequest.Facets = make(bleve.FacetsRequest)
|
searchRequest.Facets = make(bleve.FacetsRequest)
|
||||||
for _, facet := range query.Facets {
|
for _, facet := range searchQuery.Facets {
|
||||||
switch facet {
|
switch facet {
|
||||||
case "technologies":
|
case "technologies":
|
||||||
searchRequest.Facets["technologies"] = bleve.NewFacetRequest("technologies_facet", 10)
|
searchRequest.Facets["technologies"] = bleve.NewFacetRequest("technologies_facet", 10)
|
||||||
@@ -535,7 +536,7 @@ func (im *IndexManagerImpl) convertSearchResults(
|
|||||||
searchHit := &SearchResult{
|
searchHit := &SearchResult{
|
||||||
MatchScore: hit.Score,
|
MatchScore: hit.Score,
|
||||||
MatchedFields: make([]string, 0),
|
MatchedFields: make([]string, 0),
|
||||||
Highlights: make(map[string][]string),
|
Highlights: make(map[string][]string),
|
||||||
Rank: i + 1,
|
Rank: i + 1,
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -558,8 +559,8 @@ func (im *IndexManagerImpl) convertSearchResults(
|
|||||||
|
|
||||||
// Parse UCXL address
|
// Parse UCXL address
|
||||||
if ucxlStr, ok := hit.Fields["ucxl_address"].(string); ok {
|
if ucxlStr, ok := hit.Fields["ucxl_address"].(string); ok {
|
||||||
if addr, err := ucxl.ParseAddress(ucxlStr); err == nil {
|
if addr, err := ucxl.Parse(ucxlStr); err == nil {
|
||||||
contextNode.UCXLAddress = addr
|
contextNode.UCXLAddress = *addr
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -572,8 +573,10 @@ func (im *IndexManagerImpl) convertSearchResults(
|
|||||||
results.Facets = make(map[string]map[string]int)
|
results.Facets = make(map[string]map[string]int)
|
||||||
for facetName, facetResult := range searchResult.Facets {
|
for facetName, facetResult := range searchResult.Facets {
|
||||||
facetCounts := make(map[string]int)
|
facetCounts := make(map[string]int)
|
||||||
for _, term := range facetResult.Terms {
|
if facetResult.Terms != nil {
|
||||||
facetCounts[term.Term] = term.Count
|
for _, term := range facetResult.Terms.Terms() {
|
||||||
|
facetCounts[term.Term] = term.Count
|
||||||
|
}
|
||||||
}
|
}
|
||||||
results.Facets[facetName] = facetCounts
|
results.Facets[facetName] = facetCounts
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -4,9 +4,8 @@ import (
|
|||||||
"context"
|
"context"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
"chorus/pkg/crypto"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// ContextStore provides the main interface for context storage and retrieval
|
// ContextStore provides the main interface for context storage and retrieval
|
||||||
@@ -270,35 +269,35 @@ type EventHandler func(event *StorageEvent) error
|
|||||||
|
|
||||||
// StorageEvent represents a storage operation event
|
// StorageEvent represents a storage operation event
|
||||||
type StorageEvent struct {
|
type StorageEvent struct {
|
||||||
Type EventType `json:"type"` // Event type
|
Type EventType `json:"type"` // Event type
|
||||||
Key string `json:"key"` // Storage key
|
Key string `json:"key"` // Storage key
|
||||||
Data interface{} `json:"data"` // Event data
|
Data interface{} `json:"data"` // Event data
|
||||||
Timestamp time.Time `json:"timestamp"` // When event occurred
|
Timestamp time.Time `json:"timestamp"` // When event occurred
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// Transaction represents a storage transaction
|
// Transaction represents a storage transaction
|
||||||
type Transaction struct {
|
type Transaction struct {
|
||||||
ID string `json:"id"` // Transaction ID
|
ID string `json:"id"` // Transaction ID
|
||||||
StartTime time.Time `json:"start_time"` // When transaction started
|
StartTime time.Time `json:"start_time"` // When transaction started
|
||||||
Operations []*TransactionOperation `json:"operations"` // Transaction operations
|
Operations []*TransactionOperation `json:"operations"` // Transaction operations
|
||||||
Status TransactionStatus `json:"status"` // Transaction status
|
Status TransactionStatus `json:"status"` // Transaction status
|
||||||
}
|
}
|
||||||
|
|
||||||
// TransactionOperation represents a single operation in a transaction
|
// TransactionOperation represents a single operation in a transaction
|
||||||
type TransactionOperation struct {
|
type TransactionOperation struct {
|
||||||
Type string `json:"type"` // Operation type
|
Type string `json:"type"` // Operation type
|
||||||
Key string `json:"key"` // Storage key
|
Key string `json:"key"` // Storage key
|
||||||
Data interface{} `json:"data"` // Operation data
|
Data interface{} `json:"data"` // Operation data
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Operation metadata
|
Metadata map[string]interface{} `json:"metadata"` // Operation metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// TransactionStatus represents transaction status
|
// TransactionStatus represents transaction status
|
||||||
type TransactionStatus string
|
type TransactionStatus string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
TransactionActive TransactionStatus = "active"
|
TransactionActive TransactionStatus = "active"
|
||||||
TransactionCommitted TransactionStatus = "committed"
|
TransactionCommitted TransactionStatus = "committed"
|
||||||
TransactionRolledBack TransactionStatus = "rolled_back"
|
TransactionRolledBack TransactionStatus = "rolled_back"
|
||||||
TransactionFailed TransactionStatus = "failed"
|
TransactionFailed TransactionStatus = "failed"
|
||||||
)
|
)
|
||||||
@@ -33,12 +33,12 @@ type LocalStorageImpl struct {
|
|||||||
|
|
||||||
// LocalStorageOptions configures local storage behavior
|
// LocalStorageOptions configures local storage behavior
|
||||||
type LocalStorageOptions struct {
|
type LocalStorageOptions struct {
|
||||||
Compression bool `json:"compression"` // Enable compression
|
Compression bool `json:"compression"` // Enable compression
|
||||||
CacheSize int `json:"cache_size"` // Cache size in MB
|
CacheSize int `json:"cache_size"` // Cache size in MB
|
||||||
WriteBuffer int `json:"write_buffer"` // Write buffer size in MB
|
WriteBuffer int `json:"write_buffer"` // Write buffer size in MB
|
||||||
MaxOpenFiles int `json:"max_open_files"` // Maximum open files
|
MaxOpenFiles int `json:"max_open_files"` // Maximum open files
|
||||||
BlockSize int `json:"block_size"` // Block size in KB
|
BlockSize int `json:"block_size"` // Block size in KB
|
||||||
SyncWrites bool `json:"sync_writes"` // Synchronous writes
|
SyncWrites bool `json:"sync_writes"` // Synchronous writes
|
||||||
CompactionInterval time.Duration `json:"compaction_interval"` // Auto-compaction interval
|
CompactionInterval time.Duration `json:"compaction_interval"` // Auto-compaction interval
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -46,11 +46,11 @@ type LocalStorageOptions struct {
|
|||||||
func DefaultLocalStorageOptions() *LocalStorageOptions {
|
func DefaultLocalStorageOptions() *LocalStorageOptions {
|
||||||
return &LocalStorageOptions{
|
return &LocalStorageOptions{
|
||||||
Compression: true,
|
Compression: true,
|
||||||
CacheSize: 64, // 64MB cache
|
CacheSize: 64, // 64MB cache
|
||||||
WriteBuffer: 16, // 16MB write buffer
|
WriteBuffer: 16, // 16MB write buffer
|
||||||
MaxOpenFiles: 1000,
|
MaxOpenFiles: 1000,
|
||||||
BlockSize: 4, // 4KB blocks
|
BlockSize: 4, // 4KB blocks
|
||||||
SyncWrites: false,
|
SyncWrites: false,
|
||||||
CompactionInterval: 24 * time.Hour,
|
CompactionInterval: 24 * time.Hour,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -135,6 +135,7 @@ func (ls *LocalStorageImpl) Store(
|
|||||||
UpdatedAt: time.Now(),
|
UpdatedAt: time.Now(),
|
||||||
Metadata: make(map[string]interface{}),
|
Metadata: make(map[string]interface{}),
|
||||||
}
|
}
|
||||||
|
entry.Checksum = ls.computeChecksum(dataBytes)
|
||||||
|
|
||||||
// Apply options
|
// Apply options
|
||||||
if options != nil {
|
if options != nil {
|
||||||
@@ -179,6 +180,7 @@ func (ls *LocalStorageImpl) Store(
|
|||||||
if entry.Compressed {
|
if entry.Compressed {
|
||||||
ls.metrics.CompressedSize += entry.CompressedSize
|
ls.metrics.CompressedSize += entry.CompressedSize
|
||||||
}
|
}
|
||||||
|
ls.updateFileMetricsLocked()
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
@@ -231,6 +233,14 @@ func (ls *LocalStorageImpl) Retrieve(ctx context.Context, key string) (interface
|
|||||||
dataBytes = decompressedData
|
dataBytes = decompressedData
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Verify integrity against stored checksum (SEC-SLURP-1.1a requirement)
|
||||||
|
if entry.Checksum != "" {
|
||||||
|
computed := ls.computeChecksum(dataBytes)
|
||||||
|
if computed != entry.Checksum {
|
||||||
|
return nil, fmt.Errorf("data integrity check failed for key %s", key)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// Deserialize data
|
// Deserialize data
|
||||||
var result interface{}
|
var result interface{}
|
||||||
if err := json.Unmarshal(dataBytes, &result); err != nil {
|
if err := json.Unmarshal(dataBytes, &result); err != nil {
|
||||||
@@ -260,6 +270,7 @@ func (ls *LocalStorageImpl) Delete(ctx context.Context, key string) error {
|
|||||||
if entryBytes != nil {
|
if entryBytes != nil {
|
||||||
ls.metrics.TotalSize -= int64(len(entryBytes))
|
ls.metrics.TotalSize -= int64(len(entryBytes))
|
||||||
}
|
}
|
||||||
|
ls.updateFileMetricsLocked()
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
@@ -397,6 +408,7 @@ type StorageEntry struct {
|
|||||||
Compressed bool `json:"compressed"`
|
Compressed bool `json:"compressed"`
|
||||||
OriginalSize int64 `json:"original_size"`
|
OriginalSize int64 `json:"original_size"`
|
||||||
CompressedSize int64 `json:"compressed_size"`
|
CompressedSize int64 `json:"compressed_size"`
|
||||||
|
Checksum string `json:"checksum"`
|
||||||
AccessLevel string `json:"access_level"`
|
AccessLevel string `json:"access_level"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
}
|
}
|
||||||
@@ -434,6 +446,42 @@ func (ls *LocalStorageImpl) compress(data []byte) ([]byte, error) {
|
|||||||
return compressed, nil
|
return compressed, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (ls *LocalStorageImpl) computeChecksum(data []byte) string {
|
||||||
|
// Compute SHA-256 checksum to satisfy SEC-SLURP-1.1a integrity tracking
|
||||||
|
digest := sha256.Sum256(data)
|
||||||
|
return fmt.Sprintf("%x", digest)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (ls *LocalStorageImpl) updateFileMetricsLocked() {
|
||||||
|
// Refresh filesystem metrics using io/fs traversal (SEC-SLURP-1.1a durability telemetry)
|
||||||
|
var fileCount int64
|
||||||
|
var aggregateSize int64
|
||||||
|
|
||||||
|
walkErr := fs.WalkDir(os.DirFS(ls.basePath), ".", func(path string, d fs.DirEntry, err error) error {
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if d.IsDir() {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
fileCount++
|
||||||
|
if info, infoErr := d.Info(); infoErr == nil {
|
||||||
|
aggregateSize += info.Size()
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
|
||||||
|
if walkErr != nil {
|
||||||
|
fmt.Printf("filesystem metrics refresh failed: %v\n", walkErr)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
ls.metrics.TotalFiles = fileCount
|
||||||
|
if aggregateSize > 0 {
|
||||||
|
ls.metrics.TotalSize = aggregateSize
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
func (ls *LocalStorageImpl) decompress(data []byte) ([]byte, error) {
|
func (ls *LocalStorageImpl) decompress(data []byte) ([]byte, error) {
|
||||||
// Create gzip reader
|
// Create gzip reader
|
||||||
reader, err := gzip.NewReader(bytes.NewReader(data))
|
reader, err := gzip.NewReader(bytes.NewReader(data))
|
||||||
@@ -498,11 +546,11 @@ func (ls *LocalStorageImpl) GetCompressionStats() (*CompressionStats, error) {
|
|||||||
defer ls.mu.RUnlock()
|
defer ls.mu.RUnlock()
|
||||||
|
|
||||||
stats := &CompressionStats{
|
stats := &CompressionStats{
|
||||||
TotalEntries: 0,
|
TotalEntries: 0,
|
||||||
CompressedEntries: 0,
|
CompressedEntries: 0,
|
||||||
TotalSize: ls.metrics.TotalSize,
|
TotalSize: ls.metrics.TotalSize,
|
||||||
CompressedSize: ls.metrics.CompressedSize,
|
CompressedSize: ls.metrics.CompressedSize,
|
||||||
CompressionRatio: 0.0,
|
CompressionRatio: 0.0,
|
||||||
}
|
}
|
||||||
|
|
||||||
// Iterate through all entries to get accurate stats
|
// Iterate through all entries to get accurate stats
|
||||||
@@ -599,11 +647,11 @@ func (ls *LocalStorageImpl) OptimizeStorage(ctx context.Context, compressThresho
|
|||||||
|
|
||||||
// CompressionStats holds compression statistics
|
// CompressionStats holds compression statistics
|
||||||
type CompressionStats struct {
|
type CompressionStats struct {
|
||||||
TotalEntries int64 `json:"total_entries"`
|
TotalEntries int64 `json:"total_entries"`
|
||||||
CompressedEntries int64 `json:"compressed_entries"`
|
CompressedEntries int64 `json:"compressed_entries"`
|
||||||
TotalSize int64 `json:"total_size"`
|
TotalSize int64 `json:"total_size"`
|
||||||
CompressedSize int64 `json:"compressed_size"`
|
CompressedSize int64 `json:"compressed_size"`
|
||||||
CompressionRatio float64 `json:"compression_ratio"`
|
CompressionRatio float64 `json:"compression_ratio"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// Close closes the local storage
|
// Close closes the local storage
|
||||||
|
|||||||
@@ -14,77 +14,77 @@ import (
|
|||||||
|
|
||||||
// MonitoringSystem provides comprehensive monitoring for the storage system
|
// MonitoringSystem provides comprehensive monitoring for the storage system
|
||||||
type MonitoringSystem struct {
|
type MonitoringSystem struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
nodeID string
|
nodeID string
|
||||||
metrics *StorageMetrics
|
metrics *StorageMetrics
|
||||||
alerts *AlertManager
|
alerts *AlertManager
|
||||||
healthChecker *HealthChecker
|
healthChecker *HealthChecker
|
||||||
performanceProfiler *PerformanceProfiler
|
performanceProfiler *PerformanceProfiler
|
||||||
logger *StructuredLogger
|
logger *StructuredLogger
|
||||||
notifications chan *MonitoringEvent
|
notifications chan *MonitoringEvent
|
||||||
stopCh chan struct{}
|
stopCh chan struct{}
|
||||||
}
|
}
|
||||||
|
|
||||||
// StorageMetrics contains all Prometheus metrics for storage operations
|
// StorageMetrics contains all Prometheus metrics for storage operations
|
||||||
type StorageMetrics struct {
|
type StorageMetrics struct {
|
||||||
// Operation counters
|
// Operation counters
|
||||||
StoreOperations prometheus.Counter
|
StoreOperations prometheus.Counter
|
||||||
RetrieveOperations prometheus.Counter
|
RetrieveOperations prometheus.Counter
|
||||||
DeleteOperations prometheus.Counter
|
DeleteOperations prometheus.Counter
|
||||||
UpdateOperations prometheus.Counter
|
UpdateOperations prometheus.Counter
|
||||||
SearchOperations prometheus.Counter
|
SearchOperations prometheus.Counter
|
||||||
BatchOperations prometheus.Counter
|
BatchOperations prometheus.Counter
|
||||||
|
|
||||||
// Error counters
|
// Error counters
|
||||||
StoreErrors prometheus.Counter
|
StoreErrors prometheus.Counter
|
||||||
RetrieveErrors prometheus.Counter
|
RetrieveErrors prometheus.Counter
|
||||||
EncryptionErrors prometheus.Counter
|
EncryptionErrors prometheus.Counter
|
||||||
DecryptionErrors prometheus.Counter
|
DecryptionErrors prometheus.Counter
|
||||||
ReplicationErrors prometheus.Counter
|
ReplicationErrors prometheus.Counter
|
||||||
CacheErrors prometheus.Counter
|
CacheErrors prometheus.Counter
|
||||||
IndexErrors prometheus.Counter
|
IndexErrors prometheus.Counter
|
||||||
|
|
||||||
// Latency histograms
|
// Latency histograms
|
||||||
StoreLatency prometheus.Histogram
|
StoreLatency prometheus.Histogram
|
||||||
RetrieveLatency prometheus.Histogram
|
RetrieveLatency prometheus.Histogram
|
||||||
EncryptionLatency prometheus.Histogram
|
EncryptionLatency prometheus.Histogram
|
||||||
DecryptionLatency prometheus.Histogram
|
DecryptionLatency prometheus.Histogram
|
||||||
ReplicationLatency prometheus.Histogram
|
ReplicationLatency prometheus.Histogram
|
||||||
SearchLatency prometheus.Histogram
|
SearchLatency prometheus.Histogram
|
||||||
|
|
||||||
// Cache metrics
|
// Cache metrics
|
||||||
CacheHits prometheus.Counter
|
CacheHits prometheus.Counter
|
||||||
CacheMisses prometheus.Counter
|
CacheMisses prometheus.Counter
|
||||||
CacheEvictions prometheus.Counter
|
CacheEvictions prometheus.Counter
|
||||||
CacheSize prometheus.Gauge
|
CacheSize prometheus.Gauge
|
||||||
|
|
||||||
// Storage size metrics
|
// Storage size metrics
|
||||||
LocalStorageSize prometheus.Gauge
|
LocalStorageSize prometheus.Gauge
|
||||||
DistributedStorageSize prometheus.Gauge
|
DistributedStorageSize prometheus.Gauge
|
||||||
CompressedStorageSize prometheus.Gauge
|
CompressedStorageSize prometheus.Gauge
|
||||||
IndexStorageSize prometheus.Gauge
|
IndexStorageSize prometheus.Gauge
|
||||||
|
|
||||||
// Replication metrics
|
// Replication metrics
|
||||||
ReplicationFactor prometheus.Gauge
|
ReplicationFactor prometheus.Gauge
|
||||||
HealthyReplicas prometheus.Gauge
|
HealthyReplicas prometheus.Gauge
|
||||||
UnderReplicated prometheus.Gauge
|
UnderReplicated prometheus.Gauge
|
||||||
ReplicationLag prometheus.Histogram
|
ReplicationLag prometheus.Histogram
|
||||||
|
|
||||||
// Encryption metrics
|
// Encryption metrics
|
||||||
EncryptedContexts prometheus.Gauge
|
EncryptedContexts prometheus.Gauge
|
||||||
KeyRotations prometheus.Counter
|
KeyRotations prometheus.Counter
|
||||||
AccessDenials prometheus.Counter
|
AccessDenials prometheus.Counter
|
||||||
ActiveKeys prometheus.Gauge
|
ActiveKeys prometheus.Gauge
|
||||||
|
|
||||||
// Performance metrics
|
// Performance metrics
|
||||||
Throughput prometheus.Gauge
|
Throughput prometheus.Gauge
|
||||||
ConcurrentOperations prometheus.Gauge
|
ConcurrentOperations prometheus.Gauge
|
||||||
QueueDepth prometheus.Gauge
|
QueueDepth prometheus.Gauge
|
||||||
|
|
||||||
// Health metrics
|
// Health metrics
|
||||||
StorageHealth prometheus.Gauge
|
StorageHealth prometheus.Gauge
|
||||||
NodeConnectivity prometheus.Gauge
|
NodeConnectivity prometheus.Gauge
|
||||||
SyncLatency prometheus.Histogram
|
SyncLatency prometheus.Histogram
|
||||||
}
|
}
|
||||||
|
|
||||||
// AlertManager handles storage-related alerts and notifications
|
// AlertManager handles storage-related alerts and notifications
|
||||||
@@ -97,18 +97,96 @@ type AlertManager struct {
|
|||||||
maxHistory int
|
maxHistory int
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (am *AlertManager) severityRank(severity AlertSeverity) int {
|
||||||
|
switch severity {
|
||||||
|
case SeverityCritical:
|
||||||
|
return 4
|
||||||
|
case SeverityError:
|
||||||
|
return 3
|
||||||
|
case SeverityWarning:
|
||||||
|
return 2
|
||||||
|
case SeverityInfo:
|
||||||
|
return 1
|
||||||
|
default:
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetActiveAlerts returns sorted active alerts (SEC-SLURP-1.1 monitoring path)
|
||||||
|
func (am *AlertManager) GetActiveAlerts() []*Alert {
|
||||||
|
am.mu.RLock()
|
||||||
|
defer am.mu.RUnlock()
|
||||||
|
|
||||||
|
if len(am.activealerts) == 0 {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
alerts := make([]*Alert, 0, len(am.activealerts))
|
||||||
|
for _, alert := range am.activealerts {
|
||||||
|
alerts = append(alerts, alert)
|
||||||
|
}
|
||||||
|
|
||||||
|
sort.Slice(alerts, func(i, j int) bool {
|
||||||
|
iRank := am.severityRank(alerts[i].Severity)
|
||||||
|
jRank := am.severityRank(alerts[j].Severity)
|
||||||
|
if iRank == jRank {
|
||||||
|
return alerts[i].StartTime.After(alerts[j].StartTime)
|
||||||
|
}
|
||||||
|
return iRank > jRank
|
||||||
|
})
|
||||||
|
|
||||||
|
return alerts
|
||||||
|
}
|
||||||
|
|
||||||
|
// Snapshot marshals monitoring state for UCXL persistence (SEC-SLURP-1.1a telemetry)
|
||||||
|
func (ms *MonitoringSystem) Snapshot(ctx context.Context) (string, error) {
|
||||||
|
ms.mu.RLock()
|
||||||
|
defer ms.mu.RUnlock()
|
||||||
|
|
||||||
|
if ms.alerts == nil {
|
||||||
|
return "", fmt.Errorf("alert manager not initialised")
|
||||||
|
}
|
||||||
|
|
||||||
|
active := ms.alerts.GetActiveAlerts()
|
||||||
|
alertPayload := make([]map[string]interface{}, 0, len(active))
|
||||||
|
for _, alert := range active {
|
||||||
|
alertPayload = append(alertPayload, map[string]interface{}{
|
||||||
|
"id": alert.ID,
|
||||||
|
"name": alert.Name,
|
||||||
|
"severity": alert.Severity,
|
||||||
|
"message": fmt.Sprintf("%s (threshold %.2f)", alert.Description, alert.Threshold),
|
||||||
|
"labels": alert.Labels,
|
||||||
|
"started_at": alert.StartTime,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
snapshot := map[string]interface{}{
|
||||||
|
"node_id": ms.nodeID,
|
||||||
|
"generated_at": time.Now().UTC(),
|
||||||
|
"alert_count": len(active),
|
||||||
|
"alerts": alertPayload,
|
||||||
|
}
|
||||||
|
|
||||||
|
encoded, err := json.MarshalIndent(snapshot, "", " ")
|
||||||
|
if err != nil {
|
||||||
|
return "", fmt.Errorf("failed to marshal monitoring snapshot: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return string(encoded), nil
|
||||||
|
}
|
||||||
|
|
||||||
// AlertRule defines conditions for triggering alerts
|
// AlertRule defines conditions for triggering alerts
|
||||||
type AlertRule struct {
|
type AlertRule struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
Metric string `json:"metric"`
|
Metric string `json:"metric"`
|
||||||
Condition string `json:"condition"` // >, <, ==, !=, etc.
|
Condition string `json:"condition"` // >, <, ==, !=, etc.
|
||||||
Threshold float64 `json:"threshold"`
|
Threshold float64 `json:"threshold"`
|
||||||
Duration time.Duration `json:"duration"`
|
Duration time.Duration `json:"duration"`
|
||||||
Severity AlertSeverity `json:"severity"`
|
Severity AlertSeverity `json:"severity"`
|
||||||
Labels map[string]string `json:"labels"`
|
Labels map[string]string `json:"labels"`
|
||||||
Enabled bool `json:"enabled"`
|
Enabled bool `json:"enabled"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// Alert represents an active or resolved alert
|
// Alert represents an active or resolved alert
|
||||||
@@ -163,30 +241,30 @@ type HealthChecker struct {
|
|||||||
|
|
||||||
// HealthCheck defines a single health check
|
// HealthCheck defines a single health check
|
||||||
type HealthCheck struct {
|
type HealthCheck struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
Checker func(ctx context.Context) HealthResult `json:"-"`
|
Checker func(ctx context.Context) HealthResult `json:"-"`
|
||||||
Interval time.Duration `json:"interval"`
|
Interval time.Duration `json:"interval"`
|
||||||
Timeout time.Duration `json:"timeout"`
|
Timeout time.Duration `json:"timeout"`
|
||||||
Enabled bool `json:"enabled"`
|
Enabled bool `json:"enabled"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// HealthResult represents the result of a health check
|
// HealthResult represents the result of a health check
|
||||||
type HealthResult struct {
|
type HealthResult struct {
|
||||||
Healthy bool `json:"healthy"`
|
Healthy bool `json:"healthy"`
|
||||||
Message string `json:"message"`
|
Message string `json:"message"`
|
||||||
Latency time.Duration `json:"latency"`
|
Latency time.Duration `json:"latency"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// SystemHealth represents the overall health of the storage system
|
// SystemHealth represents the overall health of the storage system
|
||||||
type SystemHealth struct {
|
type SystemHealth struct {
|
||||||
OverallStatus HealthStatus `json:"overall_status"`
|
OverallStatus HealthStatus `json:"overall_status"`
|
||||||
Components map[string]HealthResult `json:"components"`
|
Components map[string]HealthResult `json:"components"`
|
||||||
LastUpdate time.Time `json:"last_update"`
|
LastUpdate time.Time `json:"last_update"`
|
||||||
Uptime time.Duration `json:"uptime"`
|
Uptime time.Duration `json:"uptime"`
|
||||||
StartTime time.Time `json:"start_time"`
|
StartTime time.Time `json:"start_time"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// HealthStatus represents system health status
|
// HealthStatus represents system health status
|
||||||
@@ -200,82 +278,82 @@ const (
|
|||||||
|
|
||||||
// PerformanceProfiler analyzes storage performance patterns
|
// PerformanceProfiler analyzes storage performance patterns
|
||||||
type PerformanceProfiler struct {
|
type PerformanceProfiler struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
operationProfiles map[string]*OperationProfile
|
operationProfiles map[string]*OperationProfile
|
||||||
resourceUsage *ResourceUsage
|
resourceUsage *ResourceUsage
|
||||||
bottlenecks []*Bottleneck
|
bottlenecks []*Bottleneck
|
||||||
recommendations []*PerformanceRecommendation
|
recommendations []*PerformanceRecommendation
|
||||||
}
|
}
|
||||||
|
|
||||||
// OperationProfile contains performance analysis for a specific operation type
|
// OperationProfile contains performance analysis for a specific operation type
|
||||||
type OperationProfile struct {
|
type OperationProfile struct {
|
||||||
Operation string `json:"operation"`
|
Operation string `json:"operation"`
|
||||||
TotalOperations int64 `json:"total_operations"`
|
TotalOperations int64 `json:"total_operations"`
|
||||||
AverageLatency time.Duration `json:"average_latency"`
|
AverageLatency time.Duration `json:"average_latency"`
|
||||||
P50Latency time.Duration `json:"p50_latency"`
|
P50Latency time.Duration `json:"p50_latency"`
|
||||||
P95Latency time.Duration `json:"p95_latency"`
|
P95Latency time.Duration `json:"p95_latency"`
|
||||||
P99Latency time.Duration `json:"p99_latency"`
|
P99Latency time.Duration `json:"p99_latency"`
|
||||||
Throughput float64 `json:"throughput"`
|
Throughput float64 `json:"throughput"`
|
||||||
ErrorRate float64 `json:"error_rate"`
|
ErrorRate float64 `json:"error_rate"`
|
||||||
LatencyHistory []time.Duration `json:"-"`
|
LatencyHistory []time.Duration `json:"-"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ResourceUsage tracks resource consumption
|
// ResourceUsage tracks resource consumption
|
||||||
type ResourceUsage struct {
|
type ResourceUsage struct {
|
||||||
CPUUsage float64 `json:"cpu_usage"`
|
CPUUsage float64 `json:"cpu_usage"`
|
||||||
MemoryUsage int64 `json:"memory_usage"`
|
MemoryUsage int64 `json:"memory_usage"`
|
||||||
DiskUsage int64 `json:"disk_usage"`
|
DiskUsage int64 `json:"disk_usage"`
|
||||||
NetworkIn int64 `json:"network_in"`
|
NetworkIn int64 `json:"network_in"`
|
||||||
NetworkOut int64 `json:"network_out"`
|
NetworkOut int64 `json:"network_out"`
|
||||||
OpenFiles int `json:"open_files"`
|
OpenFiles int `json:"open_files"`
|
||||||
Goroutines int `json:"goroutines"`
|
Goroutines int `json:"goroutines"`
|
||||||
LastUpdated time.Time `json:"last_updated"`
|
LastUpdated time.Time `json:"last_updated"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// Bottleneck represents a performance bottleneck
|
// Bottleneck represents a performance bottleneck
|
||||||
type Bottleneck struct {
|
type Bottleneck struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Type string `json:"type"` // cpu, memory, disk, network, etc.
|
Type string `json:"type"` // cpu, memory, disk, network, etc.
|
||||||
Component string `json:"component"`
|
Component string `json:"component"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
Severity AlertSeverity `json:"severity"`
|
Severity AlertSeverity `json:"severity"`
|
||||||
Impact float64 `json:"impact"`
|
Impact float64 `json:"impact"`
|
||||||
DetectedAt time.Time `json:"detected_at"`
|
DetectedAt time.Time `json:"detected_at"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// PerformanceRecommendation suggests optimizations
|
// PerformanceRecommendation suggests optimizations
|
||||||
type PerformanceRecommendation struct {
|
type PerformanceRecommendation struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Type string `json:"type"`
|
Type string `json:"type"`
|
||||||
Title string `json:"title"`
|
Title string `json:"title"`
|
||||||
Description string `json:"description"`
|
Description string `json:"description"`
|
||||||
Priority int `json:"priority"`
|
Priority int `json:"priority"`
|
||||||
Impact string `json:"impact"`
|
Impact string `json:"impact"`
|
||||||
Effort string `json:"effort"`
|
Effort string `json:"effort"`
|
||||||
GeneratedAt time.Time `json:"generated_at"`
|
GeneratedAt time.Time `json:"generated_at"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// MonitoringEvent represents a monitoring system event
|
// MonitoringEvent represents a monitoring system event
|
||||||
type MonitoringEvent struct {
|
type MonitoringEvent struct {
|
||||||
Type string `json:"type"`
|
Type string `json:"type"`
|
||||||
Level string `json:"level"`
|
Level string `json:"level"`
|
||||||
Message string `json:"message"`
|
Message string `json:"message"`
|
||||||
Component string `json:"component"`
|
Component string `json:"component"`
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// StructuredLogger provides structured logging for storage operations
|
// StructuredLogger provides structured logging for storage operations
|
||||||
type StructuredLogger struct {
|
type StructuredLogger struct {
|
||||||
mu sync.RWMutex
|
mu sync.RWMutex
|
||||||
level LogLevel
|
level LogLevel
|
||||||
output LogOutput
|
output LogOutput
|
||||||
formatter LogFormatter
|
formatter LogFormatter
|
||||||
buffer []*LogEntry
|
buffer []*LogEntry
|
||||||
maxBuffer int
|
maxBuffer int
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -303,27 +381,27 @@ type LogFormatter interface {
|
|||||||
|
|
||||||
// LogEntry represents a single log entry
|
// LogEntry represents a single log entry
|
||||||
type LogEntry struct {
|
type LogEntry struct {
|
||||||
Level LogLevel `json:"level"`
|
Level LogLevel `json:"level"`
|
||||||
Message string `json:"message"`
|
Message string `json:"message"`
|
||||||
Component string `json:"component"`
|
Component string `json:"component"`
|
||||||
Operation string `json:"operation"`
|
Operation string `json:"operation"`
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Fields map[string]interface{} `json:"fields"`
|
Fields map[string]interface{} `json:"fields"`
|
||||||
Error error `json:"error,omitempty"`
|
Error error `json:"error,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewMonitoringSystem creates a new monitoring system
|
// NewMonitoringSystem creates a new monitoring system
|
||||||
func NewMonitoringSystem(nodeID string) *MonitoringSystem {
|
func NewMonitoringSystem(nodeID string) *MonitoringSystem {
|
||||||
ms := &MonitoringSystem{
|
ms := &MonitoringSystem{
|
||||||
nodeID: nodeID,
|
nodeID: nodeID,
|
||||||
metrics: initializeMetrics(nodeID),
|
metrics: initializeMetrics(nodeID),
|
||||||
alerts: newAlertManager(),
|
alerts: newAlertManager(),
|
||||||
healthChecker: newHealthChecker(),
|
healthChecker: newHealthChecker(),
|
||||||
performanceProfiler: newPerformanceProfiler(),
|
performanceProfiler: newPerformanceProfiler(),
|
||||||
logger: newStructuredLogger(),
|
logger: newStructuredLogger(),
|
||||||
notifications: make(chan *MonitoringEvent, 1000),
|
notifications: make(chan *MonitoringEvent, 1000),
|
||||||
stopCh: make(chan struct{}),
|
stopCh: make(chan struct{}),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Start monitoring goroutines
|
// Start monitoring goroutines
|
||||||
@@ -592,21 +670,21 @@ func (ms *MonitoringSystem) analyzePerformance() {
|
|||||||
|
|
||||||
func newAlertManager() *AlertManager {
|
func newAlertManager() *AlertManager {
|
||||||
return &AlertManager{
|
return &AlertManager{
|
||||||
rules: make([]*AlertRule, 0),
|
rules: make([]*AlertRule, 0),
|
||||||
activealerts: make(map[string]*Alert),
|
activealerts: make(map[string]*Alert),
|
||||||
notifiers: make([]AlertNotifier, 0),
|
notifiers: make([]AlertNotifier, 0),
|
||||||
history: make([]*Alert, 0),
|
history: make([]*Alert, 0),
|
||||||
maxHistory: 1000,
|
maxHistory: 1000,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func newHealthChecker() *HealthChecker {
|
func newHealthChecker() *HealthChecker {
|
||||||
return &HealthChecker{
|
return &HealthChecker{
|
||||||
checks: make(map[string]HealthCheck),
|
checks: make(map[string]HealthCheck),
|
||||||
status: &SystemHealth{
|
status: &SystemHealth{
|
||||||
OverallStatus: HealthHealthy,
|
OverallStatus: HealthHealthy,
|
||||||
Components: make(map[string]HealthResult),
|
Components: make(map[string]HealthResult),
|
||||||
StartTime: time.Now(),
|
StartTime: time.Now(),
|
||||||
},
|
},
|
||||||
checkInterval: 1 * time.Minute,
|
checkInterval: 1 * time.Minute,
|
||||||
timeout: 30 * time.Second,
|
timeout: 30 * time.Second,
|
||||||
@@ -664,8 +742,8 @@ func (ms *MonitoringSystem) GetMonitoringStats() (*MonitoringStats, error) {
|
|||||||
defer ms.mu.RUnlock()
|
defer ms.mu.RUnlock()
|
||||||
|
|
||||||
stats := &MonitoringStats{
|
stats := &MonitoringStats{
|
||||||
NodeID: ms.nodeID,
|
NodeID: ms.nodeID,
|
||||||
Timestamp: time.Now(),
|
Timestamp: time.Now(),
|
||||||
HealthStatus: ms.healthChecker.status.OverallStatus,
|
HealthStatus: ms.healthChecker.status.OverallStatus,
|
||||||
ActiveAlerts: len(ms.alerts.activealerts),
|
ActiveAlerts: len(ms.alerts.activealerts),
|
||||||
Bottlenecks: len(ms.performanceProfiler.bottlenecks),
|
Bottlenecks: len(ms.performanceProfiler.bottlenecks),
|
||||||
|
|||||||
@@ -3,9 +3,8 @@ package storage
|
|||||||
import (
|
import (
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
"chorus/pkg/crypto"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// DatabaseSchema defines the complete schema for encrypted context storage
|
// DatabaseSchema defines the complete schema for encrypted context storage
|
||||||
@@ -14,325 +13,325 @@ import (
|
|||||||
// ContextRecord represents the main context storage record
|
// ContextRecord represents the main context storage record
|
||||||
type ContextRecord struct {
|
type ContextRecord struct {
|
||||||
// Primary identification
|
// Primary identification
|
||||||
ID string `json:"id" db:"id"` // Unique record ID
|
ID string `json:"id" db:"id"` // Unique record ID
|
||||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"` // UCXL address
|
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"` // UCXL address
|
||||||
Path string `json:"path" db:"path"` // File system path
|
Path string `json:"path" db:"path"` // File system path
|
||||||
PathHash string `json:"path_hash" db:"path_hash"` // Hash of path for indexing
|
PathHash string `json:"path_hash" db:"path_hash"` // Hash of path for indexing
|
||||||
|
|
||||||
// Core context data
|
// Core context data
|
||||||
Summary string `json:"summary" db:"summary"`
|
Summary string `json:"summary" db:"summary"`
|
||||||
Purpose string `json:"purpose" db:"purpose"`
|
Purpose string `json:"purpose" db:"purpose"`
|
||||||
Technologies []byte `json:"technologies" db:"technologies"` // JSON array
|
Technologies []byte `json:"technologies" db:"technologies"` // JSON array
|
||||||
Tags []byte `json:"tags" db:"tags"` // JSON array
|
Tags []byte `json:"tags" db:"tags"` // JSON array
|
||||||
Insights []byte `json:"insights" db:"insights"` // JSON array
|
Insights []byte `json:"insights" db:"insights"` // JSON array
|
||||||
|
|
||||||
// Hierarchy control
|
// Hierarchy control
|
||||||
OverridesParent bool `json:"overrides_parent" db:"overrides_parent"`
|
OverridesParent bool `json:"overrides_parent" db:"overrides_parent"`
|
||||||
ContextSpecificity int `json:"context_specificity" db:"context_specificity"`
|
ContextSpecificity int `json:"context_specificity" db:"context_specificity"`
|
||||||
AppliesToChildren bool `json:"applies_to_children" db:"applies_to_children"`
|
AppliesToChildren bool `json:"applies_to_children" db:"applies_to_children"`
|
||||||
|
|
||||||
// Quality metrics
|
// Quality metrics
|
||||||
RAGConfidence float64 `json:"rag_confidence" db:"rag_confidence"`
|
RAGConfidence float64 `json:"rag_confidence" db:"rag_confidence"`
|
||||||
StalenessScore float64 `json:"staleness_score" db:"staleness_score"`
|
StalenessScore float64 `json:"staleness_score" db:"staleness_score"`
|
||||||
ValidationScore float64 `json:"validation_score" db:"validation_score"`
|
ValidationScore float64 `json:"validation_score" db:"validation_score"`
|
||||||
|
|
||||||
// Versioning
|
// Versioning
|
||||||
Version int64 `json:"version" db:"version"`
|
Version int64 `json:"version" db:"version"`
|
||||||
ParentVersion *int64 `json:"parent_version" db:"parent_version"`
|
ParentVersion *int64 `json:"parent_version" db:"parent_version"`
|
||||||
ContextHash string `json:"context_hash" db:"context_hash"`
|
ContextHash string `json:"context_hash" db:"context_hash"`
|
||||||
|
|
||||||
// Temporal metadata
|
// Temporal metadata
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
||||||
GeneratedAt time.Time `json:"generated_at" db:"generated_at"`
|
GeneratedAt time.Time `json:"generated_at" db:"generated_at"`
|
||||||
LastAccessedAt *time.Time `json:"last_accessed_at" db:"last_accessed_at"`
|
LastAccessedAt *time.Time `json:"last_accessed_at" db:"last_accessed_at"`
|
||||||
ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
|
ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
|
||||||
|
|
||||||
// Storage metadata
|
// Storage metadata
|
||||||
StorageType string `json:"storage_type" db:"storage_type"` // local, distributed, hybrid
|
StorageType string `json:"storage_type" db:"storage_type"` // local, distributed, hybrid
|
||||||
CompressionType string `json:"compression_type" db:"compression_type"`
|
CompressionType string `json:"compression_type" db:"compression_type"`
|
||||||
EncryptionLevel int `json:"encryption_level" db:"encryption_level"`
|
EncryptionLevel int `json:"encryption_level" db:"encryption_level"`
|
||||||
ReplicationFactor int `json:"replication_factor" db:"replication_factor"`
|
ReplicationFactor int `json:"replication_factor" db:"replication_factor"`
|
||||||
Checksum string `json:"checksum" db:"checksum"`
|
Checksum string `json:"checksum" db:"checksum"`
|
||||||
DataSize int64 `json:"data_size" db:"data_size"`
|
DataSize int64 `json:"data_size" db:"data_size"`
|
||||||
CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
|
CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// EncryptedContextRecord represents role-based encrypted context storage
|
// EncryptedContextRecord represents role-based encrypted context storage
|
||||||
type EncryptedContextRecord struct {
|
type EncryptedContextRecord struct {
|
||||||
// Primary keys
|
// Primary keys
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
ContextID string `json:"context_id" db:"context_id"` // FK to ContextRecord
|
ContextID string `json:"context_id" db:"context_id"` // FK to ContextRecord
|
||||||
Role string `json:"role" db:"role"`
|
Role string `json:"role" db:"role"`
|
||||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
||||||
|
|
||||||
// Encryption details
|
// Encryption details
|
||||||
AccessLevel slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`
|
AccessLevel slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`
|
||||||
EncryptedData []byte `json:"encrypted_data" db:"encrypted_data"`
|
EncryptedData []byte `json:"encrypted_data" db:"encrypted_data"`
|
||||||
KeyFingerprint string `json:"key_fingerprint" db:"key_fingerprint"`
|
KeyFingerprint string `json:"key_fingerprint" db:"key_fingerprint"`
|
||||||
EncryptionAlgo string `json:"encryption_algo" db:"encryption_algo"`
|
EncryptionAlgo string `json:"encryption_algo" db:"encryption_algo"`
|
||||||
KeyVersion int `json:"key_version" db:"key_version"`
|
KeyVersion int `json:"key_version" db:"key_version"`
|
||||||
|
|
||||||
// Data integrity
|
// Data integrity
|
||||||
DataChecksum string `json:"data_checksum" db:"data_checksum"`
|
DataChecksum string `json:"data_checksum" db:"data_checksum"`
|
||||||
EncryptionHash string `json:"encryption_hash" db:"encryption_hash"`
|
EncryptionHash string `json:"encryption_hash" db:"encryption_hash"`
|
||||||
|
|
||||||
// Temporal data
|
// Temporal data
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
||||||
LastDecryptedAt *time.Time `json:"last_decrypted_at" db:"last_decrypted_at"`
|
LastDecryptedAt *time.Time `json:"last_decrypted_at" db:"last_decrypted_at"`
|
||||||
ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
|
ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
|
||||||
|
|
||||||
// Access tracking
|
// Access tracking
|
||||||
AccessCount int64 `json:"access_count" db:"access_count"`
|
AccessCount int64 `json:"access_count" db:"access_count"`
|
||||||
LastAccessedBy string `json:"last_accessed_by" db:"last_accessed_by"`
|
LastAccessedBy string `json:"last_accessed_by" db:"last_accessed_by"`
|
||||||
AccessHistory []byte `json:"access_history" db:"access_history"` // JSON access log
|
AccessHistory []byte `json:"access_history" db:"access_history"` // JSON access log
|
||||||
}
|
}
|
||||||
|
|
||||||
// ContextHierarchyRecord represents hierarchical relationships between contexts
|
// ContextHierarchyRecord represents hierarchical relationships between contexts
|
||||||
type ContextHierarchyRecord struct {
|
type ContextHierarchyRecord struct {
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
ParentAddress ucxl.Address `json:"parent_address" db:"parent_address"`
|
ParentAddress ucxl.Address `json:"parent_address" db:"parent_address"`
|
||||||
ChildAddress ucxl.Address `json:"child_address" db:"child_address"`
|
ChildAddress ucxl.Address `json:"child_address" db:"child_address"`
|
||||||
ParentPath string `json:"parent_path" db:"parent_path"`
|
ParentPath string `json:"parent_path" db:"parent_path"`
|
||||||
ChildPath string `json:"child_path" db:"child_path"`
|
ChildPath string `json:"child_path" db:"child_path"`
|
||||||
|
|
||||||
// Relationship metadata
|
// Relationship metadata
|
||||||
RelationshipType string `json:"relationship_type" db:"relationship_type"` // parent, sibling, dependency
|
RelationshipType string `json:"relationship_type" db:"relationship_type"` // parent, sibling, dependency
|
||||||
InheritanceWeight float64 `json:"inheritance_weight" db:"inheritance_weight"`
|
InheritanceWeight float64 `json:"inheritance_weight" db:"inheritance_weight"`
|
||||||
OverrideStrength int `json:"override_strength" db:"override_strength"`
|
OverrideStrength int `json:"override_strength" db:"override_strength"`
|
||||||
Distance int `json:"distance" db:"distance"` // Hierarchy depth distance
|
Distance int `json:"distance" db:"distance"` // Hierarchy depth distance
|
||||||
|
|
||||||
// Temporal tracking
|
// Temporal tracking
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
ValidatedAt time.Time `json:"validated_at" db:"validated_at"`
|
ValidatedAt time.Time `json:"validated_at" db:"validated_at"`
|
||||||
LastResolvedAt *time.Time `json:"last_resolved_at" db:"last_resolved_at"`
|
LastResolvedAt *time.Time `json:"last_resolved_at" db:"last_resolved_at"`
|
||||||
|
|
||||||
// Resolution statistics
|
// Resolution statistics
|
||||||
ResolutionCount int64 `json:"resolution_count" db:"resolution_count"`
|
ResolutionCount int64 `json:"resolution_count" db:"resolution_count"`
|
||||||
ResolutionTime float64 `json:"resolution_time" db:"resolution_time"` // Average ms
|
ResolutionTime float64 `json:"resolution_time" db:"resolution_time"` // Average ms
|
||||||
}
|
}
|
||||||
|
|
||||||
// DecisionHopRecord represents temporal decision analysis storage
|
// DecisionHopRecord represents temporal decision analysis storage
|
||||||
type DecisionHopRecord struct {
|
type DecisionHopRecord struct {
|
||||||
// Primary identification
|
// Primary identification
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
DecisionID string `json:"decision_id" db:"decision_id"`
|
DecisionID string `json:"decision_id" db:"decision_id"`
|
||||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
||||||
ContextVersion int64 `json:"context_version" db:"context_version"`
|
ContextVersion int64 `json:"context_version" db:"context_version"`
|
||||||
|
|
||||||
// Decision metadata
|
// Decision metadata
|
||||||
ChangeReason string `json:"change_reason" db:"change_reason"`
|
ChangeReason string `json:"change_reason" db:"change_reason"`
|
||||||
DecisionMaker string `json:"decision_maker" db:"decision_maker"`
|
DecisionMaker string `json:"decision_maker" db:"decision_maker"`
|
||||||
DecisionRationale string `json:"decision_rationale" db:"decision_rationale"`
|
DecisionRationale string `json:"decision_rationale" db:"decision_rationale"`
|
||||||
ImpactScope string `json:"impact_scope" db:"impact_scope"`
|
ImpactScope string `json:"impact_scope" db:"impact_scope"`
|
||||||
ConfidenceLevel float64 `json:"confidence_level" db:"confidence_level"`
|
ConfidenceLevel float64 `json:"confidence_level" db:"confidence_level"`
|
||||||
|
|
||||||
// Context evolution
|
// Context evolution
|
||||||
PreviousHash string `json:"previous_hash" db:"previous_hash"`
|
PreviousHash string `json:"previous_hash" db:"previous_hash"`
|
||||||
CurrentHash string `json:"current_hash" db:"current_hash"`
|
CurrentHash string `json:"current_hash" db:"current_hash"`
|
||||||
ContextDelta []byte `json:"context_delta" db:"context_delta"` // JSON diff
|
ContextDelta []byte `json:"context_delta" db:"context_delta"` // JSON diff
|
||||||
StalenessScore float64 `json:"staleness_score" db:"staleness_score"`
|
StalenessScore float64 `json:"staleness_score" db:"staleness_score"`
|
||||||
|
|
||||||
// Temporal data
|
// Temporal data
|
||||||
Timestamp time.Time `json:"timestamp" db:"timestamp"`
|
Timestamp time.Time `json:"timestamp" db:"timestamp"`
|
||||||
PreviousDecisionTime *time.Time `json:"previous_decision_time" db:"previous_decision_time"`
|
PreviousDecisionTime *time.Time `json:"previous_decision_time" db:"previous_decision_time"`
|
||||||
ProcessingTime float64 `json:"processing_time" db:"processing_time"` // ms
|
ProcessingTime float64 `json:"processing_time" db:"processing_time"` // ms
|
||||||
|
|
||||||
// External references
|
// External references
|
||||||
ExternalRefs []byte `json:"external_refs" db:"external_refs"` // JSON array
|
ExternalRefs []byte `json:"external_refs" db:"external_refs"` // JSON array
|
||||||
CommitHash string `json:"commit_hash" db:"commit_hash"`
|
CommitHash string `json:"commit_hash" db:"commit_hash"`
|
||||||
TicketID string `json:"ticket_id" db:"ticket_id"`
|
TicketID string `json:"ticket_id" db:"ticket_id"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// DecisionInfluenceRecord represents decision influence relationships
|
// DecisionInfluenceRecord represents decision influence relationships
|
||||||
type DecisionInfluenceRecord struct {
|
type DecisionInfluenceRecord struct {
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
SourceDecisionID string `json:"source_decision_id" db:"source_decision_id"`
|
SourceDecisionID string `json:"source_decision_id" db:"source_decision_id"`
|
||||||
TargetDecisionID string `json:"target_decision_id" db:"target_decision_id"`
|
TargetDecisionID string `json:"target_decision_id" db:"target_decision_id"`
|
||||||
SourceAddress ucxl.Address `json:"source_address" db:"source_address"`
|
SourceAddress ucxl.Address `json:"source_address" db:"source_address"`
|
||||||
TargetAddress ucxl.Address `json:"target_address" db:"target_address"`
|
TargetAddress ucxl.Address `json:"target_address" db:"target_address"`
|
||||||
|
|
||||||
// Influence metrics
|
// Influence metrics
|
||||||
InfluenceStrength float64 `json:"influence_strength" db:"influence_strength"`
|
InfluenceStrength float64 `json:"influence_strength" db:"influence_strength"`
|
||||||
InfluenceType string `json:"influence_type" db:"influence_type"` // direct, indirect, cascading
|
InfluenceType string `json:"influence_type" db:"influence_type"` // direct, indirect, cascading
|
||||||
PropagationDelay float64 `json:"propagation_delay" db:"propagation_delay"` // hours
|
PropagationDelay float64 `json:"propagation_delay" db:"propagation_delay"` // hours
|
||||||
HopDistance int `json:"hop_distance" db:"hop_distance"`
|
HopDistance int `json:"hop_distance" db:"hop_distance"`
|
||||||
|
|
||||||
// Path analysis
|
// Path analysis
|
||||||
ShortestPath []byte `json:"shortest_path" db:"shortest_path"` // JSON path array
|
ShortestPath []byte `json:"shortest_path" db:"shortest_path"` // JSON path array
|
||||||
AlternatePaths []byte `json:"alternate_paths" db:"alternate_paths"` // JSON paths
|
AlternatePaths []byte `json:"alternate_paths" db:"alternate_paths"` // JSON paths
|
||||||
PathConfidence float64 `json:"path_confidence" db:"path_confidence"`
|
PathConfidence float64 `json:"path_confidence" db:"path_confidence"`
|
||||||
|
|
||||||
// Temporal tracking
|
// Temporal tracking
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
LastAnalyzedAt time.Time `json:"last_analyzed_at" db:"last_analyzed_at"`
|
LastAnalyzedAt time.Time `json:"last_analyzed_at" db:"last_analyzed_at"`
|
||||||
ValidatedAt *time.Time `json:"validated_at" db:"validated_at"`
|
ValidatedAt *time.Time `json:"validated_at" db:"validated_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// AccessControlRecord represents role-based access control metadata
|
// AccessControlRecord represents role-based access control metadata
|
||||||
type AccessControlRecord struct {
|
type AccessControlRecord struct {
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
||||||
Role string `json:"role" db:"role"`
|
Role string `json:"role" db:"role"`
|
||||||
Permissions []byte `json:"permissions" db:"permissions"` // JSON permissions array
|
Permissions []byte `json:"permissions" db:"permissions"` // JSON permissions array
|
||||||
|
|
||||||
// Access levels
|
// Access levels
|
||||||
ReadAccess bool `json:"read_access" db:"read_access"`
|
ReadAccess bool `json:"read_access" db:"read_access"`
|
||||||
WriteAccess bool `json:"write_access" db:"write_access"`
|
WriteAccess bool `json:"write_access" db:"write_access"`
|
||||||
DeleteAccess bool `json:"delete_access" db:"delete_access"`
|
DeleteAccess bool `json:"delete_access" db:"delete_access"`
|
||||||
AdminAccess bool `json:"admin_access" db:"admin_access"`
|
AdminAccess bool `json:"admin_access" db:"admin_access"`
|
||||||
AccessLevel slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`
|
AccessLevel slurpContext.RoleAccessLevel `json:"access_level" db:"access_level"`
|
||||||
|
|
||||||
// Constraints
|
// Constraints
|
||||||
TimeConstraints []byte `json:"time_constraints" db:"time_constraints"` // JSON time rules
|
TimeConstraints []byte `json:"time_constraints" db:"time_constraints"` // JSON time rules
|
||||||
IPConstraints []byte `json:"ip_constraints" db:"ip_constraints"` // JSON IP rules
|
IPConstraints []byte `json:"ip_constraints" db:"ip_constraints"` // JSON IP rules
|
||||||
ContextFilters []byte `json:"context_filters" db:"context_filters"` // JSON filter rules
|
ContextFilters []byte `json:"context_filters" db:"context_filters"` // JSON filter rules
|
||||||
|
|
||||||
// Audit trail
|
// Audit trail
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
CreatedBy string `json:"created_by" db:"created_by"`
|
CreatedBy string `json:"created_by" db:"created_by"`
|
||||||
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
||||||
UpdatedBy string `json:"updated_by" db:"updated_by"`
|
UpdatedBy string `json:"updated_by" db:"updated_by"`
|
||||||
ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
|
ExpiresAt *time.Time `json:"expires_at" db:"expires_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ContextIndexRecord represents search index entries for contexts
|
// ContextIndexRecord represents search index entries for contexts
|
||||||
type ContextIndexRecord struct {
|
type ContextIndexRecord struct {
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
||||||
IndexName string `json:"index_name" db:"index_name"`
|
IndexName string `json:"index_name" db:"index_name"`
|
||||||
|
|
||||||
// Indexed content
|
// Indexed content
|
||||||
Tokens []byte `json:"tokens" db:"tokens"` // JSON token array
|
Tokens []byte `json:"tokens" db:"tokens"` // JSON token array
|
||||||
NGrams []byte `json:"ngrams" db:"ngrams"` // JSON n-gram array
|
NGrams []byte `json:"ngrams" db:"ngrams"` // JSON n-gram array
|
||||||
SemanticVector []byte `json:"semantic_vector" db:"semantic_vector"` // Embedding vector
|
SemanticVector []byte `json:"semantic_vector" db:"semantic_vector"` // Embedding vector
|
||||||
|
|
||||||
// Search metadata
|
// Search metadata
|
||||||
IndexWeight float64 `json:"index_weight" db:"index_weight"`
|
IndexWeight float64 `json:"index_weight" db:"index_weight"`
|
||||||
BoostFactor float64 `json:"boost_factor" db:"boost_factor"`
|
BoostFactor float64 `json:"boost_factor" db:"boost_factor"`
|
||||||
Language string `json:"language" db:"language"`
|
Language string `json:"language" db:"language"`
|
||||||
ContentType string `json:"content_type" db:"content_type"`
|
ContentType string `json:"content_type" db:"content_type"`
|
||||||
|
|
||||||
// Quality metrics
|
// Quality metrics
|
||||||
RelevanceScore float64 `json:"relevance_score" db:"relevance_score"`
|
RelevanceScore float64 `json:"relevance_score" db:"relevance_score"`
|
||||||
FreshnessScore float64 `json:"freshness_score" db:"freshness_score"`
|
FreshnessScore float64 `json:"freshness_score" db:"freshness_score"`
|
||||||
PopularityScore float64 `json:"popularity_score" db:"popularity_score"`
|
PopularityScore float64 `json:"popularity_score" db:"popularity_score"`
|
||||||
|
|
||||||
// Temporal tracking
|
// Temporal tracking
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
UpdatedAt time.Time `json:"updated_at" db:"updated_at"`
|
||||||
LastReindexed time.Time `json:"last_reindexed" db:"last_reindexed"`
|
LastReindexed time.Time `json:"last_reindexed" db:"last_reindexed"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// CacheEntryRecord represents cached context data
|
// CacheEntryRecord represents cached context data
|
||||||
type CacheEntryRecord struct {
|
type CacheEntryRecord struct {
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
CacheKey string `json:"cache_key" db:"cache_key"`
|
CacheKey string `json:"cache_key" db:"cache_key"`
|
||||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
||||||
Role string `json:"role" db:"role"`
|
Role string `json:"role" db:"role"`
|
||||||
|
|
||||||
// Cached data
|
// Cached data
|
||||||
CachedData []byte `json:"cached_data" db:"cached_data"`
|
CachedData []byte `json:"cached_data" db:"cached_data"`
|
||||||
DataHash string `json:"data_hash" db:"data_hash"`
|
DataHash string `json:"data_hash" db:"data_hash"`
|
||||||
Compressed bool `json:"compressed" db:"compressed"`
|
Compressed bool `json:"compressed" db:"compressed"`
|
||||||
OriginalSize int64 `json:"original_size" db:"original_size"`
|
OriginalSize int64 `json:"original_size" db:"original_size"`
|
||||||
CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
|
CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
|
||||||
|
|
||||||
// Cache metadata
|
// Cache metadata
|
||||||
TTL int64 `json:"ttl" db:"ttl"` // seconds
|
TTL int64 `json:"ttl" db:"ttl"` // seconds
|
||||||
Priority int `json:"priority" db:"priority"`
|
Priority int `json:"priority" db:"priority"`
|
||||||
AccessCount int64 `json:"access_count" db:"access_count"`
|
AccessCount int64 `json:"access_count" db:"access_count"`
|
||||||
HitCount int64 `json:"hit_count" db:"hit_count"`
|
HitCount int64 `json:"hit_count" db:"hit_count"`
|
||||||
|
|
||||||
// Temporal data
|
// Temporal data
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
LastAccessedAt time.Time `json:"last_accessed_at" db:"last_accessed_at"`
|
LastAccessedAt time.Time `json:"last_accessed_at" db:"last_accessed_at"`
|
||||||
LastHitAt *time.Time `json:"last_hit_at" db:"last_hit_at"`
|
LastHitAt *time.Time `json:"last_hit_at" db:"last_hit_at"`
|
||||||
ExpiresAt time.Time `json:"expires_at" db:"expires_at"`
|
ExpiresAt time.Time `json:"expires_at" db:"expires_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// BackupRecord represents backup metadata
|
// BackupRecord represents backup metadata
|
||||||
type BackupRecord struct {
|
type BackupRecord struct {
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
BackupID string `json:"backup_id" db:"backup_id"`
|
BackupID string `json:"backup_id" db:"backup_id"`
|
||||||
Name string `json:"name" db:"name"`
|
Name string `json:"name" db:"name"`
|
||||||
Destination string `json:"destination" db:"destination"`
|
Destination string `json:"destination" db:"destination"`
|
||||||
|
|
||||||
// Backup content
|
// Backup content
|
||||||
ContextCount int64 `json:"context_count" db:"context_count"`
|
ContextCount int64 `json:"context_count" db:"context_count"`
|
||||||
DataSize int64 `json:"data_size" db:"data_size"`
|
DataSize int64 `json:"data_size" db:"data_size"`
|
||||||
CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
|
CompressedSize int64 `json:"compressed_size" db:"compressed_size"`
|
||||||
Checksum string `json:"checksum" db:"checksum"`
|
Checksum string `json:"checksum" db:"checksum"`
|
||||||
|
|
||||||
// Backup metadata
|
// Backup metadata
|
||||||
IncludesIndexes bool `json:"includes_indexes" db:"includes_indexes"`
|
IncludesIndexes bool `json:"includes_indexes" db:"includes_indexes"`
|
||||||
IncludesCache bool `json:"includes_cache" db:"includes_cache"`
|
IncludesCache bool `json:"includes_cache" db:"includes_cache"`
|
||||||
Encrypted bool `json:"encrypted" db:"encrypted"`
|
Encrypted bool `json:"encrypted" db:"encrypted"`
|
||||||
Incremental bool `json:"incremental" db:"incremental"`
|
Incremental bool `json:"incremental" db:"incremental"`
|
||||||
ParentBackupID string `json:"parent_backup_id" db:"parent_backup_id"`
|
ParentBackupID string `json:"parent_backup_id" db:"parent_backup_id"`
|
||||||
|
|
||||||
// Status tracking
|
// Status tracking
|
||||||
Status BackupStatus `json:"status" db:"status"`
|
Status BackupStatus `json:"status" db:"status"`
|
||||||
Progress float64 `json:"progress" db:"progress"`
|
Progress float64 `json:"progress" db:"progress"`
|
||||||
ErrorMessage string `json:"error_message" db:"error_message"`
|
ErrorMessage string `json:"error_message" db:"error_message"`
|
||||||
|
|
||||||
// Temporal data
|
// Temporal data
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
StartedAt *time.Time `json:"started_at" db:"started_at"`
|
StartedAt *time.Time `json:"started_at" db:"started_at"`
|
||||||
CompletedAt *time.Time `json:"completed_at" db:"completed_at"`
|
CompletedAt *time.Time `json:"completed_at" db:"completed_at"`
|
||||||
RetentionUntil time.Time `json:"retention_until" db:"retention_until"`
|
RetentionUntil time.Time `json:"retention_until" db:"retention_until"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// MetricsRecord represents storage performance metrics
|
// MetricsRecord represents storage performance metrics
|
||||||
type MetricsRecord struct {
|
type MetricsRecord struct {
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
MetricType string `json:"metric_type" db:"metric_type"` // storage, encryption, cache, etc.
|
MetricType string `json:"metric_type" db:"metric_type"` // storage, encryption, cache, etc.
|
||||||
NodeID string `json:"node_id" db:"node_id"`
|
NodeID string `json:"node_id" db:"node_id"`
|
||||||
|
|
||||||
// Metric data
|
// Metric data
|
||||||
MetricName string `json:"metric_name" db:"metric_name"`
|
MetricName string `json:"metric_name" db:"metric_name"`
|
||||||
MetricValue float64 `json:"metric_value" db:"metric_value"`
|
MetricValue float64 `json:"metric_value" db:"metric_value"`
|
||||||
MetricUnit string `json:"metric_unit" db:"metric_unit"`
|
MetricUnit string `json:"metric_unit" db:"metric_unit"`
|
||||||
Tags []byte `json:"tags" db:"tags"` // JSON tag object
|
Tags []byte `json:"tags" db:"tags"` // JSON tag object
|
||||||
|
|
||||||
// Aggregation data
|
// Aggregation data
|
||||||
AggregationType string `json:"aggregation_type" db:"aggregation_type"` // avg, sum, count, etc.
|
AggregationType string `json:"aggregation_type" db:"aggregation_type"` // avg, sum, count, etc.
|
||||||
TimeWindow int64 `json:"time_window" db:"time_window"` // seconds
|
TimeWindow int64 `json:"time_window" db:"time_window"` // seconds
|
||||||
SampleCount int64 `json:"sample_count" db:"sample_count"`
|
SampleCount int64 `json:"sample_count" db:"sample_count"`
|
||||||
|
|
||||||
// Temporal tracking
|
// Temporal tracking
|
||||||
Timestamp time.Time `json:"timestamp" db:"timestamp"`
|
Timestamp time.Time `json:"timestamp" db:"timestamp"`
|
||||||
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
CreatedAt time.Time `json:"created_at" db:"created_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ContextEvolutionRecord tracks how contexts evolve over time
|
// ContextEvolutionRecord tracks how contexts evolve over time
|
||||||
type ContextEvolutionRecord struct {
|
type ContextEvolutionRecord struct {
|
||||||
ID string `json:"id" db:"id"`
|
ID string `json:"id" db:"id"`
|
||||||
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
UCXLAddress ucxl.Address `json:"ucxl_address" db:"ucxl_address"`
|
||||||
FromVersion int64 `json:"from_version" db:"from_version"`
|
FromVersion int64 `json:"from_version" db:"from_version"`
|
||||||
ToVersion int64 `json:"to_version" db:"to_version"`
|
ToVersion int64 `json:"to_version" db:"to_version"`
|
||||||
|
|
||||||
// Evolution analysis
|
// Evolution analysis
|
||||||
EvolutionType string `json:"evolution_type" db:"evolution_type"` // enhancement, refactor, fix, etc.
|
EvolutionType string `json:"evolution_type" db:"evolution_type"` // enhancement, refactor, fix, etc.
|
||||||
SimilarityScore float64 `json:"similarity_score" db:"similarity_score"`
|
SimilarityScore float64 `json:"similarity_score" db:"similarity_score"`
|
||||||
ChangesMagnitude float64 `json:"changes_magnitude" db:"changes_magnitude"`
|
ChangesMagnitude float64 `json:"changes_magnitude" db:"changes_magnitude"`
|
||||||
SemanticDrift float64 `json:"semantic_drift" db:"semantic_drift"`
|
SemanticDrift float64 `json:"semantic_drift" db:"semantic_drift"`
|
||||||
|
|
||||||
// Change details
|
// Change details
|
||||||
ChangedFields []byte `json:"changed_fields" db:"changed_fields"` // JSON array
|
ChangedFields []byte `json:"changed_fields" db:"changed_fields"` // JSON array
|
||||||
FieldDeltas []byte `json:"field_deltas" db:"field_deltas"` // JSON delta object
|
FieldDeltas []byte `json:"field_deltas" db:"field_deltas"` // JSON delta object
|
||||||
ImpactAnalysis []byte `json:"impact_analysis" db:"impact_analysis"` // JSON analysis
|
ImpactAnalysis []byte `json:"impact_analysis" db:"impact_analysis"` // JSON analysis
|
||||||
|
|
||||||
// Quality assessment
|
// Quality assessment
|
||||||
QualityImprovement float64 `json:"quality_improvement" db:"quality_improvement"`
|
QualityImprovement float64 `json:"quality_improvement" db:"quality_improvement"`
|
||||||
ConfidenceChange float64 `json:"confidence_change" db:"confidence_change"`
|
ConfidenceChange float64 `json:"confidence_change" db:"confidence_change"`
|
||||||
ValidationPassed bool `json:"validation_passed" db:"validation_passed"`
|
ValidationPassed bool `json:"validation_passed" db:"validation_passed"`
|
||||||
|
|
||||||
// Temporal tracking
|
// Temporal tracking
|
||||||
EvolutionTime time.Time `json:"evolution_time" db:"evolution_time"`
|
EvolutionTime time.Time `json:"evolution_time" db:"evolution_time"`
|
||||||
AnalyzedAt time.Time `json:"analyzed_at" db:"analyzed_at"`
|
AnalyzedAt time.Time `json:"analyzed_at" db:"analyzed_at"`
|
||||||
ProcessingTime float64 `json:"processing_time" db:"processing_time"` // ms
|
ProcessingTime float64 `json:"processing_time" db:"processing_time"` // ms
|
||||||
}
|
}
|
||||||
|
|
||||||
// Schema validation and creation functions
|
// Schema validation and creation functions
|
||||||
|
|||||||
@@ -283,32 +283,42 @@ type IndexStatistics struct {
|
|||||||
|
|
||||||
// BackupConfig represents backup configuration
|
// BackupConfig represents backup configuration
|
||||||
type BackupConfig struct {
|
type BackupConfig struct {
|
||||||
Name string `json:"name"` // Backup name
|
Name string `json:"name"` // Backup name
|
||||||
Destination string `json:"destination"` // Backup destination
|
Destination string `json:"destination"` // Backup destination
|
||||||
IncludeIndexes bool `json:"include_indexes"` // Include search indexes
|
IncludeIndexes bool `json:"include_indexes"` // Include search indexes
|
||||||
IncludeCache bool `json:"include_cache"` // Include cache data
|
IncludeCache bool `json:"include_cache"` // Include cache data
|
||||||
Compression bool `json:"compression"` // Enable compression
|
Compression bool `json:"compression"` // Enable compression
|
||||||
Encryption bool `json:"encryption"` // Enable encryption
|
Encryption bool `json:"encryption"` // Enable encryption
|
||||||
EncryptionKey string `json:"encryption_key"` // Encryption key
|
EncryptionKey string `json:"encryption_key"` // Encryption key
|
||||||
Incremental bool `json:"incremental"` // Incremental backup
|
Incremental bool `json:"incremental"` // Incremental backup
|
||||||
Retention time.Duration `json:"retention"` // Backup retention period
|
ParentBackupID string `json:"parent_backup_id"` // Parent backup reference
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Retention time.Duration `json:"retention"` // Backup retention period
|
||||||
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// BackupInfo represents information about a backup
|
// BackupInfo represents information about a backup
|
||||||
type BackupInfo struct {
|
type BackupInfo struct {
|
||||||
ID string `json:"id"` // Backup ID
|
ID string `json:"id"` // Backup ID
|
||||||
Name string `json:"name"` // Backup name
|
BackupID string `json:"backup_id"` // Legacy identifier
|
||||||
CreatedAt time.Time `json:"created_at"` // Creation time
|
Name string `json:"name"` // Backup name
|
||||||
Size int64 `json:"size"` // Backup size
|
Destination string `json:"destination"` // Destination path
|
||||||
CompressedSize int64 `json:"compressed_size"` // Compressed size
|
CreatedAt time.Time `json:"created_at"` // Creation time
|
||||||
ContextCount int64 `json:"context_count"` // Number of contexts
|
Size int64 `json:"size"` // Backup size
|
||||||
Encrypted bool `json:"encrypted"` // Whether encrypted
|
CompressedSize int64 `json:"compressed_size"` // Compressed size
|
||||||
Incremental bool `json:"incremental"` // Whether incremental
|
DataSize int64 `json:"data_size"` // Total data size
|
||||||
ParentBackupID string `json:"parent_backup_id"` // Parent backup for incremental
|
ContextCount int64 `json:"context_count"` // Number of contexts
|
||||||
Checksum string `json:"checksum"` // Backup checksum
|
Encrypted bool `json:"encrypted"` // Whether encrypted
|
||||||
Status BackupStatus `json:"status"` // Backup status
|
Incremental bool `json:"incremental"` // Whether incremental
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
ParentBackupID string `json:"parent_backup_id"` // Parent backup for incremental
|
||||||
|
IncludesIndexes bool `json:"includes_indexes"` // Include indexes
|
||||||
|
IncludesCache bool `json:"includes_cache"` // Include cache data
|
||||||
|
Checksum string `json:"checksum"` // Backup checksum
|
||||||
|
Status BackupStatus `json:"status"` // Backup status
|
||||||
|
Progress float64 `json:"progress"` // Completion progress 0-1
|
||||||
|
ErrorMessage string `json:"error_message"` // Last error message
|
||||||
|
RetentionUntil time.Time `json:"retention_until"` // Retention deadline
|
||||||
|
CompletedAt *time.Time `json:"completed_at"` // Completion time
|
||||||
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// BackupStatus represents backup status
|
// BackupStatus represents backup status
|
||||||
|
|||||||
@@ -5,7 +5,9 @@ import (
|
|||||||
"fmt"
|
"fmt"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
"chorus/pkg/slurp/storage"
|
"chorus/pkg/slurp/storage"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// TemporalGraphFactory creates and configures temporal graph components
|
// TemporalGraphFactory creates and configures temporal graph components
|
||||||
@@ -17,9 +19,9 @@ type TemporalGraphFactory struct {
|
|||||||
// TemporalConfig represents configuration for the temporal graph system
|
// TemporalConfig represents configuration for the temporal graph system
|
||||||
type TemporalConfig struct {
|
type TemporalConfig struct {
|
||||||
// Core graph settings
|
// Core graph settings
|
||||||
MaxDepth int `json:"max_depth"`
|
MaxDepth int `json:"max_depth"`
|
||||||
StalenessWeights *StalenessWeights `json:"staleness_weights"`
|
StalenessWeights *StalenessWeights `json:"staleness_weights"`
|
||||||
CacheTimeout time.Duration `json:"cache_timeout"`
|
CacheTimeout time.Duration `json:"cache_timeout"`
|
||||||
|
|
||||||
// Analysis settings
|
// Analysis settings
|
||||||
InfluenceAnalysisConfig *InfluenceAnalysisConfig `json:"influence_analysis_config"`
|
InfluenceAnalysisConfig *InfluenceAnalysisConfig `json:"influence_analysis_config"`
|
||||||
@@ -27,34 +29,34 @@ type TemporalConfig struct {
|
|||||||
QueryConfig *QueryConfig `json:"query_config"`
|
QueryConfig *QueryConfig `json:"query_config"`
|
||||||
|
|
||||||
// Persistence settings
|
// Persistence settings
|
||||||
PersistenceConfig *PersistenceConfig `json:"persistence_config"`
|
PersistenceConfig *PersistenceConfig `json:"persistence_config"`
|
||||||
|
|
||||||
// Performance settings
|
// Performance settings
|
||||||
EnableCaching bool `json:"enable_caching"`
|
EnableCaching bool `json:"enable_caching"`
|
||||||
EnableCompression bool `json:"enable_compression"`
|
EnableCompression bool `json:"enable_compression"`
|
||||||
EnableMetrics bool `json:"enable_metrics"`
|
EnableMetrics bool `json:"enable_metrics"`
|
||||||
|
|
||||||
// Debug settings
|
// Debug settings
|
||||||
EnableDebugLogging bool `json:"enable_debug_logging"`
|
EnableDebugLogging bool `json:"enable_debug_logging"`
|
||||||
EnableValidation bool `json:"enable_validation"`
|
EnableValidation bool `json:"enable_validation"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// InfluenceAnalysisConfig represents configuration for influence analysis
|
// InfluenceAnalysisConfig represents configuration for influence analysis
|
||||||
type InfluenceAnalysisConfig struct {
|
type InfluenceAnalysisConfig struct {
|
||||||
DampingFactor float64 `json:"damping_factor"`
|
DampingFactor float64 `json:"damping_factor"`
|
||||||
MaxIterations int `json:"max_iterations"`
|
MaxIterations int `json:"max_iterations"`
|
||||||
ConvergenceThreshold float64 `json:"convergence_threshold"`
|
ConvergenceThreshold float64 `json:"convergence_threshold"`
|
||||||
CacheValidDuration time.Duration `json:"cache_valid_duration"`
|
CacheValidDuration time.Duration `json:"cache_valid_duration"`
|
||||||
EnableCentralityMetrics bool `json:"enable_centrality_metrics"`
|
EnableCentralityMetrics bool `json:"enable_centrality_metrics"`
|
||||||
EnableCommunityDetection bool `json:"enable_community_detection"`
|
EnableCommunityDetection bool `json:"enable_community_detection"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NavigationConfig represents configuration for decision navigation
|
// NavigationConfig represents configuration for decision navigation
|
||||||
type NavigationConfig struct {
|
type NavigationConfig struct {
|
||||||
MaxNavigationHistory int `json:"max_navigation_history"`
|
MaxNavigationHistory int `json:"max_navigation_history"`
|
||||||
BookmarkRetention time.Duration `json:"bookmark_retention"`
|
BookmarkRetention time.Duration `json:"bookmark_retention"`
|
||||||
SessionTimeout time.Duration `json:"session_timeout"`
|
SessionTimeout time.Duration `json:"session_timeout"`
|
||||||
EnablePathCaching bool `json:"enable_path_caching"`
|
EnablePathCaching bool `json:"enable_path_caching"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// QueryConfig represents configuration for decision-hop queries
|
// QueryConfig represents configuration for decision-hop queries
|
||||||
@@ -68,17 +70,17 @@ type QueryConfig struct {
|
|||||||
|
|
||||||
// TemporalGraphSystem represents the complete temporal graph system
|
// TemporalGraphSystem represents the complete temporal graph system
|
||||||
type TemporalGraphSystem struct {
|
type TemporalGraphSystem struct {
|
||||||
Graph TemporalGraph
|
Graph TemporalGraph
|
||||||
Navigator DecisionNavigator
|
Navigator DecisionNavigator
|
||||||
InfluenceAnalyzer InfluenceAnalyzer
|
InfluenceAnalyzer InfluenceAnalyzer
|
||||||
StalenessDetector StalenessDetector
|
StalenessDetector StalenessDetector
|
||||||
ConflictDetector ConflictDetector
|
ConflictDetector ConflictDetector
|
||||||
PatternAnalyzer PatternAnalyzer
|
PatternAnalyzer PatternAnalyzer
|
||||||
VersionManager VersionManager
|
VersionManager VersionManager
|
||||||
HistoryManager HistoryManager
|
HistoryManager HistoryManager
|
||||||
MetricsCollector MetricsCollector
|
MetricsCollector MetricsCollector
|
||||||
QuerySystem *querySystemImpl
|
QuerySystem *querySystemImpl
|
||||||
PersistenceManager *persistenceManagerImpl
|
PersistenceManager *persistenceManagerImpl
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewTemporalGraphFactory creates a new temporal graph factory
|
// NewTemporalGraphFactory creates a new temporal graph factory
|
||||||
@@ -135,17 +137,17 @@ func (tgf *TemporalGraphFactory) CreateTemporalGraphSystem(
|
|||||||
metricsCollector := NewMetricsCollector(graph)
|
metricsCollector := NewMetricsCollector(graph)
|
||||||
|
|
||||||
system := &TemporalGraphSystem{
|
system := &TemporalGraphSystem{
|
||||||
Graph: graph,
|
Graph: graph,
|
||||||
Navigator: navigator,
|
Navigator: navigator,
|
||||||
InfluenceAnalyzer: analyzer,
|
InfluenceAnalyzer: analyzer,
|
||||||
StalenessDetector: detector,
|
StalenessDetector: detector,
|
||||||
ConflictDetector: conflictDetector,
|
ConflictDetector: conflictDetector,
|
||||||
PatternAnalyzer: patternAnalyzer,
|
PatternAnalyzer: patternAnalyzer,
|
||||||
VersionManager: versionManager,
|
VersionManager: versionManager,
|
||||||
HistoryManager: historyManager,
|
HistoryManager: historyManager,
|
||||||
MetricsCollector: metricsCollector,
|
MetricsCollector: metricsCollector,
|
||||||
QuerySystem: querySystem,
|
QuerySystem: querySystem,
|
||||||
PersistenceManager: persistenceManager,
|
PersistenceManager: persistenceManager,
|
||||||
}
|
}
|
||||||
|
|
||||||
return system, nil
|
return system, nil
|
||||||
@@ -190,11 +192,11 @@ func DefaultTemporalConfig() *TemporalConfig {
|
|||||||
CacheTimeout: time.Minute * 15,
|
CacheTimeout: time.Minute * 15,
|
||||||
|
|
||||||
InfluenceAnalysisConfig: &InfluenceAnalysisConfig{
|
InfluenceAnalysisConfig: &InfluenceAnalysisConfig{
|
||||||
DampingFactor: 0.85,
|
DampingFactor: 0.85,
|
||||||
MaxIterations: 100,
|
MaxIterations: 100,
|
||||||
ConvergenceThreshold: 1e-6,
|
ConvergenceThreshold: 1e-6,
|
||||||
CacheValidDuration: time.Minute * 30,
|
CacheValidDuration: time.Minute * 30,
|
||||||
EnableCentralityMetrics: true,
|
EnableCentralityMetrics: true,
|
||||||
EnableCommunityDetection: true,
|
EnableCommunityDetection: true,
|
||||||
},
|
},
|
||||||
|
|
||||||
@@ -214,24 +216,24 @@ func DefaultTemporalConfig() *TemporalConfig {
|
|||||||
},
|
},
|
||||||
|
|
||||||
PersistenceConfig: &PersistenceConfig{
|
PersistenceConfig: &PersistenceConfig{
|
||||||
EnableLocalStorage: true,
|
EnableLocalStorage: true,
|
||||||
EnableDistributedStorage: true,
|
EnableDistributedStorage: true,
|
||||||
EnableEncryption: true,
|
EnableEncryption: true,
|
||||||
EncryptionRoles: []string{"analyst", "architect", "developer"},
|
EncryptionRoles: []string{"analyst", "architect", "developer"},
|
||||||
SyncInterval: time.Minute * 15,
|
SyncInterval: time.Minute * 15,
|
||||||
ConflictResolutionStrategy: "latest_wins",
|
ConflictResolutionStrategy: "latest_wins",
|
||||||
EnableAutoSync: true,
|
EnableAutoSync: true,
|
||||||
MaxSyncRetries: 3,
|
MaxSyncRetries: 3,
|
||||||
BatchSize: 50,
|
BatchSize: 50,
|
||||||
FlushInterval: time.Second * 30,
|
FlushInterval: time.Second * 30,
|
||||||
EnableWriteBuffer: true,
|
EnableWriteBuffer: true,
|
||||||
EnableAutoBackup: true,
|
EnableAutoBackup: true,
|
||||||
BackupInterval: time.Hour * 6,
|
BackupInterval: time.Hour * 6,
|
||||||
RetainBackupCount: 10,
|
RetainBackupCount: 10,
|
||||||
KeyPrefix: "temporal_graph",
|
KeyPrefix: "temporal_graph",
|
||||||
NodeKeyPattern: "temporal_graph/nodes/%s",
|
NodeKeyPattern: "temporal_graph/nodes/%s",
|
||||||
GraphKeyPattern: "temporal_graph/graph/%s",
|
GraphKeyPattern: "temporal_graph/graph/%s",
|
||||||
MetadataKeyPattern: "temporal_graph/metadata/%s",
|
MetadataKeyPattern: "temporal_graph/metadata/%s",
|
||||||
},
|
},
|
||||||
|
|
||||||
EnableCaching: true,
|
EnableCaching: true,
|
||||||
@@ -308,11 +310,11 @@ func (cd *conflictDetectorImpl) ValidateDecisionSequence(ctx context.Context, ad
|
|||||||
func (cd *conflictDetectorImpl) ResolveTemporalConflict(ctx context.Context, conflict *TemporalConflict) (*ConflictResolution, error) {
|
func (cd *conflictDetectorImpl) ResolveTemporalConflict(ctx context.Context, conflict *TemporalConflict) (*ConflictResolution, error) {
|
||||||
// Implementation would resolve specific temporal conflicts
|
// Implementation would resolve specific temporal conflicts
|
||||||
return &ConflictResolution{
|
return &ConflictResolution{
|
||||||
ConflictID: conflict.ID,
|
ConflictID: conflict.ID,
|
||||||
Resolution: "auto_resolved",
|
ResolutionMethod: "auto_resolved",
|
||||||
ResolvedAt: time.Now(),
|
ResolvedAt: time.Now(),
|
||||||
ResolvedBy: "system",
|
ResolvedBy: "system",
|
||||||
Confidence: 0.8,
|
Confidence: 0.8,
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -539,13 +541,13 @@ func (mc *metricsCollectorImpl) GetInfluenceMetrics(ctx context.Context) (*Influ
|
|||||||
func (mc *metricsCollectorImpl) GetQualityMetrics(ctx context.Context) (*QualityMetrics, error) {
|
func (mc *metricsCollectorImpl) GetQualityMetrics(ctx context.Context) (*QualityMetrics, error) {
|
||||||
// Implementation would get temporal data quality metrics
|
// Implementation would get temporal data quality metrics
|
||||||
return &QualityMetrics{
|
return &QualityMetrics{
|
||||||
DataCompleteness: 1.0,
|
DataCompleteness: 1.0,
|
||||||
DataConsistency: 1.0,
|
DataConsistency: 1.0,
|
||||||
DataAccuracy: 1.0,
|
DataAccuracy: 1.0,
|
||||||
AverageConfidence: 0.8,
|
AverageConfidence: 0.8,
|
||||||
ConflictsDetected: 0,
|
ConflictsDetected: 0,
|
||||||
ConflictsResolved: 0,
|
ConflictsResolved: 0,
|
||||||
LastQualityCheck: time.Now(),
|
LastQualityCheck: time.Now(),
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -9,9 +9,9 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
slurpContext "chorus/pkg/slurp/context"
|
slurpContext "chorus/pkg/slurp/context"
|
||||||
"chorus/pkg/slurp/storage"
|
"chorus/pkg/slurp/storage"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// temporalGraphImpl implements the TemporalGraph interface
|
// temporalGraphImpl implements the TemporalGraph interface
|
||||||
@@ -22,23 +22,23 @@ type temporalGraphImpl struct {
|
|||||||
storage storage.ContextStore
|
storage storage.ContextStore
|
||||||
|
|
||||||
// In-memory graph structures for fast access
|
// In-memory graph structures for fast access
|
||||||
nodes map[string]*TemporalNode // nodeID -> TemporalNode
|
nodes map[string]*TemporalNode // nodeID -> TemporalNode
|
||||||
addressToNodes map[string][]*TemporalNode // address -> list of temporal nodes
|
addressToNodes map[string][]*TemporalNode // address -> list of temporal nodes
|
||||||
influences map[string][]string // nodeID -> list of influenced nodeIDs
|
influences map[string][]string // nodeID -> list of influenced nodeIDs
|
||||||
influencedBy map[string][]string // nodeID -> list of influencer nodeIDs
|
influencedBy map[string][]string // nodeID -> list of influencer nodeIDs
|
||||||
|
|
||||||
// Decision tracking
|
// Decision tracking
|
||||||
decisions map[string]*DecisionMetadata // decisionID -> DecisionMetadata
|
decisions map[string]*DecisionMetadata // decisionID -> DecisionMetadata
|
||||||
decisionToNodes map[string][]*TemporalNode // decisionID -> list of affected nodes
|
decisionToNodes map[string][]*TemporalNode // decisionID -> list of affected nodes
|
||||||
|
|
||||||
// Performance optimization
|
// Performance optimization
|
||||||
pathCache map[string][]*DecisionStep // cache for decision paths
|
pathCache map[string][]*DecisionStep // cache for decision paths
|
||||||
metricsCache map[string]interface{} // cache for expensive metrics
|
metricsCache map[string]interface{} // cache for expensive metrics
|
||||||
cacheTimeout time.Duration
|
cacheTimeout time.Duration
|
||||||
lastCacheClean time.Time
|
lastCacheClean time.Time
|
||||||
|
|
||||||
// Configuration
|
// Configuration
|
||||||
maxDepth int // Maximum depth for path finding
|
maxDepth int // Maximum depth for path finding
|
||||||
stalenessWeight *StalenessWeights
|
stalenessWeight *StalenessWeights
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -80,24 +80,24 @@ func (tg *temporalGraphImpl) CreateInitialContext(ctx context.Context, address u
|
|||||||
|
|
||||||
// Create temporal node
|
// Create temporal node
|
||||||
temporalNode := &TemporalNode{
|
temporalNode := &TemporalNode{
|
||||||
ID: nodeID,
|
ID: nodeID,
|
||||||
UCXLAddress: address,
|
UCXLAddress: address,
|
||||||
Version: 1,
|
Version: 1,
|
||||||
Context: contextData,
|
Context: contextData,
|
||||||
Timestamp: time.Now(),
|
Timestamp: time.Now(),
|
||||||
DecisionID: fmt.Sprintf("initial-%s", creator),
|
DecisionID: fmt.Sprintf("initial-%s", creator),
|
||||||
ChangeReason: ReasonInitialCreation,
|
ChangeReason: ReasonInitialCreation,
|
||||||
ParentNode: nil,
|
ParentNode: nil,
|
||||||
ContextHash: tg.calculateContextHash(contextData),
|
ContextHash: tg.calculateContextHash(contextData),
|
||||||
Confidence: contextData.RAGConfidence,
|
Confidence: contextData.RAGConfidence,
|
||||||
Staleness: 0.0,
|
Staleness: 0.0,
|
||||||
Influences: make([]ucxl.Address, 0),
|
Influences: make([]ucxl.Address, 0),
|
||||||
InfluencedBy: make([]ucxl.Address, 0),
|
InfluencedBy: make([]ucxl.Address, 0),
|
||||||
ValidatedBy: []string{creator},
|
ValidatedBy: []string{creator},
|
||||||
LastValidated: time.Now(),
|
LastValidated: time.Now(),
|
||||||
ImpactScope: ImpactLocal,
|
ImpactScope: ImpactLocal,
|
||||||
PropagatedTo: make([]ucxl.Address, 0),
|
PropagatedTo: make([]ucxl.Address, 0),
|
||||||
Metadata: make(map[string]interface{}),
|
Metadata: make(map[string]interface{}),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Store in memory structures
|
// Store in memory structures
|
||||||
@@ -111,15 +111,15 @@ func (tg *temporalGraphImpl) CreateInitialContext(ctx context.Context, address u
|
|||||||
|
|
||||||
// Store decision metadata
|
// Store decision metadata
|
||||||
decisionMeta := &DecisionMetadata{
|
decisionMeta := &DecisionMetadata{
|
||||||
ID: temporalNode.DecisionID,
|
ID: temporalNode.DecisionID,
|
||||||
Maker: creator,
|
Maker: creator,
|
||||||
Rationale: "Initial context creation",
|
Rationale: "Initial context creation",
|
||||||
Scope: ImpactLocal,
|
Scope: ImpactLocal,
|
||||||
ConfidenceLevel: contextData.RAGConfidence,
|
ConfidenceLevel: contextData.RAGConfidence,
|
||||||
ExternalRefs: make([]string, 0),
|
ExternalRefs: make([]string, 0),
|
||||||
CreatedAt: time.Now(),
|
CreatedAt: time.Now(),
|
||||||
ImplementationStatus: "complete",
|
ImplementationStatus: "complete",
|
||||||
Metadata: make(map[string]interface{}),
|
Metadata: make(map[string]interface{}),
|
||||||
}
|
}
|
||||||
tg.decisions[temporalNode.DecisionID] = decisionMeta
|
tg.decisions[temporalNode.DecisionID] = decisionMeta
|
||||||
tg.decisionToNodes[temporalNode.DecisionID] = []*TemporalNode{temporalNode}
|
tg.decisionToNodes[temporalNode.DecisionID] = []*TemporalNode{temporalNode}
|
||||||
@@ -156,24 +156,24 @@ func (tg *temporalGraphImpl) EvolveContext(ctx context.Context, address ucxl.Add
|
|||||||
|
|
||||||
// Create new temporal node
|
// Create new temporal node
|
||||||
temporalNode := &TemporalNode{
|
temporalNode := &TemporalNode{
|
||||||
ID: nodeID,
|
ID: nodeID,
|
||||||
UCXLAddress: address,
|
UCXLAddress: address,
|
||||||
Version: newVersion,
|
Version: newVersion,
|
||||||
Context: newContext,
|
Context: newContext,
|
||||||
Timestamp: time.Now(),
|
Timestamp: time.Now(),
|
||||||
DecisionID: decision.ID,
|
DecisionID: decision.ID,
|
||||||
ChangeReason: reason,
|
ChangeReason: reason,
|
||||||
ParentNode: &latestNode.ID,
|
ParentNode: &latestNode.ID,
|
||||||
ContextHash: tg.calculateContextHash(newContext),
|
ContextHash: tg.calculateContextHash(newContext),
|
||||||
Confidence: newContext.RAGConfidence,
|
Confidence: newContext.RAGConfidence,
|
||||||
Staleness: 0.0, // New version, not stale
|
Staleness: 0.0, // New version, not stale
|
||||||
Influences: make([]ucxl.Address, 0),
|
Influences: make([]ucxl.Address, 0),
|
||||||
InfluencedBy: make([]ucxl.Address, 0),
|
InfluencedBy: make([]ucxl.Address, 0),
|
||||||
ValidatedBy: []string{decision.Maker},
|
ValidatedBy: []string{decision.Maker},
|
||||||
LastValidated: time.Now(),
|
LastValidated: time.Now(),
|
||||||
ImpactScope: decision.Scope,
|
ImpactScope: decision.Scope,
|
||||||
PropagatedTo: make([]ucxl.Address, 0),
|
PropagatedTo: make([]ucxl.Address, 0),
|
||||||
Metadata: make(map[string]interface{}),
|
Metadata: make(map[string]interface{}),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Copy influence relationships from parent
|
// Copy influence relationships from parent
|
||||||
@@ -534,7 +534,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
|
|||||||
return nil, fmt.Errorf("from node not found: %w", err)
|
return nil, fmt.Errorf("from node not found: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
toNode, err := tg.getLatestNodeUnsafe(to)
|
_, err := tg.getLatestNodeUnsafe(to)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("to node not found: %w", err)
|
return nil, fmt.Errorf("to node not found: %w", err)
|
||||||
}
|
}
|
||||||
@@ -620,8 +620,8 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
|
|||||||
MostInfluentialDecisions: make([]*InfluentialDecision, 0),
|
MostInfluentialDecisions: make([]*InfluentialDecision, 0),
|
||||||
DecisionClusters: make([]*DecisionCluster, 0),
|
DecisionClusters: make([]*DecisionCluster, 0),
|
||||||
Patterns: make([]*DecisionPattern, 0),
|
Patterns: make([]*DecisionPattern, 0),
|
||||||
Anomalies: make([]*AnomalousDecision, 0),
|
Anomalies: make([]*AnomalousDecision, 0),
|
||||||
AnalyzedAt: time.Now(),
|
AnalyzedAt: time.Now(),
|
||||||
}
|
}
|
||||||
|
|
||||||
// Calculate decision velocity
|
// Calculate decision velocity
|
||||||
@@ -652,18 +652,18 @@ func (tg *temporalGraphImpl) AnalyzeDecisionPatterns(ctx context.Context) (*Deci
|
|||||||
// Find most influential decisions (simplified)
|
// Find most influential decisions (simplified)
|
||||||
influenceScores := make(map[string]float64)
|
influenceScores := make(map[string]float64)
|
||||||
for nodeID, node := range tg.nodes {
|
for nodeID, node := range tg.nodes {
|
||||||
score := float64(len(tg.influences[nodeID])) * 1.0 // Direct influences
|
score := float64(len(tg.influences[nodeID])) * 1.0 // Direct influences
|
||||||
score += float64(len(tg.influencedBy[nodeID])) * 0.5 // Being influenced
|
score += float64(len(tg.influencedBy[nodeID])) * 0.5 // Being influenced
|
||||||
influenceScores[nodeID] = score
|
influenceScores[nodeID] = score
|
||||||
|
|
||||||
if score > 3.0 { // Threshold for "influential"
|
if score > 3.0 { // Threshold for "influential"
|
||||||
influential := &InfluentialDecision{
|
influential := &InfluentialDecision{
|
||||||
Address: node.UCXLAddress,
|
Address: node.UCXLAddress,
|
||||||
DecisionHop: node.Version,
|
DecisionHop: node.Version,
|
||||||
InfluenceScore: score,
|
InfluenceScore: score,
|
||||||
AffectedContexts: node.Influences,
|
AffectedContexts: node.Influences,
|
||||||
DecisionMetadata: tg.decisions[node.DecisionID],
|
DecisionMetadata: tg.decisions[node.DecisionID],
|
||||||
InfluenceReasons: []string{"high_connectivity", "multiple_influences"},
|
InfluenceReasons: []string{"high_connectivity", "multiple_influences"},
|
||||||
}
|
}
|
||||||
analysis.MostInfluentialDecisions = append(analysis.MostInfluentialDecisions, influential)
|
analysis.MostInfluentialDecisions = append(analysis.MostInfluentialDecisions, influential)
|
||||||
}
|
}
|
||||||
@@ -869,8 +869,8 @@ func (tg *temporalGraphImpl) calculateStaleness(node *TemporalNode, changedNode
|
|||||||
|
|
||||||
return math.Min(
|
return math.Min(
|
||||||
tg.stalenessWeight.TimeWeight*timeWeight+
|
tg.stalenessWeight.TimeWeight*timeWeight+
|
||||||
tg.stalenessWeight.InfluenceWeight*influenceWeight+
|
tg.stalenessWeight.InfluenceWeight*influenceWeight+
|
||||||
tg.stalenessWeight.ImportanceWeight*impactWeight, 1.0)
|
tg.stalenessWeight.ImportanceWeight*impactWeight, 1.0)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (tg *temporalGraphImpl) clearCacheForAddress(address ucxl.Address) {
|
func (tg *temporalGraphImpl) clearCacheForAddress(address ucxl.Address) {
|
||||||
|
|||||||
@@ -210,13 +210,13 @@ func (ia *influenceAnalyzerImpl) FindInfluentialDecisions(ctx context.Context, l
|
|||||||
impact := ia.analyzeDecisionImpactInternal(node)
|
impact := ia.analyzeDecisionImpactInternal(node)
|
||||||
|
|
||||||
decision := &InfluentialDecision{
|
decision := &InfluentialDecision{
|
||||||
Address: node.UCXLAddress,
|
Address: node.UCXLAddress,
|
||||||
DecisionHop: node.Version,
|
DecisionHop: node.Version,
|
||||||
InfluenceScore: nodeScore.score,
|
InfluenceScore: nodeScore.score,
|
||||||
AffectedContexts: node.Influences,
|
AffectedContexts: node.Influences,
|
||||||
DecisionMetadata: ia.graph.decisions[node.DecisionID],
|
DecisionMetadata: ia.graph.decisions[node.DecisionID],
|
||||||
ImpactAnalysis: impact,
|
ImpactAnalysis: impact,
|
||||||
InfluenceReasons: ia.getInfluenceReasons(node, nodeScore.score),
|
InfluenceReasons: ia.getInfluenceReasons(node, nodeScore.score),
|
||||||
}
|
}
|
||||||
|
|
||||||
influential = append(influential, decision)
|
influential = append(influential, decision)
|
||||||
@@ -899,7 +899,6 @@ func (ia *influenceAnalyzerImpl) findShortestPathLength(fromID, toID string) int
|
|||||||
|
|
||||||
func (ia *influenceAnalyzerImpl) getNodeCentrality(nodeID string) float64 {
|
func (ia *influenceAnalyzerImpl) getNodeCentrality(nodeID string) float64 {
|
||||||
// Simple centrality based on degree
|
// Simple centrality based on degree
|
||||||
influences := len(ia.graph.influences[nodeID])
|
|
||||||
influencedBy := len(ia.graph.influencedBy[nodeID])
|
influencedBy := len(ia.graph.influencedBy[nodeID])
|
||||||
totalNodes := len(ia.graph.nodes)
|
totalNodes := len(ia.graph.nodes)
|
||||||
|
|
||||||
|
|||||||
@@ -27,22 +27,22 @@ type decisionNavigatorImpl struct {
|
|||||||
|
|
||||||
// NavigationSession represents a navigation session
|
// NavigationSession represents a navigation session
|
||||||
type NavigationSession struct {
|
type NavigationSession struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
UserID string `json:"user_id"`
|
UserID string `json:"user_id"`
|
||||||
StartedAt time.Time `json:"started_at"`
|
StartedAt time.Time `json:"started_at"`
|
||||||
LastActivity time.Time `json:"last_activity"`
|
LastActivity time.Time `json:"last_activity"`
|
||||||
CurrentPosition ucxl.Address `json:"current_position"`
|
CurrentPosition ucxl.Address `json:"current_position"`
|
||||||
History []*DecisionStep `json:"history"`
|
History []*DecisionStep `json:"history"`
|
||||||
Bookmarks []string `json:"bookmarks"`
|
Bookmarks []string `json:"bookmarks"`
|
||||||
Preferences *NavPreferences `json:"preferences"`
|
Preferences *NavPreferences `json:"preferences"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NavPreferences represents navigation preferences
|
// NavPreferences represents navigation preferences
|
||||||
type NavPreferences struct {
|
type NavPreferences struct {
|
||||||
MaxHops int `json:"max_hops"`
|
MaxHops int `json:"max_hops"`
|
||||||
PreferRecentDecisions bool `json:"prefer_recent_decisions"`
|
PreferRecentDecisions bool `json:"prefer_recent_decisions"`
|
||||||
FilterByConfidence float64 `json:"filter_by_confidence"`
|
FilterByConfidence float64 `json:"filter_by_confidence"`
|
||||||
IncludeStaleContexts bool `json:"include_stale_contexts"`
|
IncludeStaleContexts bool `json:"include_stale_contexts"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewDecisionNavigator creates a new decision navigator
|
// NewDecisionNavigator creates a new decision navigator
|
||||||
@@ -50,7 +50,7 @@ func NewDecisionNavigator(graph *temporalGraphImpl) DecisionNavigator {
|
|||||||
return &decisionNavigatorImpl{
|
return &decisionNavigatorImpl{
|
||||||
graph: graph,
|
graph: graph,
|
||||||
navigationSessions: make(map[string]*NavigationSession),
|
navigationSessions: make(map[string]*NavigationSession),
|
||||||
bookmarks: make(map[string]*DecisionBookmark),
|
bookmarks: make(map[string]*DecisionBookmark),
|
||||||
maxNavigationHistory: 100,
|
maxNavigationHistory: 100,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -169,14 +169,14 @@ func (dn *decisionNavigatorImpl) FindStaleContexts(ctx context.Context, stalenes
|
|||||||
for _, node := range dn.graph.nodes {
|
for _, node := range dn.graph.nodes {
|
||||||
if node.Staleness >= stalenessThreshold {
|
if node.Staleness >= stalenessThreshold {
|
||||||
staleness := &StaleContext{
|
staleness := &StaleContext{
|
||||||
UCXLAddress: node.UCXLAddress,
|
UCXLAddress: node.UCXLAddress,
|
||||||
TemporalNode: node,
|
TemporalNode: node,
|
||||||
StalenessScore: node.Staleness,
|
StalenessScore: node.Staleness,
|
||||||
LastUpdated: node.Timestamp,
|
LastUpdated: node.Timestamp,
|
||||||
Reasons: dn.getStalenessReasons(node),
|
Reasons: dn.getStalenessReasons(node),
|
||||||
SuggestedActions: dn.getSuggestedActions(node),
|
SuggestedActions: dn.getSuggestedActions(node),
|
||||||
RelatedChanges: dn.getRelatedChanges(node),
|
RelatedChanges: dn.getRelatedChanges(node),
|
||||||
Priority: dn.calculateStalePriority(node),
|
Priority: dn.calculateStalePriority(node),
|
||||||
}
|
}
|
||||||
staleContexts = append(staleContexts, staleness)
|
staleContexts = append(staleContexts, staleness)
|
||||||
}
|
}
|
||||||
@@ -252,7 +252,7 @@ func (dn *decisionNavigatorImpl) ResetNavigation(ctx context.Context, address uc
|
|||||||
defer dn.mu.Unlock()
|
defer dn.mu.Unlock()
|
||||||
|
|
||||||
// Clear any navigation sessions for this address
|
// Clear any navigation sessions for this address
|
||||||
for sessionID, session := range dn.navigationSessions {
|
for _, session := range dn.navigationSessions {
|
||||||
if session.CurrentPosition.String() == address.String() {
|
if session.CurrentPosition.String() == address.String() {
|
||||||
// Reset to latest version
|
// Reset to latest version
|
||||||
latestNode, err := dn.graph.getLatestNodeUnsafe(address)
|
latestNode, err := dn.graph.getLatestNodeUnsafe(address)
|
||||||
|
|||||||
@@ -7,8 +7,8 @@ import (
|
|||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"chorus/pkg/ucxl"
|
|
||||||
"chorus/pkg/slurp/storage"
|
"chorus/pkg/slurp/storage"
|
||||||
|
"chorus/pkg/ucxl"
|
||||||
)
|
)
|
||||||
|
|
||||||
// persistenceManagerImpl handles persistence and synchronization of temporal graph data
|
// persistenceManagerImpl handles persistence and synchronization of temporal graph data
|
||||||
@@ -35,65 +35,65 @@ type persistenceManagerImpl struct {
|
|||||||
conflictResolver ConflictResolver
|
conflictResolver ConflictResolver
|
||||||
|
|
||||||
// Performance optimization
|
// Performance optimization
|
||||||
batchSize int
|
batchSize int
|
||||||
writeBuffer []*TemporalNode
|
writeBuffer []*TemporalNode
|
||||||
bufferMutex sync.Mutex
|
bufferMutex sync.Mutex
|
||||||
flushInterval time.Duration
|
flushInterval time.Duration
|
||||||
lastFlush time.Time
|
lastFlush time.Time
|
||||||
}
|
}
|
||||||
|
|
||||||
// PersistenceConfig represents configuration for temporal graph persistence
|
// PersistenceConfig represents configuration for temporal graph persistence
|
||||||
type PersistenceConfig struct {
|
type PersistenceConfig struct {
|
||||||
// Storage settings
|
// Storage settings
|
||||||
EnableLocalStorage bool `json:"enable_local_storage"`
|
EnableLocalStorage bool `json:"enable_local_storage"`
|
||||||
EnableDistributedStorage bool `json:"enable_distributed_storage"`
|
EnableDistributedStorage bool `json:"enable_distributed_storage"`
|
||||||
EnableEncryption bool `json:"enable_encryption"`
|
EnableEncryption bool `json:"enable_encryption"`
|
||||||
EncryptionRoles []string `json:"encryption_roles"`
|
EncryptionRoles []string `json:"encryption_roles"`
|
||||||
|
|
||||||
// Synchronization settings
|
// Synchronization settings
|
||||||
SyncInterval time.Duration `json:"sync_interval"`
|
SyncInterval time.Duration `json:"sync_interval"`
|
||||||
ConflictResolutionStrategy string `json:"conflict_resolution_strategy"`
|
ConflictResolutionStrategy string `json:"conflict_resolution_strategy"`
|
||||||
EnableAutoSync bool `json:"enable_auto_sync"`
|
EnableAutoSync bool `json:"enable_auto_sync"`
|
||||||
MaxSyncRetries int `json:"max_sync_retries"`
|
MaxSyncRetries int `json:"max_sync_retries"`
|
||||||
|
|
||||||
// Performance settings
|
// Performance settings
|
||||||
BatchSize int `json:"batch_size"`
|
BatchSize int `json:"batch_size"`
|
||||||
FlushInterval time.Duration `json:"flush_interval"`
|
FlushInterval time.Duration `json:"flush_interval"`
|
||||||
EnableWriteBuffer bool `json:"enable_write_buffer"`
|
EnableWriteBuffer bool `json:"enable_write_buffer"`
|
||||||
|
|
||||||
// Backup settings
|
// Backup settings
|
||||||
EnableAutoBackup bool `json:"enable_auto_backup"`
|
EnableAutoBackup bool `json:"enable_auto_backup"`
|
||||||
BackupInterval time.Duration `json:"backup_interval"`
|
BackupInterval time.Duration `json:"backup_interval"`
|
||||||
RetainBackupCount int `json:"retain_backup_count"`
|
RetainBackupCount int `json:"retain_backup_count"`
|
||||||
|
|
||||||
// Storage keys and patterns
|
// Storage keys and patterns
|
||||||
KeyPrefix string `json:"key_prefix"`
|
KeyPrefix string `json:"key_prefix"`
|
||||||
NodeKeyPattern string `json:"node_key_pattern"`
|
NodeKeyPattern string `json:"node_key_pattern"`
|
||||||
GraphKeyPattern string `json:"graph_key_pattern"`
|
GraphKeyPattern string `json:"graph_key_pattern"`
|
||||||
MetadataKeyPattern string `json:"metadata_key_pattern"`
|
MetadataKeyPattern string `json:"metadata_key_pattern"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// PendingChange represents a change waiting to be synchronized
|
// PendingChange represents a change waiting to be synchronized
|
||||||
type PendingChange struct {
|
type PendingChange struct {
|
||||||
ID string `json:"id"`
|
ID string `json:"id"`
|
||||||
Type ChangeType `json:"type"`
|
Type ChangeType `json:"type"`
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
Data interface{} `json:"data"`
|
Data interface{} `json:"data"`
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Retries int `json:"retries"`
|
Retries int `json:"retries"`
|
||||||
LastError string `json:"last_error"`
|
LastError string `json:"last_error"`
|
||||||
Metadata map[string]interface{} `json:"metadata"`
|
Metadata map[string]interface{} `json:"metadata"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// ChangeType represents the type of change to be synchronized
|
// ChangeType represents the type of change to be synchronized
|
||||||
type ChangeType string
|
type ChangeType string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ChangeTypeNodeCreated ChangeType = "node_created"
|
ChangeTypeNodeCreated ChangeType = "node_created"
|
||||||
ChangeTypeNodeUpdated ChangeType = "node_updated"
|
ChangeTypeNodeUpdated ChangeType = "node_updated"
|
||||||
ChangeTypeNodeDeleted ChangeType = "node_deleted"
|
ChangeTypeNodeDeleted ChangeType = "node_deleted"
|
||||||
ChangeTypeGraphUpdated ChangeType = "graph_updated"
|
ChangeTypeGraphUpdated ChangeType = "graph_updated"
|
||||||
ChangeTypeInfluenceAdded ChangeType = "influence_added"
|
ChangeTypeInfluenceAdded ChangeType = "influence_added"
|
||||||
ChangeTypeInfluenceRemoved ChangeType = "influence_removed"
|
ChangeTypeInfluenceRemoved ChangeType = "influence_removed"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -105,39 +105,39 @@ type ConflictResolver interface {
|
|||||||
|
|
||||||
// GraphSnapshot represents a snapshot of the temporal graph for synchronization
|
// GraphSnapshot represents a snapshot of the temporal graph for synchronization
|
||||||
type GraphSnapshot struct {
|
type GraphSnapshot struct {
|
||||||
Timestamp time.Time `json:"timestamp"`
|
Timestamp time.Time `json:"timestamp"`
|
||||||
Nodes map[string]*TemporalNode `json:"nodes"`
|
Nodes map[string]*TemporalNode `json:"nodes"`
|
||||||
Influences map[string][]string `json:"influences"`
|
Influences map[string][]string `json:"influences"`
|
||||||
InfluencedBy map[string][]string `json:"influenced_by"`
|
InfluencedBy map[string][]string `json:"influenced_by"`
|
||||||
Decisions map[string]*DecisionMetadata `json:"decisions"`
|
Decisions map[string]*DecisionMetadata `json:"decisions"`
|
||||||
Metadata *GraphMetadata `json:"metadata"`
|
Metadata *GraphMetadata `json:"metadata"`
|
||||||
Checksum string `json:"checksum"`
|
Checksum string `json:"checksum"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// GraphMetadata represents metadata about the temporal graph
|
// GraphMetadata represents metadata about the temporal graph
|
||||||
type GraphMetadata struct {
|
type GraphMetadata struct {
|
||||||
Version int `json:"version"`
|
Version int `json:"version"`
|
||||||
LastModified time.Time `json:"last_modified"`
|
LastModified time.Time `json:"last_modified"`
|
||||||
NodeCount int `json:"node_count"`
|
NodeCount int `json:"node_count"`
|
||||||
EdgeCount int `json:"edge_count"`
|
EdgeCount int `json:"edge_count"`
|
||||||
DecisionCount int `json:"decision_count"`
|
DecisionCount int `json:"decision_count"`
|
||||||
CreatedBy string `json:"created_by"`
|
CreatedBy string `json:"created_by"`
|
||||||
CreatedAt time.Time `json:"created_at"`
|
CreatedAt time.Time `json:"created_at"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// SyncResult represents the result of a synchronization operation
|
// SyncResult represents the result of a synchronization operation
|
||||||
type SyncResult struct {
|
type SyncResult struct {
|
||||||
StartTime time.Time `json:"start_time"`
|
StartTime time.Time `json:"start_time"`
|
||||||
EndTime time.Time `json:"end_time"`
|
EndTime time.Time `json:"end_time"`
|
||||||
Duration time.Duration `json:"duration"`
|
Duration time.Duration `json:"duration"`
|
||||||
NodesProcessed int `json:"nodes_processed"`
|
NodesProcessed int `json:"nodes_processed"`
|
||||||
NodesCreated int `json:"nodes_created"`
|
NodesCreated int `json:"nodes_created"`
|
||||||
NodesUpdated int `json:"nodes_updated"`
|
NodesUpdated int `json:"nodes_updated"`
|
||||||
NodesDeleted int `json:"nodes_deleted"`
|
NodesDeleted int `json:"nodes_deleted"`
|
||||||
ConflictsFound int `json:"conflicts_found"`
|
ConflictsFound int `json:"conflicts_found"`
|
||||||
ConflictsResolved int `json:"conflicts_resolved"`
|
ConflictsResolved int `json:"conflicts_resolved"`
|
||||||
Errors []string `json:"errors"`
|
Errors []string `json:"errors"`
|
||||||
Success bool `json:"success"`
|
Success bool `json:"success"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewPersistenceManager creates a new persistence manager
|
// NewPersistenceManager creates a new persistence manager
|
||||||
@@ -289,17 +289,9 @@ func (pm *persistenceManagerImpl) BackupGraph(ctx context.Context) error {
|
|||||||
return fmt.Errorf("failed to create snapshot: %w", err)
|
return fmt.Errorf("failed to create snapshot: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// Serialize snapshot
|
|
||||||
data, err := json.Marshal(snapshot)
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("failed to serialize snapshot: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create backup configuration
|
// Create backup configuration
|
||||||
backupConfig := &storage.BackupConfig{
|
backupConfig := &storage.BackupConfig{
|
||||||
Type: "temporal_graph",
|
Name: "temporal_graph",
|
||||||
Description: "Temporal graph backup",
|
|
||||||
Tags: []string{"temporal", "graph", "decision"},
|
|
||||||
Metadata: map[string]interface{}{
|
Metadata: map[string]interface{}{
|
||||||
"node_count": snapshot.Metadata.NodeCount,
|
"node_count": snapshot.Metadata.NodeCount,
|
||||||
"edge_count": snapshot.Metadata.EdgeCount,
|
"edge_count": snapshot.Metadata.EdgeCount,
|
||||||
@@ -356,17 +348,15 @@ func (pm *persistenceManagerImpl) flushWriteBuffer() error {
|
|||||||
|
|
||||||
// Create batch store request
|
// Create batch store request
|
||||||
batch := &storage.BatchStoreRequest{
|
batch := &storage.BatchStoreRequest{
|
||||||
Operations: make([]*storage.BatchStoreOperation, len(pm.writeBuffer)),
|
Contexts: make([]*storage.ContextStoreItem, len(pm.writeBuffer)),
|
||||||
|
Roles: pm.config.EncryptionRoles,
|
||||||
|
FailOnError: true,
|
||||||
}
|
}
|
||||||
|
|
||||||
for i, node := range pm.writeBuffer {
|
for i, node := range pm.writeBuffer {
|
||||||
key := pm.generateNodeKey(node)
|
batch.Contexts[i] = &storage.ContextStoreItem{
|
||||||
|
Context: node,
|
||||||
batch.Operations[i] = &storage.BatchStoreOperation{
|
Roles: pm.config.EncryptionRoles,
|
||||||
Type: "store",
|
|
||||||
Key: key,
|
|
||||||
Data: node,
|
|
||||||
Roles: pm.config.EncryptionRoles,
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -734,10 +724,10 @@ func (pm *persistenceManagerImpl) resolveConflict(ctx context.Context, conflict
|
|||||||
}
|
}
|
||||||
|
|
||||||
return &ConflictResolution{
|
return &ConflictResolution{
|
||||||
ConflictID: conflict.NodeID,
|
ConflictID: conflict.NodeID,
|
||||||
Resolution: "merged",
|
Resolution: "merged",
|
||||||
ResolvedData: resolvedNode,
|
ResolvedData: resolvedNode,
|
||||||
ResolvedAt: time.Now(),
|
ResolvedAt: time.Now(),
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -834,28 +824,14 @@ func (pm *persistenceManagerImpl) syncRemoteToLocal(ctx context.Context, remote,
|
|||||||
// Supporting types for conflict resolution
|
// Supporting types for conflict resolution
|
||||||
|
|
||||||
type SyncConflict struct {
|
type SyncConflict struct {
|
||||||
Type ConflictType `json:"type"`
|
Type ConflictType `json:"type"`
|
||||||
NodeID string `json:"node_id"`
|
NodeID string `json:"node_id"`
|
||||||
LocalData interface{} `json:"local_data"`
|
LocalData interface{} `json:"local_data"`
|
||||||
RemoteData interface{} `json:"remote_data"`
|
RemoteData interface{} `json:"remote_data"`
|
||||||
Severity string `json:"severity"`
|
Severity string `json:"severity"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type ConflictType string
|
// Default conflict resolver implementation
|
||||||
|
|
||||||
const (
|
|
||||||
ConflictTypeNodeMismatch ConflictType = "node_mismatch"
|
|
||||||
ConflictTypeInfluenceMismatch ConflictType = "influence_mismatch"
|
|
||||||
ConflictTypeMetadataMismatch ConflictType = "metadata_mismatch"
|
|
||||||
)
|
|
||||||
|
|
||||||
type ConflictResolution struct {
|
|
||||||
ConflictID string `json:"conflict_id"`
|
|
||||||
Resolution string `json:"resolution"`
|
|
||||||
ResolvedData interface{} `json:"resolved_data"`
|
|
||||||
ResolvedAt time.Time `json:"resolved_at"`
|
|
||||||
ResolvedBy string `json:"resolved_by"`
|
|
||||||
}
|
|
||||||
|
|
||||||
// Default conflict resolver implementation
|
// Default conflict resolver implementation
|
||||||
|
|
||||||
|
|||||||
@@ -17,45 +17,46 @@ import (
|
|||||||
// cascading context resolution with bounded depth traversal.
|
// cascading context resolution with bounded depth traversal.
|
||||||
type ContextNode struct {
|
type ContextNode struct {
|
||||||
// Identity and addressing
|
// Identity and addressing
|
||||||
ID string `json:"id"` // Unique identifier
|
ID string `json:"id"` // Unique identifier
|
||||||
UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
|
UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
|
||||||
Path string `json:"path"` // Filesystem path
|
Path string `json:"path"` // Filesystem path
|
||||||
|
|
||||||
// Core context information
|
// Core context information
|
||||||
Summary string `json:"summary"` // Brief description
|
Summary string `json:"summary"` // Brief description
|
||||||
Purpose string `json:"purpose"` // What this component does
|
Purpose string `json:"purpose"` // What this component does
|
||||||
Technologies []string `json:"technologies"` // Technologies used
|
Technologies []string `json:"technologies"` // Technologies used
|
||||||
Tags []string `json:"tags"` // Categorization tags
|
Tags []string `json:"tags"` // Categorization tags
|
||||||
Insights []string `json:"insights"` // Analytical insights
|
Insights []string `json:"insights"` // Analytical insights
|
||||||
|
|
||||||
// Hierarchy relationships
|
// Hierarchy relationships
|
||||||
Parent *string `json:"parent,omitempty"` // Parent context ID
|
Parent *string `json:"parent,omitempty"` // Parent context ID
|
||||||
Children []string `json:"children"` // Child context IDs
|
Children []string `json:"children"` // Child context IDs
|
||||||
Specificity int `json:"specificity"` // Specificity level (higher = more specific)
|
Specificity int `json:"specificity"` // Specificity level (higher = more specific)
|
||||||
|
|
||||||
// File metadata
|
// File metadata
|
||||||
FileType string `json:"file_type"` // File extension or type
|
FileType string `json:"file_type"` // File extension or type
|
||||||
Language *string `json:"language,omitempty"` // Programming language
|
Language *string `json:"language,omitempty"` // Programming language
|
||||||
Size *int64 `json:"size,omitempty"` // File size in bytes
|
Size *int64 `json:"size,omitempty"` // File size in bytes
|
||||||
LastModified *time.Time `json:"last_modified,omitempty"` // Last modification time
|
LastModified *time.Time `json:"last_modified,omitempty"` // Last modification time
|
||||||
ContentHash *string `json:"content_hash,omitempty"` // Content hash for change detection
|
ContentHash *string `json:"content_hash,omitempty"` // Content hash for change detection
|
||||||
|
|
||||||
// Resolution metadata
|
// Resolution metadata
|
||||||
CreatedBy string `json:"created_by"` // Who/what created this context
|
CreatedBy string `json:"created_by"` // Who/what created this context
|
||||||
CreatedAt time.Time `json:"created_at"` // When created
|
CreatedAt time.Time `json:"created_at"` // When created
|
||||||
UpdatedAt time.Time `json:"updated_at"` // When last updated
|
UpdatedAt time.Time `json:"updated_at"` // When last updated
|
||||||
Confidence float64 `json:"confidence"` // Confidence in accuracy (0-1)
|
UpdatedBy string `json:"updated_by"` // Who performed the last update
|
||||||
|
Confidence float64 `json:"confidence"` // Confidence in accuracy (0-1)
|
||||||
|
|
||||||
// Cascading behavior rules
|
// Cascading behavior rules
|
||||||
AppliesTo ContextScope `json:"applies_to"` // Scope of application
|
AppliesTo ContextScope `json:"applies_to"` // Scope of application
|
||||||
Overrides bool `json:"overrides"` // Whether this overrides parent context
|
Overrides bool `json:"overrides"` // Whether this overrides parent context
|
||||||
|
|
||||||
// Security and access control
|
// Security and access control
|
||||||
EncryptedFor []string `json:"encrypted_for"` // Roles that can access
|
EncryptedFor []string `json:"encrypted_for"` // Roles that can access
|
||||||
AccessLevel crypto.AccessLevel `json:"access_level"` // Access level required
|
AccessLevel crypto.AccessLevel `json:"access_level"` // Access level required
|
||||||
|
|
||||||
// Custom metadata
|
// Custom metadata
|
||||||
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// ResolvedContext represents the final resolved context for a UCXL address.
|
// ResolvedContext represents the final resolved context for a UCXL address.
|
||||||
@@ -64,27 +65,27 @@ type ContextNode struct {
|
|||||||
// information from multiple hierarchy levels and applying global contexts.
|
// information from multiple hierarchy levels and applying global contexts.
|
||||||
type ResolvedContext struct {
|
type ResolvedContext struct {
|
||||||
// Resolved context data
|
// Resolved context data
|
||||||
UCXLAddress string `json:"ucxl_address"` // Original UCXL address
|
UCXLAddress string `json:"ucxl_address"` // Original UCXL address
|
||||||
Summary string `json:"summary"` // Resolved summary
|
Summary string `json:"summary"` // Resolved summary
|
||||||
Purpose string `json:"purpose"` // Resolved purpose
|
Purpose string `json:"purpose"` // Resolved purpose
|
||||||
Technologies []string `json:"technologies"` // Merged technologies
|
Technologies []string `json:"technologies"` // Merged technologies
|
||||||
Tags []string `json:"tags"` // Merged tags
|
Tags []string `json:"tags"` // Merged tags
|
||||||
Insights []string `json:"insights"` // Merged insights
|
Insights []string `json:"insights"` // Merged insights
|
||||||
|
|
||||||
// File information
|
// File information
|
||||||
FileType string `json:"file_type"` // File type
|
FileType string `json:"file_type"` // File type
|
||||||
Language *string `json:"language,omitempty"` // Programming language
|
Language *string `json:"language,omitempty"` // Programming language
|
||||||
Size *int64 `json:"size,omitempty"` // File size
|
Size *int64 `json:"size,omitempty"` // File size
|
||||||
LastModified *time.Time `json:"last_modified,omitempty"` // Last modification
|
LastModified *time.Time `json:"last_modified,omitempty"` // Last modification
|
||||||
ContentHash *string `json:"content_hash,omitempty"` // Content hash
|
ContentHash *string `json:"content_hash,omitempty"` // Content hash
|
||||||
|
|
||||||
// Resolution metadata
|
// Resolution metadata
|
||||||
SourcePath string `json:"source_path"` // Primary source context path
|
SourcePath string `json:"source_path"` // Primary source context path
|
||||||
InheritanceChain []string `json:"inheritance_chain"` // Context inheritance chain
|
InheritanceChain []string `json:"inheritance_chain"` // Context inheritance chain
|
||||||
Confidence float64 `json:"confidence"` // Overall confidence (0-1)
|
Confidence float64 `json:"confidence"` // Overall confidence (0-1)
|
||||||
BoundedDepth int `json:"bounded_depth"` // Actual traversal depth used
|
BoundedDepth int `json:"bounded_depth"` // Actual traversal depth used
|
||||||
GlobalApplied bool `json:"global_applied"` // Whether global contexts were applied
|
GlobalApplied bool `json:"global_applied"` // Whether global contexts were applied
|
||||||
ResolvedAt time.Time `json:"resolved_at"` // When resolution occurred
|
ResolvedAt time.Time `json:"resolved_at"` // When resolution occurred
|
||||||
|
|
||||||
// Temporal information
|
// Temporal information
|
||||||
Version int `json:"version"` // Current version number
|
Version int `json:"version"` // Current version number
|
||||||
@@ -92,13 +93,13 @@ type ResolvedContext struct {
|
|||||||
EvolutionHistory []string `json:"evolution_history"` // Brief evolution history
|
EvolutionHistory []string `json:"evolution_history"` // Brief evolution history
|
||||||
|
|
||||||
// Access control
|
// Access control
|
||||||
AccessibleBy []string `json:"accessible_by"` // Roles that can access this
|
AccessibleBy []string `json:"accessible_by"` // Roles that can access this
|
||||||
EncryptionKeys []string `json:"encryption_keys"` // Keys used for encryption
|
EncryptionKeys []string `json:"encryption_keys"` // Keys used for encryption
|
||||||
|
|
||||||
// Performance metadata
|
// Performance metadata
|
||||||
ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
|
ResolutionTime time.Duration `json:"resolution_time"` // Time taken to resolve
|
||||||
CacheHit bool `json:"cache_hit"` // Whether result was cached
|
CacheHit bool `json:"cache_hit"` // Whether result was cached
|
||||||
NodesTraversed int `json:"nodes_traversed"` // Number of hierarchy nodes traversed
|
NodesTraversed int `json:"nodes_traversed"` // Number of hierarchy nodes traversed
|
||||||
}
|
}
|
||||||
|
|
||||||
// ContextScope defines the scope of a context node's application
|
// ContextScope defines the scope of a context node's application
|
||||||
@@ -117,23 +118,23 @@ const (
|
|||||||
// simple chronological progression.
|
// simple chronological progression.
|
||||||
type TemporalNode struct {
|
type TemporalNode struct {
|
||||||
// Node identity
|
// Node identity
|
||||||
ID string `json:"id"` // Unique temporal node ID
|
ID string `json:"id"` // Unique temporal node ID
|
||||||
UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
|
UCXLAddress string `json:"ucxl_address"` // Associated UCXL address
|
||||||
Version int `json:"version"` // Version number (monotonic)
|
Version int `json:"version"` // Version number (monotonic)
|
||||||
|
|
||||||
// Context snapshot
|
// Context snapshot
|
||||||
Context ContextNode `json:"context"` // Context data at this point
|
Context ContextNode `json:"context"` // Context data at this point
|
||||||
|
|
||||||
// Temporal metadata
|
// Temporal metadata
|
||||||
Timestamp time.Time `json:"timestamp"` // When this version was created
|
Timestamp time.Time `json:"timestamp"` // When this version was created
|
||||||
DecisionID string `json:"decision_id"` // Associated decision identifier
|
DecisionID string `json:"decision_id"` // Associated decision identifier
|
||||||
ChangeReason ChangeReason `json:"change_reason"` // Why context changed
|
ChangeReason ChangeReason `json:"change_reason"` // Why context changed
|
||||||
ParentNode *string `json:"parent_node,omitempty"` // Previous version ID
|
ParentNode *string `json:"parent_node,omitempty"` // Previous version ID
|
||||||
|
|
||||||
// Evolution tracking
|
// Evolution tracking
|
||||||
ContextHash string `json:"context_hash"` // Hash of context content
|
ContextHash string `json:"context_hash"` // Hash of context content
|
||||||
Confidence float64 `json:"confidence"` // Confidence in this version (0-1)
|
Confidence float64 `json:"confidence"` // Confidence in this version (0-1)
|
||||||
Staleness float64 `json:"staleness"` // Staleness indicator (0-1)
|
Staleness float64 `json:"staleness"` // Staleness indicator (0-1)
|
||||||
|
|
||||||
// Decision graph relationships
|
// Decision graph relationships
|
||||||
Influences []string `json:"influences"` // UCXL addresses this influences
|
Influences []string `json:"influences"` // UCXL addresses this influences
|
||||||
@@ -144,11 +145,11 @@ type TemporalNode struct {
|
|||||||
LastValidated time.Time `json:"last_validated"` // When last validated
|
LastValidated time.Time `json:"last_validated"` // When last validated
|
||||||
|
|
||||||
// Change impact analysis
|
// Change impact analysis
|
||||||
ImpactScope ImpactScope `json:"impact_scope"` // Scope of change impact
|
ImpactScope ImpactScope `json:"impact_scope"` // Scope of change impact
|
||||||
PropagatedTo []string `json:"propagated_to"` // Addresses that received impact
|
PropagatedTo []string `json:"propagated_to"` // Addresses that received impact
|
||||||
|
|
||||||
// Custom temporal metadata
|
// Custom temporal metadata
|
||||||
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// DecisionMetadata represents metadata about a decision that changed context.
|
// DecisionMetadata represents metadata about a decision that changed context.
|
||||||
@@ -157,56 +158,56 @@ type TemporalNode struct {
|
|||||||
// representing why and how context evolved rather than just when.
|
// representing why and how context evolved rather than just when.
|
||||||
type DecisionMetadata struct {
|
type DecisionMetadata struct {
|
||||||
// Decision identity
|
// Decision identity
|
||||||
ID string `json:"id"` // Unique decision identifier
|
ID string `json:"id"` // Unique decision identifier
|
||||||
Maker string `json:"maker"` // Who/what made the decision
|
Maker string `json:"maker"` // Who/what made the decision
|
||||||
Rationale string `json:"rationale"` // Why the decision was made
|
Rationale string `json:"rationale"` // Why the decision was made
|
||||||
|
|
||||||
// Impact and scope
|
// Impact and scope
|
||||||
Scope ImpactScope `json:"scope"` // Scope of impact
|
Scope ImpactScope `json:"scope"` // Scope of impact
|
||||||
ConfidenceLevel float64 `json:"confidence_level"` // Confidence in decision (0-1)
|
ConfidenceLevel float64 `json:"confidence_level"` // Confidence in decision (0-1)
|
||||||
|
|
||||||
// External references
|
// External references
|
||||||
ExternalRefs []string `json:"external_refs"` // External references (URLs, docs)
|
ExternalRefs []string `json:"external_refs"` // External references (URLs, docs)
|
||||||
GitCommit *string `json:"git_commit,omitempty"` // Associated git commit
|
GitCommit *string `json:"git_commit,omitempty"` // Associated git commit
|
||||||
IssueNumber *int `json:"issue_number,omitempty"` // Associated issue number
|
IssueNumber *int `json:"issue_number,omitempty"` // Associated issue number
|
||||||
PullRequestNumber *int `json:"pull_request,omitempty"` // Associated PR number
|
PullRequestNumber *int `json:"pull_request,omitempty"` // Associated PR number
|
||||||
|
|
||||||
// Timing information
|
// Timing information
|
||||||
CreatedAt time.Time `json:"created_at"` // When decision was made
|
CreatedAt time.Time `json:"created_at"` // When decision was made
|
||||||
EffectiveAt *time.Time `json:"effective_at,omitempty"` // When decision takes effect
|
EffectiveAt *time.Time `json:"effective_at,omitempty"` // When decision takes effect
|
||||||
ExpiresAt *time.Time `json:"expires_at,omitempty"` // When decision expires
|
ExpiresAt *time.Time `json:"expires_at,omitempty"` // When decision expires
|
||||||
|
|
||||||
// Decision quality
|
// Decision quality
|
||||||
ReviewedBy []string `json:"reviewed_by,omitempty"` // Who reviewed this decision
|
ReviewedBy []string `json:"reviewed_by,omitempty"` // Who reviewed this decision
|
||||||
ApprovedBy []string `json:"approved_by,omitempty"` // Who approved this decision
|
ApprovedBy []string `json:"approved_by,omitempty"` // Who approved this decision
|
||||||
|
|
||||||
// Implementation tracking
|
// Implementation tracking
|
||||||
ImplementationStatus string `json:"implementation_status"` // Status: planned, active, complete, cancelled
|
ImplementationStatus string `json:"implementation_status"` // Status: planned, active, complete, cancelled
|
||||||
ImplementationNotes string `json:"implementation_notes"` // Implementation details
|
ImplementationNotes string `json:"implementation_notes"` // Implementation details
|
||||||
|
|
||||||
// Custom metadata
|
// Custom metadata
|
||||||
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// ChangeReason represents why context changed
|
// ChangeReason represents why context changed
|
||||||
type ChangeReason string
|
type ChangeReason string
|
||||||
|
|
||||||
const (
|
const (
|
||||||
ReasonInitialCreation ChangeReason = "initial_creation" // First time context creation
|
ReasonInitialCreation ChangeReason = "initial_creation" // First time context creation
|
||||||
ReasonCodeChange ChangeReason = "code_change" // Code modification
|
ReasonCodeChange ChangeReason = "code_change" // Code modification
|
||||||
ReasonDesignDecision ChangeReason = "design_decision" // Design/architecture decision
|
ReasonDesignDecision ChangeReason = "design_decision" // Design/architecture decision
|
||||||
ReasonRefactoring ChangeReason = "refactoring" // Code refactoring
|
ReasonRefactoring ChangeReason = "refactoring" // Code refactoring
|
||||||
ReasonArchitectureChange ChangeReason = "architecture_change" // Major architecture change
|
ReasonArchitectureChange ChangeReason = "architecture_change" // Major architecture change
|
||||||
ReasonRequirementsChange ChangeReason = "requirements_change" // Requirements modification
|
ReasonRequirementsChange ChangeReason = "requirements_change" // Requirements modification
|
||||||
ReasonLearningEvolution ChangeReason = "learning_evolution" // Improved understanding
|
ReasonLearningEvolution ChangeReason = "learning_evolution" // Improved understanding
|
||||||
ReasonRAGEnhancement ChangeReason = "rag_enhancement" // RAG system enhancement
|
ReasonRAGEnhancement ChangeReason = "rag_enhancement" // RAG system enhancement
|
||||||
ReasonTeamInput ChangeReason = "team_input" // Team member input
|
ReasonTeamInput ChangeReason = "team_input" // Team member input
|
||||||
ReasonBugDiscovery ChangeReason = "bug_discovery" // Bug found that changes understanding
|
ReasonBugDiscovery ChangeReason = "bug_discovery" // Bug found that changes understanding
|
||||||
ReasonPerformanceInsight ChangeReason = "performance_insight" // Performance analysis insight
|
ReasonPerformanceInsight ChangeReason = "performance_insight" // Performance analysis insight
|
||||||
ReasonSecurityReview ChangeReason = "security_review" // Security analysis
|
ReasonSecurityReview ChangeReason = "security_review" // Security analysis
|
||||||
ReasonDependencyChange ChangeReason = "dependency_change" // Dependency update
|
ReasonDependencyChange ChangeReason = "dependency_change" // Dependency update
|
||||||
ReasonEnvironmentChange ChangeReason = "environment_change" // Environment configuration change
|
ReasonEnvironmentChange ChangeReason = "environment_change" // Environment configuration change
|
||||||
ReasonToolingUpdate ChangeReason = "tooling_update" // Development tooling update
|
ReasonToolingUpdate ChangeReason = "tooling_update" // Development tooling update
|
||||||
ReasonDocumentationUpdate ChangeReason = "documentation_update" // Documentation improvement
|
ReasonDocumentationUpdate ChangeReason = "documentation_update" // Documentation improvement
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -222,11 +223,11 @@ const (
|
|||||||
|
|
||||||
// DecisionPath represents a path between two decision points in the temporal graph
|
// DecisionPath represents a path between two decision points in the temporal graph
|
||||||
type DecisionPath struct {
|
type DecisionPath struct {
|
||||||
From string `json:"from"` // Starting UCXL address
|
From string `json:"from"` // Starting UCXL address
|
||||||
To string `json:"to"` // Ending UCXL address
|
To string `json:"to"` // Ending UCXL address
|
||||||
Steps []*DecisionStep `json:"steps"` // Path steps
|
Steps []*DecisionStep `json:"steps"` // Path steps
|
||||||
TotalHops int `json:"total_hops"` // Total decision hops
|
TotalHops int `json:"total_hops"` // Total decision hops
|
||||||
PathType string `json:"path_type"` // Type of path (direct, influence, etc.)
|
PathType string `json:"path_type"` // Type of path (direct, influence, etc.)
|
||||||
}
|
}
|
||||||
|
|
||||||
// DecisionStep represents a single step in a decision path
|
// DecisionStep represents a single step in a decision path
|
||||||
@@ -239,7 +240,7 @@ type DecisionStep struct {
|
|||||||
|
|
||||||
// DecisionTimeline represents the decision evolution timeline for a context
|
// DecisionTimeline represents the decision evolution timeline for a context
|
||||||
type DecisionTimeline struct {
|
type DecisionTimeline struct {
|
||||||
PrimaryAddress string `json:"primary_address"` // Main UCXL address
|
PrimaryAddress string `json:"primary_address"` // Main UCXL address
|
||||||
DecisionSequence []*DecisionTimelineEntry `json:"decision_sequence"` // Ordered by decision hops
|
DecisionSequence []*DecisionTimelineEntry `json:"decision_sequence"` // Ordered by decision hops
|
||||||
RelatedDecisions []*RelatedDecision `json:"related_decisions"` // Related decisions within hop limit
|
RelatedDecisions []*RelatedDecision `json:"related_decisions"` // Related decisions within hop limit
|
||||||
TotalDecisions int `json:"total_decisions"` // Total decisions in timeline
|
TotalDecisions int `json:"total_decisions"` // Total decisions in timeline
|
||||||
@@ -249,40 +250,40 @@ type DecisionTimeline struct {
|
|||||||
|
|
||||||
// DecisionTimelineEntry represents an entry in the decision timeline
|
// DecisionTimelineEntry represents an entry in the decision timeline
|
||||||
type DecisionTimelineEntry struct {
|
type DecisionTimelineEntry struct {
|
||||||
Version int `json:"version"` // Version number
|
Version int `json:"version"` // Version number
|
||||||
DecisionHop int `json:"decision_hop"` // Decision distance from initial
|
DecisionHop int `json:"decision_hop"` // Decision distance from initial
|
||||||
ChangeReason ChangeReason `json:"change_reason"` // Why it changed
|
ChangeReason ChangeReason `json:"change_reason"` // Why it changed
|
||||||
DecisionMaker string `json:"decision_maker"` // Who made the decision
|
DecisionMaker string `json:"decision_maker"` // Who made the decision
|
||||||
DecisionRationale string `json:"decision_rationale"` // Rationale for decision
|
DecisionRationale string `json:"decision_rationale"` // Rationale for decision
|
||||||
ConfidenceEvolution float64 `json:"confidence_evolution"` // Confidence at this point
|
ConfidenceEvolution float64 `json:"confidence_evolution"` // Confidence at this point
|
||||||
Timestamp time.Time `json:"timestamp"` // When decision occurred
|
Timestamp time.Time `json:"timestamp"` // When decision occurred
|
||||||
InfluencesCount int `json:"influences_count"` // Number of influenced addresses
|
InfluencesCount int `json:"influences_count"` // Number of influenced addresses
|
||||||
InfluencedByCount int `json:"influenced_by_count"` // Number of influencing addresses
|
InfluencedByCount int `json:"influenced_by_count"` // Number of influencing addresses
|
||||||
ImpactScope ImpactScope `json:"impact_scope"` // Scope of this decision
|
ImpactScope ImpactScope `json:"impact_scope"` // Scope of this decision
|
||||||
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata,omitempty"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// RelatedDecision represents a decision related through the influence graph
|
// RelatedDecision represents a decision related through the influence graph
|
||||||
type RelatedDecision struct {
|
type RelatedDecision struct {
|
||||||
Address string `json:"address"` // UCXL address
|
Address string `json:"address"` // UCXL address
|
||||||
DecisionHops int `json:"decision_hops"` // Hops from primary address
|
DecisionHops int `json:"decision_hops"` // Hops from primary address
|
||||||
LatestVersion int `json:"latest_version"` // Latest version number
|
LatestVersion int `json:"latest_version"` // Latest version number
|
||||||
ChangeReason ChangeReason `json:"change_reason"` // Latest change reason
|
ChangeReason ChangeReason `json:"change_reason"` // Latest change reason
|
||||||
DecisionMaker string `json:"decision_maker"` // Latest decision maker
|
DecisionMaker string `json:"decision_maker"` // Latest decision maker
|
||||||
Confidence float64 `json:"confidence"` // Current confidence
|
Confidence float64 `json:"confidence"` // Current confidence
|
||||||
LastDecisionTimestamp time.Time `json:"last_decision_timestamp"` // When last decision occurred
|
LastDecisionTimestamp time.Time `json:"last_decision_timestamp"` // When last decision occurred
|
||||||
RelationshipType string `json:"relationship_type"` // Type of relationship (influences, influenced_by)
|
RelationshipType string `json:"relationship_type"` // Type of relationship (influences, influenced_by)
|
||||||
}
|
}
|
||||||
|
|
||||||
// TimelineAnalysis contains analysis metadata for decision timelines
|
// TimelineAnalysis contains analysis metadata for decision timelines
|
||||||
type TimelineAnalysis struct {
|
type TimelineAnalysis struct {
|
||||||
ChangeVelocity float64 `json:"change_velocity"` // Changes per unit time
|
ChangeVelocity float64 `json:"change_velocity"` // Changes per unit time
|
||||||
ConfidenceTrend string `json:"confidence_trend"` // increasing, decreasing, stable
|
ConfidenceTrend string `json:"confidence_trend"` // increasing, decreasing, stable
|
||||||
DominantChangeReasons []ChangeReason `json:"dominant_change_reasons"` // Most common reasons
|
DominantChangeReasons []ChangeReason `json:"dominant_change_reasons"` // Most common reasons
|
||||||
DecisionMakers map[string]int `json:"decision_makers"` // Decision maker frequency
|
DecisionMakers map[string]int `json:"decision_makers"` // Decision maker frequency
|
||||||
ImpactScopeDistribution map[ImpactScope]int `json:"impact_scope_distribution"` // Distribution of impact scopes
|
ImpactScopeDistribution map[ImpactScope]int `json:"impact_scope_distribution"` // Distribution of impact scopes
|
||||||
InfluenceNetworkSize int `json:"influence_network_size"` // Size of influence network
|
InfluenceNetworkSize int `json:"influence_network_size"` // Size of influence network
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
||||||
}
|
}
|
||||||
|
|
||||||
// NavigationDirection represents direction for temporal navigation
|
// NavigationDirection represents direction for temporal navigation
|
||||||
@@ -295,76 +296,76 @@ const (
|
|||||||
|
|
||||||
// StaleContext represents a potentially outdated context
|
// StaleContext represents a potentially outdated context
|
||||||
type StaleContext struct {
|
type StaleContext struct {
|
||||||
UCXLAddress string `json:"ucxl_address"` // Address of stale context
|
UCXLAddress string `json:"ucxl_address"` // Address of stale context
|
||||||
TemporalNode *TemporalNode `json:"temporal_node"` // Latest temporal node
|
TemporalNode *TemporalNode `json:"temporal_node"` // Latest temporal node
|
||||||
StalenessScore float64 `json:"staleness_score"` // Staleness score (0-1)
|
StalenessScore float64 `json:"staleness_score"` // Staleness score (0-1)
|
||||||
LastUpdated time.Time `json:"last_updated"` // When last updated
|
LastUpdated time.Time `json:"last_updated"` // When last updated
|
||||||
Reasons []string `json:"reasons"` // Reasons why considered stale
|
Reasons []string `json:"reasons"` // Reasons why considered stale
|
||||||
SuggestedActions []string `json:"suggested_actions"` // Suggested remediation actions
|
SuggestedActions []string `json:"suggested_actions"` // Suggested remediation actions
|
||||||
}
|
}
|
||||||
|
|
||||||
// GenerationOptions configures context generation behavior
|
// GenerationOptions configures context generation behavior
|
||||||
type GenerationOptions struct {
|
type GenerationOptions struct {
|
||||||
// Analysis options
|
// Analysis options
|
||||||
AnalyzeContent bool `json:"analyze_content"` // Analyze file content
|
AnalyzeContent bool `json:"analyze_content"` // Analyze file content
|
||||||
AnalyzeStructure bool `json:"analyze_structure"` // Analyze directory structure
|
AnalyzeStructure bool `json:"analyze_structure"` // Analyze directory structure
|
||||||
AnalyzeHistory bool `json:"analyze_history"` // Analyze git history
|
AnalyzeHistory bool `json:"analyze_history"` // Analyze git history
|
||||||
AnalyzeDependencies bool `json:"analyze_dependencies"` // Analyze dependencies
|
AnalyzeDependencies bool `json:"analyze_dependencies"` // Analyze dependencies
|
||||||
|
|
||||||
// Generation scope
|
// Generation scope
|
||||||
MaxDepth int `json:"max_depth"` // Maximum directory depth
|
MaxDepth int `json:"max_depth"` // Maximum directory depth
|
||||||
IncludePatterns []string `json:"include_patterns"` // File patterns to include
|
IncludePatterns []string `json:"include_patterns"` // File patterns to include
|
||||||
ExcludePatterns []string `json:"exclude_patterns"` // File patterns to exclude
|
ExcludePatterns []string `json:"exclude_patterns"` // File patterns to exclude
|
||||||
|
|
||||||
// Quality settings
|
// Quality settings
|
||||||
MinConfidence float64 `json:"min_confidence"` // Minimum confidence threshold
|
MinConfidence float64 `json:"min_confidence"` // Minimum confidence threshold
|
||||||
RequireValidation bool `json:"require_validation"` // Require human validation
|
RequireValidation bool `json:"require_validation"` // Require human validation
|
||||||
|
|
||||||
// External integration
|
// External integration
|
||||||
UseRAG bool `json:"use_rag"` // Use RAG for enhancement
|
UseRAG bool `json:"use_rag"` // Use RAG for enhancement
|
||||||
RAGEndpoint string `json:"rag_endpoint"` // RAG service endpoint
|
RAGEndpoint string `json:"rag_endpoint"` // RAG service endpoint
|
||||||
|
|
||||||
// Output options
|
// Output options
|
||||||
EncryptForRoles []string `json:"encrypt_for_roles"` // Roles to encrypt for
|
EncryptForRoles []string `json:"encrypt_for_roles"` // Roles to encrypt for
|
||||||
|
|
||||||
// Performance limits
|
// Performance limits
|
||||||
Timeout time.Duration `json:"timeout"` // Generation timeout
|
Timeout time.Duration `json:"timeout"` // Generation timeout
|
||||||
MaxFileSize int64 `json:"max_file_size"` // Maximum file size to analyze
|
MaxFileSize int64 `json:"max_file_size"` // Maximum file size to analyze
|
||||||
|
|
||||||
// Custom options
|
// Custom options
|
||||||
CustomOptions map[string]interface{} `json:"custom_options,omitempty"` // Additional options
|
CustomOptions map[string]interface{} `json:"custom_options,omitempty"` // Additional options
|
||||||
}
|
}
|
||||||
|
|
||||||
// HierarchyStats represents statistics about hierarchy generation
|
// HierarchyStats represents statistics about hierarchy generation
|
||||||
type HierarchyStats struct {
|
type HierarchyStats struct {
|
||||||
NodesCreated int `json:"nodes_created"` // Number of nodes created
|
NodesCreated int `json:"nodes_created"` // Number of nodes created
|
||||||
NodesUpdated int `json:"nodes_updated"` // Number of nodes updated
|
NodesUpdated int `json:"nodes_updated"` // Number of nodes updated
|
||||||
FilesAnalyzed int `json:"files_analyzed"` // Number of files analyzed
|
FilesAnalyzed int `json:"files_analyzed"` // Number of files analyzed
|
||||||
DirectoriesScanned int `json:"directories_scanned"` // Number of directories scanned
|
DirectoriesScanned int `json:"directories_scanned"` // Number of directories scanned
|
||||||
GenerationTime time.Duration `json:"generation_time"` // Time taken for generation
|
GenerationTime time.Duration `json:"generation_time"` // Time taken for generation
|
||||||
AverageConfidence float64 `json:"average_confidence"` // Average confidence score
|
AverageConfidence float64 `json:"average_confidence"` // Average confidence score
|
||||||
TotalSize int64 `json:"total_size"` // Total size of analyzed content
|
TotalSize int64 `json:"total_size"` // Total size of analyzed content
|
||||||
SkippedFiles int `json:"skipped_files"` // Number of files skipped
|
SkippedFiles int `json:"skipped_files"` // Number of files skipped
|
||||||
Errors []string `json:"errors"` // Generation errors
|
Errors []string `json:"errors"` // Generation errors
|
||||||
}
|
}
|
||||||
|
|
||||||
// ValidationResult represents the result of context validation
|
// ValidationResult represents the result of context validation
|
||||||
type ValidationResult struct {
|
type ValidationResult struct {
|
||||||
Valid bool `json:"valid"` // Whether context is valid
|
Valid bool `json:"valid"` // Whether context is valid
|
||||||
ConfidenceScore float64 `json:"confidence_score"` // Overall confidence (0-1)
|
ConfidenceScore float64 `json:"confidence_score"` // Overall confidence (0-1)
|
||||||
QualityScore float64 `json:"quality_score"` // Quality assessment (0-1)
|
QualityScore float64 `json:"quality_score"` // Quality assessment (0-1)
|
||||||
Issues []*ValidationIssue `json:"issues"` // Validation issues found
|
Issues []*ValidationIssue `json:"issues"` // Validation issues found
|
||||||
Suggestions []*ValidationSuggestion `json:"suggestions"` // Improvement suggestions
|
Suggestions []*ValidationSuggestion `json:"suggestions"` // Improvement suggestions
|
||||||
ValidatedAt time.Time `json:"validated_at"` // When validation occurred
|
ValidatedAt time.Time `json:"validated_at"` // When validation occurred
|
||||||
ValidatedBy string `json:"validated_by"` // Who/what performed validation
|
ValidatedBy string `json:"validated_by"` // Who/what performed validation
|
||||||
}
|
}
|
||||||
|
|
||||||
// ValidationIssue represents an issue found during validation
|
// ValidationIssue represents an issue found during validation
|
||||||
type ValidationIssue struct {
|
type ValidationIssue struct {
|
||||||
Severity string `json:"severity"` // error, warning, info
|
Severity string `json:"severity"` // error, warning, info
|
||||||
Message string `json:"message"` // Issue description
|
Message string `json:"message"` // Issue description
|
||||||
Field string `json:"field"` // Affected field
|
Field string `json:"field"` // Affected field
|
||||||
Suggestion string `json:"suggestion"` // How to fix
|
Suggestion string `json:"suggestion"` // How to fix
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -378,24 +379,24 @@ type ValidationSuggestion struct {
|
|||||||
|
|
||||||
// CostEstimate represents estimated resource cost for operations
|
// CostEstimate represents estimated resource cost for operations
|
||||||
type CostEstimate struct {
|
type CostEstimate struct {
|
||||||
CPUCost float64 `json:"cpu_cost"` // Estimated CPU cost
|
CPUCost float64 `json:"cpu_cost"` // Estimated CPU cost
|
||||||
MemoryCost float64 `json:"memory_cost"` // Estimated memory cost
|
MemoryCost float64 `json:"memory_cost"` // Estimated memory cost
|
||||||
StorageCost float64 `json:"storage_cost"` // Estimated storage cost
|
StorageCost float64 `json:"storage_cost"` // Estimated storage cost
|
||||||
TimeCost time.Duration `json:"time_cost"` // Estimated time cost
|
TimeCost time.Duration `json:"time_cost"` // Estimated time cost
|
||||||
TotalCost float64 `json:"total_cost"` // Total normalized cost
|
TotalCost float64 `json:"total_cost"` // Total normalized cost
|
||||||
CostBreakdown map[string]float64 `json:"cost_breakdown"` // Detailed cost breakdown
|
CostBreakdown map[string]float64 `json:"cost_breakdown"` // Detailed cost breakdown
|
||||||
}
|
}
|
||||||
|
|
||||||
// AnalysisResult represents the result of context analysis
|
// AnalysisResult represents the result of context analysis
|
||||||
type AnalysisResult struct {
|
type AnalysisResult struct {
|
||||||
QualityScore float64 `json:"quality_score"` // Overall quality (0-1)
|
QualityScore float64 `json:"quality_score"` // Overall quality (0-1)
|
||||||
ConsistencyScore float64 `json:"consistency_score"` // Consistency with hierarchy
|
ConsistencyScore float64 `json:"consistency_score"` // Consistency with hierarchy
|
||||||
CompletenessScore float64 `json:"completeness_score"` // Completeness assessment
|
CompletenessScore float64 `json:"completeness_score"` // Completeness assessment
|
||||||
AccuracyScore float64 `json:"accuracy_score"` // Accuracy assessment
|
AccuracyScore float64 `json:"accuracy_score"` // Accuracy assessment
|
||||||
Issues []*AnalysisIssue `json:"issues"` // Issues found
|
Issues []*AnalysisIssue `json:"issues"` // Issues found
|
||||||
Strengths []string `json:"strengths"` // Context strengths
|
Strengths []string `json:"strengths"` // Context strengths
|
||||||
Improvements []*Suggestion `json:"improvements"` // Improvement suggestions
|
Improvements []*Suggestion `json:"improvements"` // Improvement suggestions
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis occurred
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis occurred
|
||||||
}
|
}
|
||||||
|
|
||||||
// AnalysisIssue represents an issue found during analysis
|
// AnalysisIssue represents an issue found during analysis
|
||||||
@@ -418,86 +419,86 @@ type Suggestion struct {
|
|||||||
|
|
||||||
// Pattern represents a detected context pattern
|
// Pattern represents a detected context pattern
|
||||||
type Pattern struct {
|
type Pattern struct {
|
||||||
ID string `json:"id"` // Pattern identifier
|
ID string `json:"id"` // Pattern identifier
|
||||||
Name string `json:"name"` // Pattern name
|
Name string `json:"name"` // Pattern name
|
||||||
Description string `json:"description"` // Pattern description
|
Description string `json:"description"` // Pattern description
|
||||||
MatchCriteria map[string]interface{} `json:"match_criteria"` // Criteria for matching
|
MatchCriteria map[string]interface{} `json:"match_criteria"` // Criteria for matching
|
||||||
Confidence float64 `json:"confidence"` // Pattern confidence (0-1)
|
Confidence float64 `json:"confidence"` // Pattern confidence (0-1)
|
||||||
Frequency int `json:"frequency"` // How often pattern appears
|
Frequency int `json:"frequency"` // How often pattern appears
|
||||||
Examples []string `json:"examples"` // Example contexts that match
|
Examples []string `json:"examples"` // Example contexts that match
|
||||||
CreatedAt time.Time `json:"created_at"` // When pattern was detected
|
CreatedAt time.Time `json:"created_at"` // When pattern was detected
|
||||||
}
|
}
|
||||||
|
|
||||||
// PatternMatch represents a match between context and pattern
|
// PatternMatch represents a match between context and pattern
|
||||||
type PatternMatch struct {
|
type PatternMatch struct {
|
||||||
PatternID string `json:"pattern_id"` // ID of matched pattern
|
PatternID string `json:"pattern_id"` // ID of matched pattern
|
||||||
MatchScore float64 `json:"match_score"` // How well it matches (0-1)
|
MatchScore float64 `json:"match_score"` // How well it matches (0-1)
|
||||||
MatchedFields []string `json:"matched_fields"` // Which fields matched
|
MatchedFields []string `json:"matched_fields"` // Which fields matched
|
||||||
Confidence float64 `json:"confidence"` // Confidence in match
|
Confidence float64 `json:"confidence"` // Confidence in match
|
||||||
}
|
}
|
||||||
|
|
||||||
// ContextPattern represents a registered context pattern template
|
// ContextPattern represents a registered context pattern template
|
||||||
type ContextPattern struct {
|
type ContextPattern struct {
|
||||||
ID string `json:"id"` // Pattern identifier
|
ID string `json:"id"` // Pattern identifier
|
||||||
Name string `json:"name"` // Human-readable name
|
Name string `json:"name"` // Human-readable name
|
||||||
Description string `json:"description"` // Pattern description
|
Description string `json:"description"` // Pattern description
|
||||||
Template *ContextNode `json:"template"` // Template for matching
|
Template *ContextNode `json:"template"` // Template for matching
|
||||||
Criteria map[string]interface{} `json:"criteria"` // Matching criteria
|
Criteria map[string]interface{} `json:"criteria"` // Matching criteria
|
||||||
Priority int `json:"priority"` // Pattern priority
|
Priority int `json:"priority"` // Pattern priority
|
||||||
CreatedBy string `json:"created_by"` // Who created pattern
|
CreatedBy string `json:"created_by"` // Who created pattern
|
||||||
CreatedAt time.Time `json:"created_at"` // When created
|
CreatedAt time.Time `json:"created_at"` // When created
|
||||||
UpdatedAt time.Time `json:"updated_at"` // When last updated
|
UpdatedAt time.Time `json:"updated_at"` // When last updated
|
||||||
UsageCount int `json:"usage_count"` // How often used
|
UsageCount int `json:"usage_count"` // How often used
|
||||||
}
|
}
|
||||||
|
|
||||||
// Inconsistency represents a detected inconsistency in the context hierarchy
|
// Inconsistency represents a detected inconsistency in the context hierarchy
|
||||||
type Inconsistency struct {
|
type Inconsistency struct {
|
||||||
Type string `json:"type"` // Type of inconsistency
|
Type string `json:"type"` // Type of inconsistency
|
||||||
Description string `json:"description"` // Description of the issue
|
Description string `json:"description"` // Description of the issue
|
||||||
AffectedNodes []string `json:"affected_nodes"` // Nodes involved
|
AffectedNodes []string `json:"affected_nodes"` // Nodes involved
|
||||||
Severity string `json:"severity"` // Severity level
|
Severity string `json:"severity"` // Severity level
|
||||||
Suggestion string `json:"suggestion"` // How to resolve
|
Suggestion string `json:"suggestion"` // How to resolve
|
||||||
DetectedAt time.Time `json:"detected_at"` // When detected
|
DetectedAt time.Time `json:"detected_at"` // When detected
|
||||||
}
|
}
|
||||||
|
|
||||||
// SearchQuery represents a context search query
|
// SearchQuery represents a context search query
|
||||||
type SearchQuery struct {
|
type SearchQuery struct {
|
||||||
// Query terms
|
// Query terms
|
||||||
Query string `json:"query"` // Main search query
|
Query string `json:"query"` // Main search query
|
||||||
Tags []string `json:"tags"` // Required tags
|
Tags []string `json:"tags"` // Required tags
|
||||||
Technologies []string `json:"technologies"` // Required technologies
|
Technologies []string `json:"technologies"` // Required technologies
|
||||||
FileTypes []string `json:"file_types"` // File types to include
|
FileTypes []string `json:"file_types"` // File types to include
|
||||||
|
|
||||||
// Filters
|
// Filters
|
||||||
MinConfidence float64 `json:"min_confidence"` // Minimum confidence
|
MinConfidence float64 `json:"min_confidence"` // Minimum confidence
|
||||||
MaxAge *time.Duration `json:"max_age"` // Maximum age
|
MaxAge *time.Duration `json:"max_age"` // Maximum age
|
||||||
Roles []string `json:"roles"` // Required access roles
|
Roles []string `json:"roles"` // Required access roles
|
||||||
|
|
||||||
// Scope
|
// Scope
|
||||||
Scope []string `json:"scope"` // Paths to search within
|
Scope []string `json:"scope"` // Paths to search within
|
||||||
ExcludeScope []string `json:"exclude_scope"` // Paths to exclude
|
ExcludeScope []string `json:"exclude_scope"` // Paths to exclude
|
||||||
|
|
||||||
// Result options
|
// Result options
|
||||||
Limit int `json:"limit"` // Maximum results
|
Limit int `json:"limit"` // Maximum results
|
||||||
Offset int `json:"offset"` // Result offset
|
Offset int `json:"offset"` // Result offset
|
||||||
SortBy string `json:"sort_by"` // Sort field
|
SortBy string `json:"sort_by"` // Sort field
|
||||||
SortOrder string `json:"sort_order"` // asc, desc
|
SortOrder string `json:"sort_order"` // asc, desc
|
||||||
|
|
||||||
// Advanced options
|
// Advanced options
|
||||||
FuzzyMatch bool `json:"fuzzy_match"` // Enable fuzzy matching
|
FuzzyMatch bool `json:"fuzzy_match"` // Enable fuzzy matching
|
||||||
IncludeStale bool `json:"include_stale"` // Include stale contexts
|
IncludeStale bool `json:"include_stale"` // Include stale contexts
|
||||||
TemporalFilter *TemporalFilter `json:"temporal_filter"` // Temporal filtering
|
TemporalFilter *TemporalFilter `json:"temporal_filter"` // Temporal filtering
|
||||||
}
|
}
|
||||||
|
|
||||||
// TemporalFilter represents temporal filtering options
|
// TemporalFilter represents temporal filtering options
|
||||||
type TemporalFilter struct {
|
type TemporalFilter struct {
|
||||||
FromTime *time.Time `json:"from_time"` // Start time
|
FromTime *time.Time `json:"from_time"` // Start time
|
||||||
ToTime *time.Time `json:"to_time"` // End time
|
ToTime *time.Time `json:"to_time"` // End time
|
||||||
VersionRange *VersionRange `json:"version_range"` // Version range
|
VersionRange *VersionRange `json:"version_range"` // Version range
|
||||||
ChangeReasons []ChangeReason `json:"change_reasons"` // Specific change reasons
|
ChangeReasons []ChangeReason `json:"change_reasons"` // Specific change reasons
|
||||||
DecisionMakers []string `json:"decision_makers"` // Specific decision makers
|
DecisionMakers []string `json:"decision_makers"` // Specific decision makers
|
||||||
MinDecisionHops int `json:"min_decision_hops"` // Minimum decision hops
|
MinDecisionHops int `json:"min_decision_hops"` // Minimum decision hops
|
||||||
MaxDecisionHops int `json:"max_decision_hops"` // Maximum decision hops
|
MaxDecisionHops int `json:"max_decision_hops"` // Maximum decision hops
|
||||||
}
|
}
|
||||||
|
|
||||||
// VersionRange represents a range of versions
|
// VersionRange represents a range of versions
|
||||||
@@ -509,58 +510,58 @@ type VersionRange struct {
|
|||||||
// SearchResult represents a single search result
|
// SearchResult represents a single search result
|
||||||
type SearchResult struct {
|
type SearchResult struct {
|
||||||
Context *ResolvedContext `json:"context"` // Resolved context
|
Context *ResolvedContext `json:"context"` // Resolved context
|
||||||
TemporalNode *TemporalNode `json:"temporal_node"` // Associated temporal node
|
TemporalNode *TemporalNode `json:"temporal_node"` // Associated temporal node
|
||||||
MatchScore float64 `json:"match_score"` // How well it matches query (0-1)
|
MatchScore float64 `json:"match_score"` // How well it matches query (0-1)
|
||||||
MatchedFields []string `json:"matched_fields"` // Which fields matched
|
MatchedFields []string `json:"matched_fields"` // Which fields matched
|
||||||
Snippet string `json:"snippet"` // Text snippet showing match
|
Snippet string `json:"snippet"` // Text snippet showing match
|
||||||
Rank int `json:"rank"` // Result rank
|
Rank int `json:"rank"` // Result rank
|
||||||
}
|
}
|
||||||
|
|
||||||
// IndexMetadata represents metadata for context indexing
|
// IndexMetadata represents metadata for context indexing
|
||||||
type IndexMetadata struct {
|
type IndexMetadata struct {
|
||||||
IndexType string `json:"index_type"` // Type of index
|
IndexType string `json:"index_type"` // Type of index
|
||||||
IndexedFields []string `json:"indexed_fields"` // Fields that are indexed
|
IndexedFields []string `json:"indexed_fields"` // Fields that are indexed
|
||||||
IndexedAt time.Time `json:"indexed_at"` // When indexed
|
IndexedAt time.Time `json:"indexed_at"` // When indexed
|
||||||
IndexVersion string `json:"index_version"` // Index version
|
IndexVersion string `json:"index_version"` // Index version
|
||||||
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
Metadata map[string]interface{} `json:"metadata"` // Additional metadata
|
||||||
}
|
}
|
||||||
|
|
||||||
// DecisionAnalysis represents analysis of decision patterns
|
// DecisionAnalysis represents analysis of decision patterns
|
||||||
type DecisionAnalysis struct {
|
type DecisionAnalysis struct {
|
||||||
TotalDecisions int `json:"total_decisions"` // Total decisions analyzed
|
TotalDecisions int `json:"total_decisions"` // Total decisions analyzed
|
||||||
DecisionMakers map[string]int `json:"decision_makers"` // Decision maker frequency
|
DecisionMakers map[string]int `json:"decision_makers"` // Decision maker frequency
|
||||||
ChangeReasons map[ChangeReason]int `json:"change_reasons"` // Change reason frequency
|
ChangeReasons map[ChangeReason]int `json:"change_reasons"` // Change reason frequency
|
||||||
ImpactScopes map[ImpactScope]int `json:"impact_scopes"` // Impact scope distribution
|
ImpactScopes map[ImpactScope]int `json:"impact_scopes"` // Impact scope distribution
|
||||||
ConfidenceTrends map[string]float64 `json:"confidence_trends"` // Confidence trends over time
|
ConfidenceTrends map[string]float64 `json:"confidence_trends"` // Confidence trends over time
|
||||||
DecisionFrequency map[string]int `json:"decision_frequency"` // Decisions per time period
|
DecisionFrequency map[string]int `json:"decision_frequency"` // Decisions per time period
|
||||||
InfluenceNetworkStats *InfluenceNetworkStats `json:"influence_network_stats"` // Network statistics
|
InfluenceNetworkStats *InfluenceNetworkStats `json:"influence_network_stats"` // Network statistics
|
||||||
Patterns []*DecisionPattern `json:"patterns"` // Detected decision patterns
|
Patterns []*DecisionPattern `json:"patterns"` // Detected decision patterns
|
||||||
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
AnalyzedAt time.Time `json:"analyzed_at"` // When analysis was performed
|
||||||
AnalysisTimeSpan time.Duration `json:"analysis_time_span"` // Time span analyzed
|
AnalysisTimeSpan time.Duration `json:"analysis_time_span"` // Time span analyzed
|
||||||
}
|
}
|
||||||
|
|
||||||
// InfluenceNetworkStats represents statistics about the influence network
|
// InfluenceNetworkStats represents statistics about the influence network
|
||||||
type InfluenceNetworkStats struct {
|
type InfluenceNetworkStats struct {
|
||||||
TotalNodes int `json:"total_nodes"` // Total nodes in network
|
TotalNodes int `json:"total_nodes"` // Total nodes in network
|
||||||
TotalEdges int `json:"total_edges"` // Total influence relationships
|
TotalEdges int `json:"total_edges"` // Total influence relationships
|
||||||
AverageConnections float64 `json:"average_connections"` // Average connections per node
|
AverageConnections float64 `json:"average_connections"` // Average connections per node
|
||||||
MaxConnections int `json:"max_connections"` // Maximum connections for any node
|
MaxConnections int `json:"max_connections"` // Maximum connections for any node
|
||||||
NetworkDensity float64 `json:"network_density"` // Network density (0-1)
|
NetworkDensity float64 `json:"network_density"` // Network density (0-1)
|
||||||
ClusteringCoeff float64 `json:"clustering_coeff"` // Clustering coefficient
|
ClusteringCoeff float64 `json:"clustering_coeff"` // Clustering coefficient
|
||||||
MaxPathLength int `json:"max_path_length"` // Maximum path length in network
|
MaxPathLength int `json:"max_path_length"` // Maximum path length in network
|
||||||
CentralNodes []string `json:"central_nodes"` // Most central nodes
|
CentralNodes []string `json:"central_nodes"` // Most central nodes
|
||||||
}
|
}
|
||||||
|
|
||||||
// DecisionPattern represents a detected pattern in decision-making
|
// DecisionPattern represents a detected pattern in decision-making
|
||||||
type DecisionPattern struct {
|
type DecisionPattern struct {
|
||||||
ID string `json:"id"` // Pattern identifier
|
ID string `json:"id"` // Pattern identifier
|
||||||
Name string `json:"name"` // Pattern name
|
Name string `json:"name"` // Pattern name
|
||||||
Description string `json:"description"` // Pattern description
|
Description string `json:"description"` // Pattern description
|
||||||
Frequency int `json:"frequency"` // How often this pattern occurs
|
Frequency int `json:"frequency"` // How often this pattern occurs
|
||||||
Confidence float64 `json:"confidence"` // Confidence in pattern (0-1)
|
Confidence float64 `json:"confidence"` // Confidence in pattern (0-1)
|
||||||
ExampleDecisions []string `json:"example_decisions"` // Example decisions that match
|
ExampleDecisions []string `json:"example_decisions"` // Example decisions that match
|
||||||
Characteristics map[string]interface{} `json:"characteristics"` // Pattern characteristics
|
Characteristics map[string]interface{} `json:"characteristics"` // Pattern characteristics
|
||||||
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
|
DetectedAt time.Time `json:"detected_at"` // When pattern was detected
|
||||||
}
|
}
|
||||||
|
|
||||||
// ResolverStatistics represents statistics about context resolution operations
|
// ResolverStatistics represents statistics about context resolution operations
|
||||||
|
|||||||
Reference in New Issue
Block a user