chore: align slurp config and scaffolding
docs/development/sec-slurp-ucxl-beacon-pin-steward.md (new file, 94 lines)
@@ -0,0 +1,94 @@
# SEC-SLURP UCXL Beacon & Pin Steward Design Notes

## Purpose

- Establish the authoritative UCXL context beacon that bridges SLURP persistence with WHOOSH/role-aware agents.
- Define the Pin Steward responsibilities so DHT replication, healing, and telemetry satisfy the SEC-SLURP 1.1a acceptance criteria.
- Provide an incremental execution plan aligned with the Persistence Wiring Report and the DHT Resilience Supplement.
## UCXL Beacon Data Model

- **manifest_id** (`string`): deterministic hash of `project:task:address:version`.
- **ucxl_address** (`ucxl.Address`): canonical address that produced the manifest.
- **context_version** (`int`): monotonic version from the SLURP temporal graph.
- **source_hash** (`string`): content hash emitted by `persistContext` (LevelDB) for change detection.
- **generated_by** (`string`): CHORUS agent id / role bundle that wrote the context.
- **generated_at** (`time.Time`): timestamp from the SLURP persistence event.
- **replica_targets** (`[]string`): desired replica node ids (Pin Steward enforces `replication_factor`).
- **replica_state** (`[]ReplicaInfo`): health snapshot (`node_id`, `provider_id`, `status`, `last_checked`, `latency_ms`).
- **encryption** (`EncryptionMetadata`):
  - `dek_fingerprint` (`string`)
  - `kek_policy` (`string`): BACKBEAT rotation policy identifier.
  - `rotation_due` (`time.Time`)
- **compliance_tags** (`[]string`): SHHH/WHOOSH governance hooks (e.g. `sec-high`, `audit-required`).
- **beacon_metrics** (`BeaconMetrics`): summarized counters for cache hits, DHT retrievals, and validation errors.
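A minimal Go sketch of the manifest record, following the field list above; `ReplicaInfo`, `EncryptionMetadata`, and `BeaconMetrics` are design-time placeholders from this document, not existing types:

```go
// ContextManifest mirrors the UCXL Beacon data model described above.
// Only ucxl.Address is assumed to exist today (chorus/pkg/ucxl).
type ContextManifest struct {
	ManifestID     string             `json:"manifest_id"`     // hash of project:task:address:version
	UCXLAddress    ucxl.Address       `json:"ucxl_address"`    // canonical source address
	ContextVersion int                `json:"context_version"` // monotonic temporal-graph version
	SourceHash     string             `json:"source_hash"`     // persistContext content hash
	GeneratedBy    string             `json:"generated_by"`    // CHORUS agent id / role bundle
	GeneratedAt    time.Time          `json:"generated_at"`
	ReplicaTargets []string           `json:"replica_targets"`
	ReplicaState   []ReplicaInfo      `json:"replica_state"`
	Encryption     EncryptionMetadata `json:"encryption"`
	ComplianceTags []string           `json:"compliance_tags"`
	BeaconMetrics  BeaconMetrics      `json:"beacon_metrics"`
}
```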
### Storage Strategy

- Primary persistence in LevelDB (`pkg/slurp/slurp.go`) using the key prefix `beacon::<manifest_id>`.
- Secondary replication to the DHT under `dht://beacon/<manifest_id>`, enabling WHOOSH agents to read via the Pin Steward API.
- Optional export to a UCXL Decision Record envelope for historical traceability.
## Beacon APIs

| Endpoint | Purpose | Notes |
|----------|---------|-------|
| `Beacon.Upsert(manifest)` | Persist/update a manifest | Called by SLURP after `persistContext` succeeds. |
| `Beacon.Get(ucxlAddress)` | Resolve the latest manifest | Used by WHOOSH/agents to locate canonical context. |
| `Beacon.List(filter)` | Query manifests by tags/roles/time | Backs dashboards and Pin Steward audits. |
| `Beacon.StreamChanges(since)` | Provide a change feed for Pin Steward anti-entropy jobs | Implements backpressure and bookmark tokens. |

All APIs return an envelope with a UCXL citation and checksum so the SLURP⇄WHOOSH handoff stays auditable.
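A hedged sketch of these endpoints as a Go interface; the `ManifestEnvelope` and `ManifestFilter` types are illustrative assumptions, not settled API:

```go
// Beacon exposes the four manifest operations tabulated above.
type Beacon interface {
	// Upsert persists or updates a manifest after persistContext succeeds.
	Upsert(ctx context.Context, manifest *ContextManifest) error
	// Get resolves the latest manifest for a UCXL address.
	Get(ctx context.Context, addr ucxl.Address) (*ManifestEnvelope, error)
	// List queries manifests by tags, roles, or time window.
	List(ctx context.Context, filter ManifestFilter) ([]*ManifestEnvelope, error)
	// StreamChanges replays manifests changed after `since`; the channel
	// respects ctx cancellation, which is how backpressure is applied here.
	StreamChanges(ctx context.Context, since time.Time) (<-chan *ManifestEnvelope, error)
}
```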
## Pin Steward Responsibilities

1. **Replication Planning**
   - Read manifests via `Beacon.StreamChanges`.
   - Evaluate the current `replica_state` against the configured `replication_factor`.
   - Produce a queue of DHT store/refresh tasks (`storeAsync`, `storeSync`, `storeQuorum`).
2. **Healing & Anti-Entropy**
   - Schedule `heal_under_replicated` jobs every `anti_entropy_interval`.
   - Re-announce providers on Pulse/Reverb when the TTL falls below the threshold.
   - Record outcomes back into the manifest (`replica_state`).
3. **Envelope Encryption Enforcement**
   - Request KEK material from KACHING/SHHH as described in SEC-SLURP 1.1a.
   - Ensure DEK fingerprints match the `encryption` metadata; trigger rotation if stale.
4. **Telemetry Export**
   - Emit Prometheus counters: `pin_steward_replica_heal_total`, `pin_steward_replica_unhealthy`, `pin_steward_encryption_rotations_total`.
   - Surface aggregated health to WHOOSH dashboards for council visibility.
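A minimal sketch of the replication-planning pass, assuming the `Beacon` interface sketched earlier and the `EnsureReplication(ctx, address, factor)` signature from the DHT scaffolding in this commit; `env.Manifest` is an assumed envelope field, and error handling/telemetry are elided:

```go
// reconcileOnce walks the change feed and heals under-replicated manifests.
func reconcileOnce(ctx context.Context, beacon Beacon, rm ReplicationManager,
	since time.Time, factor int) error {
	changes, err := beacon.StreamChanges(ctx, since)
	if err != nil {
		return err
	}
	for env := range changes {
		healthy := 0
		for _, r := range env.Manifest.ReplicaState {
			if r.Status == "healthy" {
				healthy++
			}
		}
		if healthy < factor {
			// Under-replicated: schedule a heal via the replication manager.
			if err := rm.EnsureReplication(ctx, env.Manifest.UCXLAddress, factor); err != nil {
				return err
			}
		}
	}
	return nil
}
```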
## Interaction Flow

1. **SLURP Persistence**
   - `UpsertContext` → LevelDB write → manifests assembled (`persistContext`).
   - Beacon `Upsert` is called with the manifest and context hash.
2. **Pin Steward Intake**
   - `StreamChanges` yields a manifest → the steward verifies encryption metadata and schedules replication tasks.
3. **DHT Coordination**
   - `ReplicationManager.EnsureReplication` is invoked with the target factor.
   - `defaultVectorClockManager` (temporary) is to be replaced with a libp2p-aware implementation for provider TTL tracking.
4. **WHOOSH Consumption**
   - The WHOOSH SLURP proxy fetches the manifest via `Beacon.Get`, caches it in the WHOOSH DB, and attaches it to deliverable artifacts.
   - The council UI surfaces replication state and encryption posture for operator decisions.
## Incremental Delivery Plan

1. **Sprint A (persistence parity)**
   - Finalize the LevelDB manifest schema + tests (extend `slurp_persistence_test.go`).
   - Implement the Beacon interfaces within the SLURP service (in-memory + LevelDB).
   - Add Prometheus metrics for persistence reads/misses.
2. **Sprint B (Pin Steward MVP)**
   - Build the steward worker with a configurable reconciliation loop.
   - Wire it to the existing `DistributedStorage` stubs (`StoreAsync/Sync/Quorum`).
   - Emit health logs; integrate with CLI diagnostics.
3. **Sprint C (DHT resilience)**
   - Swap `defaultVectorClockManager` for the libp2p implementation; add provider TTL probes.
   - Implement the envelope encryption path via the KACHING/SHHH interfaces (replace the stubs in `pkg/crypto`).
   - Add CI checks: replica factor assertions, provider refresh tests, beacon schema validation.
4. **Sprint D (WHOOSH integration)**
   - Expose a REST/gRPC endpoint for WHOOSH to query manifests.
   - Update the WHOOSH SLURPArtifactManager to require beacon confirmation before submission.
   - Surface Pin Steward alerts in the WHOOSH admin UI.
## Open Questions

- Confirm whether Beacon manifests should include DER signatures or rely on the UCXL envelope hash.
- Determine storage for historical manifests (append-only log vs. latest-only) to support temporal rewind.
- Align Pin Steward job scheduling with the existing BACKBEAT cadence to avoid conflicting rotations.

## Next Actions

- Prototype the `BeaconStore` interface + LevelDB implementation in the SLURP package.
- Document the Pin Steward anti-entropy algorithm with pseudocode and integrate it into the SEC-SLURP test plan.
- Sync with the WHOOSH team on the manifest query contract (REST vs. gRPC; pagination semantics).
docs/development/sec-slurp-whoosh-integration-demo.md (new file, 52 lines)
@@ -0,0 +1,52 @@
# WHOOSH ↔ CHORUS Integration Demo Plan (SEC-SLURP Track)

## Demo Objectives

- Showcase the end-to-end persistence → UCXL beacon → Pin Steward → WHOOSH artifact submission flow.
- Validate role-based agent interactions with SLURP contexts (resolver + temporal graph) prior to DHT hardening.
- Capture the metrics/telemetry needed for SEC-SLURP exit criteria and WHOOSH Phase 1 sign-off.
## Sequenced Milestones

1. **Persistence Validation Session**
   - Run `GOWORK=off go test ./pkg/slurp/...` with the stubs patched; demo the LevelDB warm/load path using `slurp_persistence_test.go`.
   - Inspect beacon manifests via the CLI (`slurpctl beacon list`).
   - Deliverable: test log + manifest sample archived in UCXL.
2. **Beacon → Pin Steward Dry Run**
   - Replay stored manifests through the Pin Steward worker with a mock DHT backend.
   - Show the replication planner queue and telemetry counters (`pin_steward_replica_heal_total`).
   - Deliverable: decision record linking each manifest to its replication outcome.
3. **WHOOSH SLURP Proxy Alignment**
   - Point the WHOOSH dev stack (`npm run dev`) at a local SLURP with the beacon API enabled.
   - Walk through council formation and capture a SLURP artifact submission with the beacon confirmation modal.
   - Deliverable: screen recording + WHOOSH DB entry referencing the beacon manifest id.
4. **DHT Resilience Checkpoint**
   - Switch the Pin Steward to the libp2p DHT (once wired) and run a replication + provider TTL check.
   - Fail one node intentionally and demonstrate the heal path plus the alert surfaced in the WHOOSH UI.
   - Deliverable: telemetry dump + alert screenshot.
5. **Governance & Telemetry Wrap-Up**
   - Export Prometheus metrics (cache hit/miss, beacon writes, replication heals) into the KACHING dashboard.
   - Publish a Decision Record documenting the UCXL address flow, referencing the SEC-SLURP docs.
## Roles & Responsibilities

- **SLURP Team:** finalize the persistence build, implement the beacon APIs, own the Pin Steward worker.
- **WHOOSH Team:** wire the beacon client, expose replication/encryption status in the UI, capture council telemetry.
- **KACHING/SHHH Stakeholders:** validate telemetry ingestion and encryption custody notes.
- **Program Management:** schedule the demo rehearsal; ensure Decision Records and UCXL addresses are recorded.

## Tooling & Environments

- Local cluster via `docker compose up slurp whoosh pin-steward` (to be scripted in `commands/`).
- Use the `make demo-sec-slurp` target to run the integration harness (to be added).
- Prometheus/Grafana docker compose for metrics validation.

## Success Criteria

- Beacon manifests are accessible from the WHOOSH UI within 2s average latency.
- The Pin Steward resolves an under-replicated manifest within the demo timeline (<30s) and records the healing event.
- All demo steps are logged with UCXL references, and SHHH redaction checks pass.

## Open Items

- Need a sample repo/issues to feed the WHOOSH analyzer (consider `project-queues/active/WHOOSH/demo-data`).
- Determine the minimal DHT cluster footprint for the demo (3 vs. 5 nodes).
- Align on the telemetry retention window for the demo (24h?).
docs/progress/SEC-SLURP-1.1a-supplemental.md (new file, 32 lines)
@@ -0,0 +1,32 @@
# SEC-SLURP 1.1a – DHT Resilience Supplement

## Requirements (derived from `docs/Modules/DHT.md`)

1. **Real DHT state & persistence**
   - Replace mock DHT usage with libp2p-based storage or an equivalent real implementation.
   - Store DHT/blockstore data on persistent volumes (named volumes/ZFS/NFS) with node placement constraints.
   - Ensure bootstrap nodes are stateful and survive container churn.
2. **Pin Steward + replication policy**
   - Introduce a Pin Steward service that tracks UCXL CID manifests and enforces a replication factor (e.g. 3–5 replicas).
   - Re-announce providers on Pulse/Reverb and heal under-replicated content.
   - Schedule anti-entropy jobs to verify and repair replicas.
3. **Envelope encryption & shared key custody**
   - Implement envelope encryption (DEK+KEK) with threshold/organizational custody rather than per-role ownership.
   - Store KEK metadata with the UCXL manifests; rotate via BACKBEAT.
   - Update the crypto/key-manager stubs to real implementations once available.
4. **Shared UCXL Beacon index**
   - Maintain an authoritative CID registry (DR/UCXL) replicated outside individual agents.
   - Ensure metadata updates are durable and role-agnostic to prevent stranded CIDs.
5. **CI/SLO validation**
   - Add automated tests/health checks covering provider refresh, replication factor, and persistent-storage guarantees.
   - Gate releases on DHT resilience checks (provider TTLs, replica counts).

## Integration Path for SEC-SLURP 1.1

- Incorporate the above requirements as acceptance criteria alongside LevelDB persistence.
- Sequence the work: migrate DHT interactions, introduce the Pin Steward, implement envelope crypto, and wire CI validation.
- Attach artifacts (Pin Steward design, envelope crypto spec, CI scripts) to the Phase 1 deliverable checklist.
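For requirement 3, a minimal sketch of the DEK+KEK flow using only the Go standard library; KEK custody, threshold splitting, and retrieval from KACHING/SHHH are out of scope here, and the helper name is an assumption:

```go
package envelope

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

// encryptEnvelope encrypts payload with a fresh DEK (AES-256-GCM), then
// wraps the DEK under the organizational KEK (which must be 16/24/32 bytes).
// The KEK itself never appears in the stored manifest.
func encryptEnvelope(payload, kek []byte) (ciphertext, wrappedDEK []byte, err error) {
	dek := make([]byte, 32)
	if _, err = rand.Read(dek); err != nil {
		return nil, nil, err
	}
	seal := func(key, plaintext []byte) ([]byte, error) {
		block, err := aes.NewCipher(key)
		if err != nil {
			return nil, err
		}
		gcm, err := cipher.NewGCM(block)
		if err != nil {
			return nil, err
		}
		nonce := make([]byte, gcm.NonceSize())
		if _, err := rand.Read(nonce); err != nil {
			return nil, err
		}
		// Prepend the nonce so decryption can recover it.
		return gcm.Seal(nonce, nonce, plaintext, nil), nil
	}
	if ciphertext, err = seal(dek, payload); err != nil {
		return nil, nil, err
	}
	if wrappedDEK, err = seal(kek, dek); err != nil {
		return nil, nil, err
	}
	return ciphertext, wrappedDEK, nil
}
```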
@@ -5,10 +5,14 @@
 - Upgraded SLURP’s lifecycle so initialization bootstraps cached context data from disk, cache misses hydrate from persistence, successful `UpsertContext` calls write back to LevelDB, and shutdown closes the store with error telemetry.
 - Introduced `pkg/slurp/slurp_persistence_test.go` to confirm contexts survive process restarts and can be resolved after clearing in-memory caches.
 - Instrumented cache/persistence metrics so hit/miss ratios and storage failures are tracked for observability.
-- Attempted `GOWORK=off go test ./pkg/slurp`; execution was blocked by legacy references to `config.Authority*` symbols in `pkg/slurp/context`, so the new test did not run.
+- Implemented lightweight crypto/key-management stubs (`pkg/crypto/role_crypto_stub.go`, `pkg/crypto/key_manager_stub.go`) so SLURP modules compile while the production stack is ported.
+- Updated DHT distribution and encrypted storage layers (`pkg/slurp/distribution/dht_impl.go`, `pkg/slurp/storage/encrypted_storage.go`) to use the crypto stubs, adding per-role fingerprints and durable decoding logic.
+- Expanded storage metadata models (`pkg/slurp/storage/types.go`, `pkg/slurp/storage/backup_manager.go`) with fields referenced by backup/replication flows (progress, error messages, retention, data size).
+- Incrementally stubbed/simplified distributed storage helpers to inch toward a compilable SLURP package.
+- Attempted `GOWORK=off go test ./pkg/slurp`; the original authority-level blocker is resolved, but builds still fail in storage/index code due to remaining stub work (e.g., Bleve queries, DHT helpers).

 ## Recommended Next Steps
-- Address the `config.Authority*` symbol drift (or scope down the impacted packages) so the SLURP test suite can compile cleanly, then rerun `GOWORK=off go test ./pkg/slurp` to validate persistence changes.
-- Feed the durable store into the resolver and temporal graph implementations to finish the remaining Phase 1 SLURP roadmap items.
-- Expand Prometheus metrics and logging to track cache hit/miss ratios plus persistence errors for SEC-SLURP observability goals.
-- Review unrelated changes on `feature/phase-4-real-providers` (e.g., docker-compose edits) and either align them with this roadmap work or revert to keep the branch focused.
+- Stub the remaining storage/index dependencies (Bleve query scaffolding, UCXL helpers, `errorCh` queues, cache regex usage) or neutralize the heavy modules so that `GOWORK=off go test ./pkg/slurp` compiles and runs.
+- Feed the durable store into the resolver and temporal graph implementations to finish the SEC-SLURP 1.1 milestone once the package builds cleanly.
+- Extend Prometheus metrics/logging to track cache hit/miss ratios plus persistence errors for observability alignment.
+- Review unrelated changes still tracked on `feature/phase-4-real-providers` (e.g., docker-compose edits) and either align them with this roadmap work or revert for focus.
@@ -131,6 +131,26 @@ type ResolutionConfig struct {
 // SlurpConfig defines SLURP settings
 type SlurpConfig struct {
 	Enabled bool `yaml:"enabled"`
+	BaseURL string `yaml:"base_url"`
+	APIKey string `yaml:"api_key"`
+	Timeout time.Duration `yaml:"timeout"`
+	RetryCount int `yaml:"retry_count"`
+	RetryDelay time.Duration `yaml:"retry_delay"`
+	TemporalAnalysis SlurpTemporalAnalysisConfig `yaml:"temporal_analysis"`
+	Performance SlurpPerformanceConfig `yaml:"performance"`
+}
+
+// SlurpTemporalAnalysisConfig captures temporal behaviour tuning for SLURP.
+type SlurpTemporalAnalysisConfig struct {
+	MaxDecisionHops int `yaml:"max_decision_hops"`
+	StalenessCheckInterval time.Duration `yaml:"staleness_check_interval"`
+	StalenessThreshold float64 `yaml:"staleness_threshold"`
+}
+
+// SlurpPerformanceConfig exposes performance related tunables for SLURP.
+type SlurpPerformanceConfig struct {
+	MaxConcurrentResolutions int `yaml:"max_concurrent_resolutions"`
+	MetricsCollectionInterval time.Duration `yaml:"metrics_collection_interval"`
 }

 // WHOOSHAPIConfig defines WHOOSH API integration settings
@@ -212,6 +232,20 @@ func LoadFromEnvironment() (*Config, error) {
 		},
 		Slurp: SlurpConfig{
 			Enabled: getEnvBoolOrDefault("CHORUS_SLURP_ENABLED", false),
+			BaseURL: getEnvOrDefault("CHORUS_SLURP_API_BASE_URL", "http://localhost:9090"),
+			APIKey: getEnvOrFileContent("CHORUS_SLURP_API_KEY", "CHORUS_SLURP_API_KEY_FILE"),
+			Timeout: getEnvDurationOrDefault("CHORUS_SLURP_API_TIMEOUT", 15*time.Second),
+			RetryCount: getEnvIntOrDefault("CHORUS_SLURP_API_RETRY_COUNT", 3),
+			RetryDelay: getEnvDurationOrDefault("CHORUS_SLURP_API_RETRY_DELAY", 2*time.Second),
+			TemporalAnalysis: SlurpTemporalAnalysisConfig{
+				MaxDecisionHops: getEnvIntOrDefault("CHORUS_SLURP_MAX_DECISION_HOPS", 5),
+				StalenessCheckInterval: getEnvDurationOrDefault("CHORUS_SLURP_STALENESS_CHECK_INTERVAL", 5*time.Minute),
+				StalenessThreshold: 0.2,
+			},
+			Performance: SlurpPerformanceConfig{
+				MaxConcurrentResolutions: getEnvIntOrDefault("CHORUS_SLURP_MAX_CONCURRENT_RESOLUTIONS", 4),
+				MetricsCollectionInterval: getEnvDurationOrDefault("CHORUS_SLURP_METRICS_COLLECTION_INTERVAL", time.Minute),
+			},
 		},
 		Security: SecurityConfig{
 			KeyRotationDays: getEnvIntOrDefault("CHORUS_KEY_ROTATION_DAYS", 30),
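A short sketch of exercising the new SLURP settings from the hunk above; `config.LoadFromEnvironment` and the env var names come straight from the diff, while the example values are assumptions:

```go
package main

import (
	"fmt"
	"os"

	"chorus/pkg/config"
)

func main() {
	// Values mirror the defaults wired in the hunk above.
	os.Setenv("CHORUS_SLURP_ENABLED", "true")
	os.Setenv("CHORUS_SLURP_API_BASE_URL", "http://localhost:9090")
	os.Setenv("CHORUS_SLURP_API_TIMEOUT", "15s")

	cfg, err := config.LoadFromEnvironment()
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Slurp.Enabled, cfg.Slurp.BaseURL, cfg.Slurp.Timeout)
}
```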
pkg/crypto/key_manager_stub.go (new file, 23 lines)
@@ -0,0 +1,23 @@
package crypto

import "time"

// GenerateKey returns a deterministic placeholder key identifier for the given role.
func (km *KeyManager) GenerateKey(role string) (string, error) {
	return "stub-key-" + role, nil
}

// DeprecateKey is a no-op in the stub implementation.
func (km *KeyManager) DeprecateKey(keyID string) error {
	return nil
}

// GetKeysForRotation mirrors SEC-SLURP-1.1 key rotation discovery while remaining inert.
func (km *KeyManager) GetKeysForRotation(maxAge time.Duration) ([]*KeyInfo, error) {
	return nil, nil
}

// ValidateKeyFingerprint accepts all fingerprints in the stubbed environment.
func (km *KeyManager) ValidateKeyFingerprint(role, fingerprint string) bool {
	return true
}
pkg/crypto/role_crypto_stub.go (new file, 75 lines)
@@ -0,0 +1,75 @@
package crypto

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"

	"chorus/pkg/config"
)

// RoleCrypto is a compile-time stand-in for role-scoped encryption:
// payloads are base64-encoded, not encrypted.
type RoleCrypto struct {
	config *config.Config
}

// NewRoleCrypto validates the config and ignores the key-manager/audit hooks
// that the production constructor will eventually take.
func NewRoleCrypto(cfg *config.Config, _ interface{}, _ interface{}, _ interface{}) (*RoleCrypto, error) {
	if cfg == nil {
		return nil, fmt.Errorf("config cannot be nil")
	}
	return &RoleCrypto{config: cfg}, nil
}

// EncryptForRole base64-encodes data and returns a SHA-256 fingerprint of the plaintext.
func (rc *RoleCrypto) EncryptForRole(data []byte, role string) ([]byte, string, error) {
	if len(data) == 0 {
		return []byte{}, rc.fingerprint(data), nil
	}
	encoded := make([]byte, base64.StdEncoding.EncodedLen(len(data)))
	base64.StdEncoding.Encode(encoded, data)
	return encoded, rc.fingerprint(data), nil
}

// DecryptForRole reverses EncryptForRole; the fingerprint argument is ignored by the stub.
func (rc *RoleCrypto) DecryptForRole(data []byte, role string, _ string) ([]byte, error) {
	if len(data) == 0 {
		return []byte{}, nil
	}
	decoded := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
	n, err := base64.StdEncoding.Decode(decoded, data)
	if err != nil {
		return nil, err
	}
	return decoded[:n], nil
}

// EncryptContextForRoles JSON-encodes the payload and base64-wraps it once for all roles.
func (rc *RoleCrypto) EncryptContextForRoles(payload interface{}, roles []string, _ []string) ([]byte, error) {
	raw, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	encoded := make([]byte, base64.StdEncoding.EncodedLen(len(raw)))
	base64.StdEncoding.Encode(encoded, raw)
	return encoded, nil
}

func (rc *RoleCrypto) fingerprint(data []byte) string {
	sum := sha256.Sum256(data)
	return base64.StdEncoding.EncodeToString(sum[:])
}

// StorageAccessController gates store/retrieve operations by role.
type StorageAccessController interface {
	CanStore(role, key string) bool
	CanRetrieve(role, key string) bool
}

// StorageAuditLogger records crypto and access events for audits.
type StorageAuditLogger interface {
	LogEncryptionOperation(role, key, operation string, success bool)
	LogDecryptionOperation(role, key, operation string, success bool)
	LogKeyRotation(role, keyID string, success bool, message string)
	LogError(message string)
	LogAccessDenial(role, key, operation string)
}

// KeyInfo identifies a role-scoped key for rotation bookkeeping.
type KeyInfo struct {
	Role  string
	KeyID string
}
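A round-trip through the stub, useful as a wiring smoke test while the real crypto lands; this sketch assumes it sits in a `_test.go` file within package crypto, and note the "cipher" is only base64, so it guards plumbing, not confidentiality:

```go
package crypto

import (
	"fmt"

	"chorus/pkg/config"
)

// ExampleRoleCrypto encrypts and decrypts a payload for one role.
func ExampleRoleCrypto() {
	rc, err := NewRoleCrypto(&config.Config{}, nil, nil, nil)
	if err != nil {
		panic(err)
	}
	cipherText, fingerprint, _ := rc.EncryptForRole([]byte(`{"ctx":"demo"}`), "arbiter")
	plain, _ := rc.DecryptForRole(cipherText, "arbiter", fingerprint)
	fmt.Println(string(plain))
	// Output: {"ctx":"demo"}
}
```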
pkg/slurp/alignment/stubs.go (new file, 284 lines)
@@ -0,0 +1,284 @@
package alignment

import "time"

// GoalStatistics summarizes goal management metrics.
type GoalStatistics struct {
	TotalGoals  int
	ActiveGoals int
	Completed   int
	Archived    int
	LastUpdated time.Time
}

// AlignmentGapAnalysis captures detected misalignments that require follow-up.
type AlignmentGapAnalysis struct {
	Address    string
	Severity   string
	Findings   []string
	DetectedAt time.Time
}

// AlignmentComparison provides a simple comparison view between two contexts.
type AlignmentComparison struct {
	PrimaryScore   float64
	SecondaryScore float64
	Differences    []string
}

// AlignmentStatistics aggregates assessment metrics across contexts.
type AlignmentStatistics struct {
	TotalAssessments int
	AverageScore     float64
	SuccessRate      float64
	FailureRate      float64
	LastUpdated      time.Time
}

// ProgressHistory captures historical progress samples for a goal.
type ProgressHistory struct {
	GoalID  string
	Samples []ProgressSample
}

// ProgressSample represents a single progress measurement.
type ProgressSample struct {
	Timestamp  time.Time
	Percentage float64
}

// CompletionPrediction represents a simple completion forecast for a goal.
type CompletionPrediction struct {
	GoalID          string
	EstimatedFinish time.Time
	Confidence      float64
}

// ProgressStatistics aggregates goal progress metrics.
type ProgressStatistics struct {
	AverageCompletion float64
	OpenGoals         int
	OnTrackGoals      int
	AtRiskGoals       int
}

// DriftHistory tracks historical drift events.
type DriftHistory struct {
	Address string
	Events  []DriftEvent
}

// DriftEvent captures a single drift occurrence.
type DriftEvent struct {
	Timestamp time.Time
	Severity  DriftSeverity
	Details   string
}

// DriftThresholds defines sensitivity thresholds for drift detection.
type DriftThresholds struct {
	SeverityThreshold DriftSeverity
	ScoreDelta        float64
	ObservationWindow time.Duration
}

// DriftPatternAnalysis summarizes detected drift patterns.
type DriftPatternAnalysis struct {
	Patterns []string
	Summary  string
}

// DriftPrediction provides a lightweight stub for future drift forecasting.
type DriftPrediction struct {
	Address    string
	Horizon    time.Duration
	Severity   DriftSeverity
	Confidence float64
}

// DriftAlert represents an alert emitted when drift exceeds thresholds.
type DriftAlert struct {
	ID        string
	Address   string
	Severity  DriftSeverity
	CreatedAt time.Time
	Message   string
}

// GoalRecommendation summarises next actions for a specific goal.
type GoalRecommendation struct {
	GoalID      string
	Title       string
	Description string
	Priority    int
}

// StrategicRecommendation captures higher-level alignment guidance.
type StrategicRecommendation struct {
	Theme         string
	Summary       string
	Impact        string
	RecommendedBy string
}

// PrioritizedRecommendation wraps a recommendation with ranking metadata.
type PrioritizedRecommendation struct {
	Recommendation *AlignmentRecommendation
	Score          float64
	Rank           int
}

// RecommendationHistory tracks lifecycle updates for a recommendation.
type RecommendationHistory struct {
	RecommendationID string
	Entries          []RecommendationHistoryEntry
}

// RecommendationHistoryEntry represents a single change entry.
type RecommendationHistoryEntry struct {
	Timestamp time.Time
	Status    ImplementationStatus
	Notes     string
}

// ImplementationStatus reflects execution state for recommendations.
type ImplementationStatus string

const (
	ImplementationPending ImplementationStatus = "pending"
	ImplementationActive  ImplementationStatus = "active"
	ImplementationBlocked ImplementationStatus = "blocked"
	ImplementationDone    ImplementationStatus = "completed"
)

// RecommendationEffectiveness offers coarse metrics on outcome quality.
type RecommendationEffectiveness struct {
	SuccessRate float64
	AverageTime time.Duration
	Feedback    []string
}

// RecommendationStatistics aggregates recommendation issuance metrics.
type RecommendationStatistics struct {
	TotalCreated    int
	TotalCompleted  int
	AveragePriority float64
	LastUpdated     time.Time
}

// AlignmentMetrics is a lightweight placeholder exported for engine integration.
type AlignmentMetrics struct {
	Assessments  int
	SuccessRate  float64
	FailureRate  float64
	AverageScore float64
}

// GoalMetrics is a stub summarising per-goal metrics.
type GoalMetrics struct {
	GoalID       string
	AverageScore float64
	SuccessRate  float64
	LastUpdated  time.Time
}

// ProgressMetrics is a stub capturing aggregate progress data.
type ProgressMetrics struct {
	OverallCompletion float64
	ActiveGoals       int
	CompletedGoals    int
	UpdatedAt         time.Time
}

// MetricsTrends wraps high-level trend information.
type MetricsTrends struct {
	Metric    string
	TrendLine []float64
	Timestamp time.Time
}

// MetricsReport represents a generated metrics report placeholder.
type MetricsReport struct {
	ID        string
	Generated time.Time
	Summary   string
}

// MetricsConfiguration reflects configuration for metrics collection.
type MetricsConfiguration struct {
	Enabled  bool
	Interval time.Duration
}

// SyncResult summarises a synchronisation run.
type SyncResult struct {
	SyncedItems int
	Errors      []string
}

// ImportResult summarises the outcome of an import operation.
type ImportResult struct {
	Imported int
	Skipped  int
	Errors   []string
}

// SyncSettings captures synchronisation preferences.
type SyncSettings struct {
	Enabled  bool
	Interval time.Duration
}

// SyncStatus provides health information about sync processes.
type SyncStatus struct {
	LastSync time.Time
	Healthy  bool
	Message  string
}

// AssessmentValidation provides validation results for assessments.
type AssessmentValidation struct {
	Valid     bool
	Issues    []string
	CheckedAt time.Time
}

// ConfigurationValidation summarises configuration validation status.
type ConfigurationValidation struct {
	Valid    bool
	Messages []string
}

// WeightsValidation describes validation for weighting schemes.
type WeightsValidation struct {
	Normalized  bool
	Adjustments map[string]float64
}

// ConsistencyIssue represents a detected consistency issue.
type ConsistencyIssue struct {
	Description string
	Severity    DriftSeverity
	DetectedAt  time.Time
}

// AlignmentHealthCheck is a stub for health check outputs.
type AlignmentHealthCheck struct {
	Status    string
	Details   string
	CheckedAt time.Time
}

// NotificationRules captures notification configuration stubs.
type NotificationRules struct {
	Enabled  bool
	Channels []string
}

// NotificationRecord represents a delivered notification.
type NotificationRecord struct {
	ID        string
	Timestamp time.Time
	Recipient string
	Status    string
}
@@ -4,7 +4,6 @@ import (
 	"time"

 	"chorus/pkg/ucxl"
-	slurpContext "chorus/pkg/slurp/context"
 )

 // ProjectGoal represents a high-level project objective
@@ -29,9 +29,22 @@ type ContextNode struct {
 	OverridesParent    bool `json:"overrides_parent"`    // Whether this overrides parent context
 	ContextSpecificity int  `json:"context_specificity"` // Specificity level (higher = more specific)
 	AppliesToChildren  bool `json:"applies_to_children"` // Whether this applies to child directories
+	AppliesTo          ContextScope `json:"applies_to"`         // Scope of application within hierarchy
+	Parent             *string      `json:"parent,omitempty"`   // Parent context path
+	Children           []string     `json:"children,omitempty"` // Child context paths

-	// Metadata
+	// File metadata
+	FileType     string     `json:"file_type"`               // File extension or type
+	Language     *string    `json:"language,omitempty"`      // Programming language
+	Size         *int64     `json:"size,omitempty"`          // File size in bytes
+	LastModified *time.Time `json:"last_modified,omitempty"` // Last modification timestamp
+	ContentHash  *string    `json:"content_hash,omitempty"`  // Content hash for change detection
+
+	// Temporal metadata
 	GeneratedAt time.Time `json:"generated_at"` // When context was generated
+	UpdatedAt   time.Time `json:"updated_at"`   // Last update timestamp
+	CreatedBy   string    `json:"created_by"`   // Who created the context
+	WhoUpdated  string    `json:"who_updated"`  // Who performed the last update
 	RAGConfidence float64 `json:"rag_confidence"` // RAG system confidence (0-1)

 	// Access control
@@ -364,8 +364,8 @@ func (ch *ConsistentHashingImpl) FindClosestNodes(key string, count int) ([]stri
 		if hash >= keyHash {
 			distance = hash - keyHash
 		} else {
-			// Wrap around distance
-			distance = (1<<32 - keyHash) + hash
+			// Wrap around distance without overflowing 32-bit space
+			distance = uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(hash))
 		}

 		distances = append(distances, struct {
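The fix above promotes the arithmetic to uint64 so the `1<<32` term does not overflow a 32-bit value. A small runnable illustration of the same ring distance (values are illustrative, not from the codebase):

```go
package main

import "fmt"

// ringDistance reproduces the wrap-around logic from the hunk above
// on the 32-bit consistent-hash ring.
func ringDistance(keyHash, hash uint32) uint32 {
	if hash >= keyHash {
		return hash - keyHash
	}
	// Promote to uint64 so 1<<32 is representable, then truncate.
	return uint32((uint64(1)<<32 - uint64(keyHash)) + uint64(hash))
}

func main() {
	// Wrapping case: key near the top of the ring, node near the bottom.
	fmt.Println(ringDistance(0xFFFFFFF0, 0x10)) // 32 (0x10 to the wrap + 0x10 past it)
	// Non-wrapping case.
	fmt.Println(ringDistance(100, 250)) // 150
}
```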
@@ -7,19 +7,19 @@ import (
 	"sync"
 	"time"

-	"chorus/pkg/dht"
-	"chorus/pkg/crypto"
-	"chorus/pkg/election"
 	"chorus/pkg/config"
-	"chorus/pkg/ucxl"
+	"chorus/pkg/crypto"
+	"chorus/pkg/dht"
+	"chorus/pkg/election"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )

 // DistributionCoordinator orchestrates distributed context operations across the cluster
 type DistributionCoordinator struct {
 	mu          sync.RWMutex
 	config      *config.Config
-	dht         *dht.DHT
+	dht         dht.DHT
 	roleCrypto  *crypto.RoleCrypto
 	election    election.Election
 	distributor ContextDistributor
@@ -220,14 +220,14 @@ type StorageMetrics struct {
 // NewDistributionCoordinator creates a new distribution coordinator
 func NewDistributionCoordinator(
 	config *config.Config,
-	dht *dht.DHT,
+	dhtInstance dht.DHT,
 	roleCrypto *crypto.RoleCrypto,
 	election election.Election,
 ) (*DistributionCoordinator, error) {
 	if config == nil {
 		return nil, fmt.Errorf("config is required")
 	}
-	if dht == nil {
+	if dhtInstance == nil {
 		return nil, fmt.Errorf("DHT instance is required")
 	}
 	if roleCrypto == nil {
@@ -238,14 +238,14 @@ func NewDistributionCoordinator(
 	}

 	// Create distributor
-	distributor, err := NewDHTContextDistributor(dht, roleCrypto, election, config)
+	distributor, err := NewDHTContextDistributor(dhtInstance, roleCrypto, election, config)
 	if err != nil {
 		return nil, fmt.Errorf("failed to create context distributor: %w", err)
 	}

 	coord := &DistributionCoordinator{
 		config:      config,
-		dht:         dht,
+		dht:         dhtInstance,
 		roleCrypto:  roleCrypto,
 		election:    election,
 		distributor: distributor,
@@ -399,7 +399,7 @@ func (dc *DistributionCoordinator) GetClusterHealth() (*ClusterHealth, error) {

 	health := &ClusterHealth{
 		OverallStatus: dc.calculateOverallHealth(),
-		NodeCount: len(dc.dht.GetConnectedPeers()) + 1, // +1 for current node
+		NodeCount: len(dc.healthMonitors) + 1, // Placeholder count including current node
 		HealthyNodes: 0,
 		UnhealthyNodes: 0,
 		ComponentHealth: make(map[string]*ComponentHealth),
@@ -736,14 +736,14 @@ func (dc *DistributionCoordinator) getDefaultDistributionOptions() *Distribution
 	return &DistributionOptions{
 		ReplicationFactor: 3,
 		ConsistencyLevel: ConsistencyEventual,
-		EncryptionLevel: crypto.AccessMedium,
+		EncryptionLevel: crypto.AccessLevel(slurpContext.AccessMedium),
 		ConflictResolution: ResolutionMerged,
 	}
 }

 func (dc *DistributionCoordinator) getAccessLevelForRole(role string) crypto.AccessLevel {
 	// Placeholder implementation
-	return crypto.AccessMedium
+	return crypto.AccessLevel(slurpContext.AccessMedium)
 }

 func (dc *DistributionCoordinator) getAllowedCompartments(role string) []string {
@@ -796,11 +796,11 @@ func (dc *DistributionCoordinator) updatePerformanceMetrics() {

 func (dc *DistributionCoordinator) priorityFromSeverity(severity ConflictSeverity) Priority {
 	switch severity {
-	case SeverityCritical:
+	case ConflictSeverityCritical:
 		return PriorityCritical
-	case SeverityHigh:
+	case ConflictSeverityHigh:
 		return PriorityHigh
-	case SeverityMedium:
+	case ConflictSeverityMedium:
 		return PriorityNormal
 	default:
 		return PriorityLow
@@ -9,12 +9,12 @@ import (
 	"sync"
 	"time"

-	"chorus/pkg/dht"
-	"chorus/pkg/crypto"
-	"chorus/pkg/election"
-	"chorus/pkg/ucxl"
 	"chorus/pkg/config"
+	"chorus/pkg/crypto"
+	"chorus/pkg/dht"
+	"chorus/pkg/election"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )

 // ContextDistributor handles distributed context operations via DHT
@@ -61,6 +61,12 @@ type ContextDistributor interface {
 	// SetReplicationPolicy configures replication behavior
 	SetReplicationPolicy(policy *ReplicationPolicy) error
+
+	// Start initializes background distribution routines
+	Start(ctx context.Context) error
+
+	// Stop releases distribution resources
+	Stop(ctx context.Context) error
 }

 // DHTStorage provides direct DHT storage operations for context data
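With `Start`/`Stop` added to the interface, callers can drive the distributor lifecycle explicitly. A hedged sketch, assuming it lives alongside the constructor shown later in this commit and that all dependencies are already wired:

```go
// runDistributor builds a distributor, starts it, and serves until ctx is cancelled.
func runDistributor(ctx context.Context, d dht.DHT, rc *crypto.RoleCrypto,
	el election.Election, cfg *config.Config) error {
	dist, err := NewDHTContextDistributor(d, rc, el, cfg)
	if err != nil {
		return err
	}
	if err := dist.Start(ctx); err != nil {
		return err
	}
	// Stop with a fresh context so shutdown is not tied to the run context.
	defer dist.Stop(context.Background())

	<-ctx.Done() // serve until cancelled
	return nil
}
```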
@@ -245,10 +251,10 @@
 type ConflictSeverity string

 const (
-	SeverityLow      ConflictSeverity = "low"      // Low severity - auto-resolvable
-	SeverityMedium   ConflictSeverity = "medium"   // Medium severity - may need review
-	SeverityHigh     ConflictSeverity = "high"     // High severity - needs attention
-	SeverityCritical ConflictSeverity = "critical" // Critical - manual intervention required
+	ConflictSeverityLow      ConflictSeverity = "low"      // Low severity - auto-resolvable
+	ConflictSeverityMedium   ConflictSeverity = "medium"   // Medium severity - may need review
+	ConflictSeverityHigh     ConflictSeverity = "high"     // High severity - needs attention
+	ConflictSeverityCritical ConflictSeverity = "critical" // Critical - manual intervention required
 )

 // ResolutionStrategy represents conflict resolution strategy configuration
@@ -10,18 +10,18 @@ import (
 	"sync"
 	"time"

-	"chorus/pkg/dht"
-	"chorus/pkg/crypto"
-	"chorus/pkg/election"
-	"chorus/pkg/ucxl"
 	"chorus/pkg/config"
+	"chorus/pkg/crypto"
+	"chorus/pkg/dht"
+	"chorus/pkg/election"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )

 // DHTContextDistributor implements ContextDistributor using CHORUS DHT infrastructure
 type DHTContextDistributor struct {
 	mu         sync.RWMutex
-	dht        *dht.DHT
+	dht        dht.DHT
 	roleCrypto *crypto.RoleCrypto
 	election   election.Election
 	config     *config.Config
@@ -37,7 +37,7 @@ type DHTContextDistributor struct {

 // NewDHTContextDistributor creates a new DHT-based context distributor
 func NewDHTContextDistributor(
-	dht *dht.DHT,
+	dht dht.DHT,
 	roleCrypto *crypto.RoleCrypto,
 	election election.Election,
 	config *config.Config,
@@ -147,13 +147,13 @@ func (d *DHTContextDistributor) DistributeContext(ctx context.Context, node *slu
 		return d.recordError(fmt.Sprintf("failed to get vector clock: %v", err))
 	}

-	// Encrypt context for roles
-	encryptedData, err := d.roleCrypto.EncryptContextForRoles(node, roles, []string{})
+	// Prepare context payload for role encryption
+	rawContext, err := json.Marshal(node)
 	if err != nil {
-		return d.recordError(fmt.Sprintf("failed to encrypt context: %v", err))
+		return d.recordError(fmt.Sprintf("failed to marshal context: %v", err))
 	}

-	// Create distribution metadata
+	// Create distribution metadata (checksum calculated per-role below)
 	metadata := &DistributionMetadata{
 		Address: node.UCXLAddress,
 		Roles: roles,
@@ -162,21 +162,28 @@ func (d *DHTContextDistributor) DistributeContext(ctx context.Context, node *slu
 		DistributedBy: d.config.Agent.ID,
 		DistributedAt: time.Now(),
 		ReplicationFactor: d.getReplicationFactor(),
-		Checksum: d.calculateChecksum(encryptedData),
 	}

 	// Store encrypted data in DHT for each role
 	for _, role := range roles {
 		key := d.keyGenerator.GenerateContextKey(node.UCXLAddress.String(), role)

+		cipher, fingerprint, err := d.roleCrypto.EncryptForRole(rawContext, role)
+		if err != nil {
+			return d.recordError(fmt.Sprintf("failed to encrypt context for role %s: %v", role, err))
+		}
+
 		// Create role-specific storage package
 		storagePackage := &ContextStoragePackage{
-			EncryptedData: encryptedData,
+			EncryptedData: cipher,
+			KeyFingerprint: fingerprint,
 			Metadata: metadata,
 			Role: role,
 			StoredAt: time.Now(),
 		}

+		metadata.Checksum = d.calculateChecksum(cipher)
+
 		// Serialize for storage
 		storageBytes, err := json.Marshal(storagePackage)
 		if err != nil {
@@ -252,11 +259,16 @@ func (d *DHTContextDistributor) RetrieveContext(ctx context.Context, address ucx
 	}

 	// Decrypt context for role
-	contextNode, err := d.roleCrypto.DecryptContextForRole(storagePackage.EncryptedData, role)
+	plain, err := d.roleCrypto.DecryptForRole(storagePackage.EncryptedData, role, storagePackage.KeyFingerprint)
 	if err != nil {
 		return nil, d.recordRetrievalError(fmt.Sprintf("failed to decrypt context: %v", err))
 	}

+	var contextNode slurpContext.ContextNode
+	if err := json.Unmarshal(plain, &contextNode); err != nil {
+		return nil, d.recordRetrievalError(fmt.Sprintf("failed to decode context: %v", err))
+	}
+
 	// Convert to resolved context
 	resolvedContext := &slurpContext.ResolvedContext{
 		UCXLAddress: contextNode.UCXLAddress,
@@ -453,28 +465,13 @@ func (d *DHTContextDistributor) calculateChecksum(data interface{}) string {
 	return hex.EncodeToString(hash[:])
 }

-// Ensure DHT is bootstrapped before operations
-func (d *DHTContextDistributor) ensureDHTReady() error {
-	if !d.dht.IsBootstrapped() {
-		return fmt.Errorf("DHT not bootstrapped")
-	}
-	return nil
-}
-
 // Start starts the distribution service
 func (d *DHTContextDistributor) Start(ctx context.Context) error {
-	// Bootstrap DHT if not already done
-	if !d.dht.IsBootstrapped() {
-		if err := d.dht.Bootstrap(); err != nil {
-			return fmt.Errorf("failed to bootstrap DHT: %w", err)
-		}
-	}
-
-	// Start gossip protocol
-	if err := d.gossipProtocol.StartGossip(ctx); err != nil {
-		return fmt.Errorf("failed to start gossip protocol: %w", err)
-	}
+	if d.gossipProtocol != nil {
+		if err := d.gossipProtocol.StartGossip(ctx); err != nil {
+			return fmt.Errorf("failed to start gossip protocol: %w", err)
+		}
+	}
 	return nil
 }
@@ -488,7 +485,8 @@ func (d *DHTContextDistributor) Stop(ctx context.Context) error {

 // ContextStoragePackage represents a complete package for DHT storage
 type ContextStoragePackage struct {
-	EncryptedData *crypto.EncryptedContextData `json:"encrypted_data"`
+	EncryptedData []byte `json:"encrypted_data"`
+	KeyFingerprint string `json:"key_fingerprint,omitempty"`
 	Metadata *DistributionMetadata `json:"metadata"`
 	Role string `json:"role"`
 	StoredAt time.Time `json:"stored_at"`
@@ -532,45 +530,48 @@ func (kg *DHTKeyGenerator) GenerateReplicationKey(address string) string {
 // Component constructors - these would be implemented in separate files

 // NewReplicationManager creates a new replication manager
-func NewReplicationManager(dht *dht.DHT, config *config.Config) (ReplicationManager, error) {
-	// Placeholder implementation
-	return &ReplicationManagerImpl{}, nil
+func NewReplicationManager(dht dht.DHT, config *config.Config) (ReplicationManager, error) {
+	impl, err := NewReplicationManagerImpl(dht, config)
+	if err != nil {
+		return nil, err
+	}
+	return impl, nil
 }

 // NewConflictResolver creates a new conflict resolver
-func NewConflictResolver(dht *dht.DHT, config *config.Config) (ConflictResolver, error) {
-	// Placeholder implementation
+func NewConflictResolver(dht dht.DHT, config *config.Config) (ConflictResolver, error) {
+	// Placeholder implementation until full resolver is wired
 	return &ConflictResolverImpl{}, nil
 }

 // NewGossipProtocol creates a new gossip protocol
-func NewGossipProtocol(dht *dht.DHT, config *config.Config) (GossipProtocol, error) {
-	// Placeholder implementation
-	return &GossipProtocolImpl{}, nil
+func NewGossipProtocol(dht dht.DHT, config *config.Config) (GossipProtocol, error) {
+	impl, err := NewGossipProtocolImpl(dht, config)
+	if err != nil {
+		return nil, err
+	}
+	return impl, nil
 }

 // NewNetworkManager creates a new network manager
-func NewNetworkManager(dht *dht.DHT, config *config.Config) (NetworkManager, error) {
-	// Placeholder implementation
-	return &NetworkManagerImpl{}, nil
+func NewNetworkManager(dht dht.DHT, config *config.Config) (NetworkManager, error) {
+	impl, err := NewNetworkManagerImpl(dht, config)
+	if err != nil {
+		return nil, err
+	}
+	return impl, nil
 }

 // NewVectorClockManager creates a new vector clock manager
-func NewVectorClockManager(dht *dht.DHT, nodeID string) (VectorClockManager, error) {
-	// Placeholder implementation
-	return &VectorClockManagerImpl{}, nil
+func NewVectorClockManager(dht dht.DHT, nodeID string) (VectorClockManager, error) {
+	return &defaultVectorClockManager{
+		clocks: make(map[string]*VectorClock),
+	}, nil
 }

-// Placeholder structs for components - these would be properly implemented
-
-type ReplicationManagerImpl struct{}
-func (rm *ReplicationManagerImpl) EnsureReplication(ctx context.Context, address ucxl.Address, factor int) error { return nil }
-func (rm *ReplicationManagerImpl) GetReplicationStatus(ctx context.Context, address ucxl.Address) (*ReplicaHealth, error) {
-	return &ReplicaHealth{}, nil
-}
-func (rm *ReplicationManagerImpl) SetReplicationFactor(factor int) error { return nil }
-
+// ConflictResolverImpl is a temporary stub until the full resolver is implemented
 type ConflictResolverImpl struct{}

 func (cr *ConflictResolverImpl) ResolveConflict(ctx context.Context, local, remote *slurpContext.ContextNode) (*ConflictResolution, error) {
 	return &ConflictResolution{
 		Address: local.UCXLAddress,
@@ -582,15 +583,71 @@ func (cr *ConflictResolverImpl) ResolveConflict(ctx context.Context, local, remo
 	}, nil
 }
 
-type GossipProtocolImpl struct{}
-
-func (gp *GossipProtocolImpl) StartGossip(ctx context.Context) error { return nil }
-
-type NetworkManagerImpl struct{}
-
-type VectorClockManagerImpl struct{}
-
-func (vcm *VectorClockManagerImpl) GetClock(nodeID string) (*VectorClock, error) {
-	return &VectorClock{
+// defaultVectorClockManager provides a minimal vector clock store for SEC-SLURP scaffolding.
+type defaultVectorClockManager struct {
+	mu     sync.Mutex
+	clocks map[string]*VectorClock
+}
+
+func (vcm *defaultVectorClockManager) GetClock(nodeID string) (*VectorClock, error) {
+	vcm.mu.Lock()
+	defer vcm.mu.Unlock()
+
+	if clock, ok := vcm.clocks[nodeID]; ok {
+		return clock, nil
+	}
+	clock := &VectorClock{
 		Clock:     map[string]int64{nodeID: time.Now().Unix()},
 		UpdatedAt: time.Now(),
-	}, nil
+	}
+	vcm.clocks[nodeID] = clock
+	return clock, nil
+}
+
+func (vcm *defaultVectorClockManager) UpdateClock(nodeID string, clock *VectorClock) error {
+	vcm.mu.Lock()
+	defer vcm.mu.Unlock()
+
+	vcm.clocks[nodeID] = clock
+	return nil
+}
+
+func (vcm *defaultVectorClockManager) CompareClock(clock1, clock2 *VectorClock) ClockRelation {
+	if clock1 == nil || clock2 == nil {
+		return ClockConcurrent
+	}
+	if clock1.UpdatedAt.Before(clock2.UpdatedAt) {
+		return ClockBefore
+	}
+	if clock1.UpdatedAt.After(clock2.UpdatedAt) {
+		return ClockAfter
+	}
+	return ClockEqual
+}
+
+func (vcm *defaultVectorClockManager) MergeClock(clocks []*VectorClock) *VectorClock {
+	if len(clocks) == 0 {
+		return &VectorClock{
+			Clock:     map[string]int64{},
+			UpdatedAt: time.Now(),
+		}
+	}
+	merged := &VectorClock{
+		Clock:     make(map[string]int64),
+		UpdatedAt: clocks[0].UpdatedAt,
+	}
+	for _, clock := range clocks {
+		if clock == nil {
+			continue
+		}
+		if clock.UpdatedAt.After(merged.UpdatedAt) {
+			merged.UpdatedAt = clock.UpdatedAt
+		}
+		for node, value := range clock.Clock {
+			if existing, ok := merged.Clock[node]; !ok || value > existing {
+				merged.Clock[node] = value
+			}
+		}
+	}
+	return merged
 }
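Worth noting: `CompareClock` above orders clocks by the wall-clock `UpdatedAt` field alone, so two genuinely concurrent updates will be reported as ordered. If causal ordering is ever needed, an element-wise comparison over the same `VectorClock` and `ClockRelation` types would look roughly like this sketch (missing map entries count as zero):

```go
// Sketch only: element-wise causal comparison over the scaffolding's types.
func compareCausal(a, b *VectorClock) ClockRelation {
	if a == nil || b == nil {
		return ClockConcurrent
	}
	aLeq, bLeq := true, true // a <= b and b <= a, component-wise
	for node, av := range a.Clock {
		if av > b.Clock[node] {
			aLeq = false
		}
	}
	for node, bv := range b.Clock {
		if bv > a.Clock[node] {
			bLeq = false
		}
	}
	switch {
	case aLeq && bLeq:
		return ClockEqual
	case aLeq:
		return ClockBefore
	case bLeq:
		return ClockAfter
	default:
		return ClockConcurrent
	}
}
```

`MergeClock` is already element-wise, so only the comparison side would need this upgrade.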
@@ -332,10 +332,10 @@ type Alert struct
 type AlertSeverity string
 
 const (
-	SeverityInfo     AlertSeverity = "info"
-	SeverityWarning  AlertSeverity = "warning"
-	SeverityError    AlertSeverity = "error"
-	SeverityCritical AlertSeverity = "critical"
+	AlertSeverityInfo     AlertSeverity = "info"
+	AlertSeverityWarning  AlertSeverity = "warning"
+	AlertSeverityError    AlertSeverity = "error"
+	AlertSeverityCritical AlertSeverity = "critical"
 )
 
 // AlertStatus represents the current status of an alert
@@ -1134,13 +1134,13 @@ func (ms *MonitoringSystem) createDefaultDashboards() {
 
 func (ms *MonitoringSystem) severityWeight(severity AlertSeverity) int {
 	switch severity {
-	case SeverityCritical:
+	case AlertSeverityCritical:
 		return 4
-	case SeverityError:
+	case AlertSeverityError:
 		return 3
-	case SeverityWarning:
+	case AlertSeverityWarning:
 		return 2
-	case SeverityInfo:
+	case AlertSeverityInfo:
 		return 1
 	default:
 		return 0
@@ -9,8 +9,8 @@ import (
 	"sync"
 	"time"
 
-	"chorus/pkg/dht"
 	"chorus/pkg/config"
+	"chorus/pkg/dht"
 
 	"github.com/libp2p/go-libp2p/core/peer"
 )
 
@@ -62,7 +62,7 @@ type ConnectionInfo struct
 type NetworkHealthChecker struct {
 	mu              sync.RWMutex
 	nodeHealth      map[string]*NodeHealth
-	healthHistory   map[string][]*HealthCheckResult
+	healthHistory   map[string][]*NetworkHealthCheckResult
 	alertThresholds *NetworkAlertThresholds
 }
 
@@ -91,7 +91,7 @@ const (
 )
 
 // HealthCheckResult represents the result of a health check
-type HealthCheckResult struct {
+type NetworkHealthCheckResult struct {
 	NodeID    string    `json:"node_id"`
 	Timestamp time.Time `json:"timestamp"`
 	Success   bool      `json:"success"`
@@ -274,7 +274,7 @@ func (nm *NetworkManagerImpl) initializeComponents() error {
 	// Initialize health checker
 	nm.healthChecker = &NetworkHealthChecker{
 		nodeHealth:    make(map[string]*NodeHealth),
-		healthHistory: make(map[string][]*HealthCheckResult),
+		healthHistory: make(map[string][]*NetworkHealthCheckResult),
 		alertThresholds: &NetworkAlertThresholds{
 			LatencyWarning:  500 * time.Millisecond,
 			LatencyCritical: 2 * time.Second,
@@ -677,7 +677,7 @@ func (nm *NetworkManagerImpl) performHealthChecks(ctx context.Context) {
 
 	// Store health check history
 	if _, exists := nm.healthChecker.healthHistory[peer.String()]; !exists {
-		nm.healthChecker.healthHistory[peer.String()] = []*HealthCheckResult{}
+		nm.healthChecker.healthHistory[peer.String()] = []*NetworkHealthCheckResult{}
 	}
 	nm.healthChecker.healthHistory[peer.String()] = append(
 		nm.healthChecker.healthHistory[peer.String()],
@@ -907,7 +907,7 @@ func (nm *NetworkManagerImpl) testPeerConnectivity(ctx context.Context, peerID s
 	}
 }
 
-func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *HealthCheckResult {
+func (nm *NetworkManagerImpl) performHealthCheck(ctx context.Context, nodeID string) *NetworkHealthCheckResult {
 	start := time.Now()
 
 	// In a real implementation, this would perform actual health checks
@@ -1024,14 +1024,14 @@ func (nm *NetworkManagerImpl) calculateOverallNetworkHealth() float64 {
 	return float64(nm.stats.ConnectedNodes) / float64(nm.stats.TotalNodes)
 }
 
-func (nm *NetworkManagerImpl) determineNodeStatus(result *HealthCheckResult) NodeStatus {
+func (nm *NetworkManagerImpl) determineNodeStatus(result *NetworkHealthCheckResult) NodeStatus {
 	if result.Success {
 		return NodeStatusHealthy
 	}
 	return NodeStatusUnreachable
 }
 
-func (nm *NetworkManagerImpl) calculateHealthScore(result *HealthCheckResult) float64 {
+func (nm *NetworkManagerImpl) calculateHealthScore(result *NetworkHealthCheckResult) float64 {
 	if result.Success {
 		return 1.0
 	}
@@ -7,8 +7,8 @@ import (
 	"sync"
 	"time"
 
-	"chorus/pkg/dht"
 	"chorus/pkg/config"
+	"chorus/pkg/dht"
 	"chorus/pkg/ucxl"
 
 	"github.com/libp2p/go-libp2p/core/peer"
 )
@@ -462,7 +462,7 @@ func (rm *ReplicationManagerImpl) discoverReplicas(ctx context.Context, address
 	// For now, we'll simulate some replicas
 	peers := rm.dht.GetConnectedPeers()
 	if len(peers) > 0 {
-		status.CurrentReplicas = min(len(peers), rm.policy.DefaultFactor)
+		status.CurrentReplicas = minInt(len(peers), rm.policy.DefaultFactor)
 		status.HealthyReplicas = status.CurrentReplicas
 
 		for i, peer := range peers {
@@ -638,7 +638,7 @@ type RebalanceMove struct
 }
 
 // Utility functions
-func min(a, b int) int {
+func minInt(a, b int) int {
 	if a < b {
 		return a
 	}
@@ -242,12 +242,12 @@ const (
 type SecuritySeverity string
 
 const (
-	SeverityDebug    SecuritySeverity = "debug"
-	SeverityInfo     SecuritySeverity = "info"
-	SeverityWarning  SecuritySeverity = "warning"
-	SeverityError    SecuritySeverity = "error"
-	SeverityCritical SecuritySeverity = "critical"
-	SeverityAlert    SecuritySeverity = "alert"
+	SecuritySeverityDebug    SecuritySeverity = "debug"
+	SecuritySeverityInfo     SecuritySeverity = "info"
+	SecuritySeverityWarning  SecuritySeverity = "warning"
+	SecuritySeverityError    SecuritySeverity = "error"
+	SecuritySeverityCritical SecuritySeverity = "critical"
+	SecuritySeverityAlert    SecuritySeverity = "alert"
 )
 
 // NodeAuthentication handles node-to-node authentication
@@ -508,7 +508,7 @@ func (sm *SecurityManager) Authenticate(ctx context.Context, credentials *Creden
 	// Log authentication attempt
 	sm.logSecurityEvent(ctx, &SecurityEvent{
 		EventType: EventTypeAuthentication,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
 		Action:    "authenticate",
 		Message:   "Authentication attempt",
 		Details: map[string]interface{}{
@@ -525,7 +525,7 @@ func (sm *SecurityManager) Authorize(ctx context.Context, request *Authorization
 	// Log authorization attempt
 	sm.logSecurityEvent(ctx, &SecurityEvent{
 		EventType: EventTypeAuthorization,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
 		UserID:    request.UserID,
 		Resource:  request.Resource,
 		Action:    request.Action,
@@ -554,7 +554,7 @@ func (sm *SecurityManager) ValidateNodeIdentity(ctx context.Context, nodeID stri
 	// Log successful validation
 	sm.logSecurityEvent(ctx, &SecurityEvent{
 		EventType: EventTypeAuthentication,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
 		NodeID:    nodeID,
 		Action:    "validate_node_identity",
 		Result:    "success",
@@ -609,7 +609,7 @@ func (sm *SecurityManager) AddTrustedNode(ctx context.Context, node *TrustedNode
 	// Log node addition
 	sm.logSecurityEvent(ctx, &SecurityEvent{
 		EventType: EventTypeConfiguration,
-		Severity:  SeverityInfo,
+		Severity:  SecuritySeverityInfo,
 		NodeID:    node.NodeID,
 		Action:    "add_trusted_node",
 		Result:    "success",
@@ -11,8 +11,8 @@ import (
 	"strings"
 	"time"
 
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )
 
 // DefaultDirectoryAnalyzer provides comprehensive directory structure analysis
@@ -340,7 +340,7 @@ func (da *DefaultDirectoryAnalyzer) DetectConventions(ctx context.Context, dirPa
 		OrganizationalPatterns: []*OrganizationalPattern{},
 		Consistency:            0.0,
 		Violations:             []*Violation{},
-		Recommendations:        []*Recommendation{},
+		Recommendations:        []*BasicRecommendation{},
 		AppliedStandards:       []string{},
 		AnalyzedAt:             time.Now(),
 	}
@@ -996,7 +996,7 @@ func (da *DefaultDirectoryAnalyzer) analyzeNamingPattern(paths []string, scope s
 			Type:        "naming",
 			Description: fmt.Sprintf("Naming convention for %ss", scope),
 			Confidence:  da.calculateNamingConsistency(names, convention),
-			Examples:    names[:min(5, len(names))],
+			Examples:    names[:minInt(5, len(names))],
 		},
 		Convention: convention,
 		Scope:      scope,
@@ -1100,12 +1100,12 @@ func (da *DefaultDirectoryAnalyzer) detectNamingStyle(name string) string {
 	return "unknown"
 }
 
-func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*Recommendation {
-	recommendations := []*Recommendation{}
+func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *ConventionAnalysis) []*BasicRecommendation {
+	recommendations := []*BasicRecommendation{}
 
 	// Recommend consistency improvements
 	if analysis.Consistency < 0.8 {
-		recommendations = append(recommendations, &Recommendation{
+		recommendations = append(recommendations, &BasicRecommendation{
 			Type:        "consistency",
 			Title:       "Improve naming consistency",
 			Description: "Consider standardizing naming conventions across the project",
@@ -1118,7 +1118,7 @@ func (da *DefaultDirectoryAnalyzer) generateConventionRecommendations(analysis *
 
 	// Recommend architectural improvements
 	if len(analysis.OrganizationalPatterns) == 0 {
-		recommendations = append(recommendations, &Recommendation{
+		recommendations = append(recommendations, &BasicRecommendation{
 			Type:        "architecture",
 			Title:       "Consider architectural patterns",
 			Description: "Project structure could benefit from established architectural patterns",
@@ -1225,7 +1225,6 @@ func (da *DefaultDirectoryAnalyzer) extractImports(content string, patterns []*r
 
 func (da *DefaultDirectoryAnalyzer) isLocalDependency(importPath, fromDir, toDir string) bool {
 	// Simple heuristic: check if import path references the target directory
-	fromBase := filepath.Base(fromDir)
 	toBase := filepath.Base(toDir)
 
 	return strings.Contains(importPath, toBase) ||
@@ -1399,7 +1398,7 @@ func (da *DefaultDirectoryAnalyzer) walkDirectoryHierarchy(rootPath string, curr
 
 func (da *DefaultDirectoryAnalyzer) generateUCXLAddress(path string) (*ucxl.Address, error) {
 	cleanPath := filepath.Clean(path)
-	addr, err := ucxl.ParseAddress(fmt.Sprintf("dir://%s", cleanPath))
+	addr, err := ucxl.Parse(fmt.Sprintf("dir://%s", cleanPath))
 	if err != nil {
 		return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
 	}
@@ -1417,7 +1416,7 @@ func (da *DefaultDirectoryAnalyzer) generateDirectorySummary(structure *Director
 		langs = append(langs, fmt.Sprintf("%s (%d)", lang, count))
 	}
 	sort.Strings(langs)
-	summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:min(3, len(langs))], ", "))
+	summary += fmt.Sprintf(", containing: %s", strings.Join(langs[:minInt(3, len(langs))], ", "))
 	}
 
 	return summary
@@ -1497,7 +1496,7 @@ func (da *DefaultDirectoryAnalyzer) calculateDirectorySpecificity(structure *Dir
 	return specificity
 }
 
-func min(a, b int) int {
+func minInt(a, b int) int {
 	if a < b {
 		return a
 	}
@@ -2,9 +2,9 @@ package intelligence
 
 import (
 	"context"
+	"sync"
 	"time"
 
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
 )
 
@@ -171,6 +171,11 @@ type EngineConfig struct
 	RAGEndpoint string        `json:"rag_endpoint"` // RAG system endpoint
 	RAGTimeout  time.Duration `json:"rag_timeout"`  // RAG query timeout
 	RAGEnabled  bool          `json:"rag_enabled"`  // Whether RAG is enabled
+	EnableRAG   bool          `json:"enable_rag"`   // Legacy toggle for RAG enablement
+
+	// Feature toggles
+	EnableGoalAlignment    bool `json:"enable_goal_alignment"`
+	EnablePatternDetection bool `json:"enable_pattern_detection"`
+	EnableRoleAware        bool `json:"enable_role_aware"`
 
 	// Quality settings
 	MinConfidenceThreshold float64 `json:"min_confidence_threshold"` // Minimum confidence for results
@@ -250,6 +255,10 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
 		config = DefaultEngineConfig()
 	}
 
+	if config.EnableRAG {
+		config.RAGEnabled = true
+	}
+
 	// Initialize file analyzer
 	fileAnalyzer := NewDefaultFileAnalyzer(config)
 
@@ -283,3 +292,12 @@ func NewDefaultIntelligenceEngine(config *EngineConfig) (*DefaultIntelligenceEng
 
 	return engine, nil
 }
+
+// NewIntelligenceEngine is a convenience wrapper expected by legacy callers.
+func NewIntelligenceEngine(config *EngineConfig) *DefaultIntelligenceEngine {
+	engine, err := NewDefaultIntelligenceEngine(config)
+	if err != nil {
+		panic(err)
+	}
+	return engine
+}
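The constructor now mirrors the legacy `EnableRAG` toggle into `RAGEnabled`, and the `NewIntelligenceEngine` wrapper trades the error return for a panic so older call sites keep their one-value signature. A hypothetical caller — the import path here is an assumption, not taken from the commit:

```go
package main

import (
	"log"

	"chorus/pkg/slurp/intelligence" // assumed import path for the engine package
)

func main() {
	cfg := intelligence.DefaultEngineConfig()
	cfg.EnableRAG = true // legacy flag set by an older config loader

	engine, err := intelligence.NewDefaultIntelligenceEngine(cfg)
	if err != nil {
		log.Fatalf("engine init: %v", err)
	}
	// The constructor mirrors EnableRAG into RAGEnabled, so the legacy
	// and current toggles agree from here on.
	_ = engine
}
```

Panicking in the wrapper is acceptable for scaffolding, but long-term the legacy callers should migrate to the error-returning constructor.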
@@ -4,14 +4,13 @@ import (
 	"context"
 	"fmt"
 	"io/ioutil"
-	"os"
 	"path/filepath"
 	"strings"
 	"sync"
 	"time"
 
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )
 
 // AnalyzeFile analyzes a single file and generates contextual understanding
@@ -136,8 +135,7 @@ func (e *DefaultIntelligenceEngine) AnalyzeDirectory(ctx context.Context, dirPat
 	}()
 
 	// Analyze directory structure
-	structure, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath)
-	if err != nil {
+	if _, err := e.directoryAnalyzer.AnalyzeStructure(ctx, dirPath); err != nil {
 		e.updateStats("directory_analysis", time.Since(start), false)
 		return nil, fmt.Errorf("failed to analyze directory structure: %w", err)
 	}
@@ -430,7 +428,7 @@ func (e *DefaultIntelligenceEngine) readFileContent(filePath string) ([]byte, er
 func (e *DefaultIntelligenceEngine) generateUCXLAddress(filePath string) (*ucxl.Address, error) {
 	// Simple implementation - in reality this would be more sophisticated
 	cleanPath := filepath.Clean(filePath)
-	addr, err := ucxl.ParseAddress(fmt.Sprintf("file://%s", cleanPath))
+	addr, err := ucxl.Parse(fmt.Sprintf("file://%s", cleanPath))
 	if err != nil {
 		return nil, fmt.Errorf("failed to generate UCXL address: %w", err)
 	}
@@ -640,6 +638,10 @@ func DefaultEngineConfig() *EngineConfig {
 		RAGEndpoint:            "",
 		RAGTimeout:             10 * time.Second,
 		RAGEnabled:             false,
+		EnableRAG:              false,
+		EnableGoalAlignment:    false,
+		EnablePatternDetection: false,
+		EnableRoleAware:        false,
 		MinConfidenceThreshold: 0.6,
 		RequireValidation:      true,
 		CacheEnabled:           true,
@@ -1,3 +1,6 @@
+//go:build integration
+// +build integration
+
 package intelligence
 
 import (
@@ -34,7 +37,7 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
 		Purpose:      "Handles user login and authentication for the web application",
 		Technologies: []string{"go", "jwt", "bcrypt"},
 		Tags:         []string{"authentication", "security", "web"},
-		CreatedAt:    time.Now(),
+		GeneratedAt:  time.Now(),
 		UpdatedAt:    time.Now(),
 	}
 
@@ -47,7 +50,7 @@ func TestIntelligenceEngine_Integration(t *testing.T) {
 		Priority:  1,
 		Phase:     "development",
 		Deadline:  nil,
-		CreatedAt: time.Now(),
+		GeneratedAt: time.Now(),
 	}
 
 	t.Run("AnalyzeFile", func(t *testing.T) {
@@ -652,7 +655,7 @@ func createTestContextNode(path, summary, purpose string, technologies, tags []s
 		Purpose:      purpose,
 		Technologies: technologies,
 		Tags:         tags,
-		CreatedAt:    time.Now(),
+		GeneratedAt:  time.Now(),
 		UpdatedAt:    time.Now(),
 	}
 }
@@ -665,7 +668,7 @@ func createTestProjectGoal(id, name, description string, keywords []string, prio
 		Keywords:  keywords,
 		Priority:  priority,
 		Phase:     phase,
-		CreatedAt: time.Now(),
+		GeneratedAt: time.Now(),
 	}
 }
 
@@ -1,7 +1,6 @@
 package intelligence
 
 import (
-	"bufio"
 	"bytes"
 	"context"
 	"fmt"
@@ -8,7 +8,6 @@ import (
 	"sync"
 	"time"
 
-	"chorus/pkg/crypto"
 	slurpContext "chorus/pkg/slurp/context"
 )
 
@@ -22,7 +21,7 @@ type RoleAwareProcessor struct {
 	accessController *AccessController
 	auditLogger      *AuditLogger
 	permissions      *PermissionMatrix
-	roleProfiles     map[string]*RoleProfile
+	roleProfiles     map[string]*RoleBlueprint
 }
 
 // RoleManager manages role definitions and hierarchies
@@ -276,7 +275,7 @@ type AuditConfig struct {
 }
 
 // RoleProfile contains comprehensive role configuration
-type RoleProfile struct {
+type RoleBlueprint struct {
 	Role         *Role             `json:"role"`
 	Capabilities *RoleCapabilities `json:"capabilities"`
 	Restrictions *RoleRestrictions `json:"restrictions"`
@@ -331,7 +330,7 @@ func NewRoleAwareProcessor(config *EngineConfig) *RoleAwareProcessor {
 		accessController: NewAccessController(),
 		auditLogger:      NewAuditLogger(),
 		permissions:      NewPermissionMatrix(),
-		roleProfiles:     make(map[string]*RoleProfile),
+		roleProfiles:     make(map[string]*RoleBlueprint),
 	}
 
 	// Initialize default roles
@@ -383,8 +382,11 @@ func (rap *RoleAwareProcessor) ProcessContextForRole(ctx context.Context, node *
 
 	// Apply insights to node
 	if len(insights) > 0 {
-		filteredNode.RoleSpecificInsights = insights
-		filteredNode.ProcessedForRole = roleID
+		if filteredNode.Metadata == nil {
+			filteredNode.Metadata = make(map[string]interface{})
+		}
+		filteredNode.Metadata["role_specific_insights"] = insights
+		filteredNode.Metadata["processed_for_role"] = roleID
 	}
 
 	// Log successful processing
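Storing the insights in the generic `Metadata` bag avoids widening `ContextNode` with role-specific fields, but consumers lose compile-time typing and must assert on the way back out. A minimal sketch of the read side, reusing the key written above:

```go
// Sketch: recovering typed insights from the generic metadata bag.
// The key names match the hunk above; returns nil when absent or mistyped.
func insightsFor(node *slurpContext.ContextNode) []*RoleSpecificInsight {
	if node.Metadata == nil {
		return nil
	}
	raw, ok := node.Metadata["role_specific_insights"]
	if !ok {
		return nil
	}
	insights, ok := raw.([]*RoleSpecificInsight) // type assertion required
	if !ok {
		return nil
	}
	return insights
}
```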
@@ -510,7 +512,7 @@ func (rap *RoleAwareProcessor) initializeDefaultRoles() {
 	}
 
 	for _, role := range defaultRoles {
-		rap.roleProfiles[role.ID] = &RoleProfile{
+		rap.roleProfiles[role.ID] = &RoleBlueprint{
 			Role:         role,
 			Capabilities: rap.createDefaultCapabilities(role),
 			Restrictions: rap.createDefaultRestrictions(role),
@@ -1174,6 +1176,7 @@ func (al *AuditLogger) GetAuditLog(limit int) []*AuditEntry {
 // These would be fully implemented with sophisticated logic in production
 
 type ArchitectInsightGenerator struct{}
 
 func NewArchitectInsightGenerator() *ArchitectInsightGenerator { return &ArchitectInsightGenerator{} }
 func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
 	return []*RoleSpecificInsight{
@@ -1191,10 +1194,15 @@ func (aig *ArchitectInsightGenerator) GenerateInsights(ctx context.Context, node
 	}, nil
 }
 func (aig *ArchitectInsightGenerator) GetSupportedRoles() []string { return []string{"architect"} }
-func (aig *ArchitectInsightGenerator) GetInsightTypes() []string { return []string{"architecture", "design", "patterns"} }
-func (aig *ArchitectInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
+func (aig *ArchitectInsightGenerator) GetInsightTypes() []string {
+	return []string{"architecture", "design", "patterns"}
+}
+func (aig *ArchitectInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
+	return nil
+}
 
 type DeveloperInsightGenerator struct{}
 
 func NewDeveloperInsightGenerator() *DeveloperInsightGenerator { return &DeveloperInsightGenerator{} }
 func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
 	return []*RoleSpecificInsight{
@@ -1212,10 +1220,15 @@ func (dig *DeveloperInsightGenerator) GenerateInsights(ctx context.Context, node
 	}, nil
 }
 func (dig *DeveloperInsightGenerator) GetSupportedRoles() []string { return []string{"developer"} }
-func (dig *DeveloperInsightGenerator) GetInsightTypes() []string { return []string{"code_quality", "implementation", "bugs"} }
-func (dig *DeveloperInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
+func (dig *DeveloperInsightGenerator) GetInsightTypes() []string {
+	return []string{"code_quality", "implementation", "bugs"}
+}
+func (dig *DeveloperInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
+	return nil
+}
 
 type SecurityInsightGenerator struct{}
 
 func NewSecurityInsightGenerator() *SecurityInsightGenerator { return &SecurityInsightGenerator{} }
 func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
 	return []*RoleSpecificInsight{
@@ -1232,11 +1245,18 @@ func (sig *SecurityInsightGenerator) GenerateInsights(ctx context.Context, node
 		},
 	}, nil
 }
-func (sig *SecurityInsightGenerator) GetSupportedRoles() []string { return []string{"security_analyst"} }
-func (sig *SecurityInsightGenerator) GetInsightTypes() []string { return []string{"security", "vulnerability", "compliance"} }
-func (sig *SecurityInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
+func (sig *SecurityInsightGenerator) GetSupportedRoles() []string {
+	return []string{"security_analyst"}
+}
+func (sig *SecurityInsightGenerator) GetInsightTypes() []string {
+	return []string{"security", "vulnerability", "compliance"}
+}
+func (sig *SecurityInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
+	return nil
+}
 
 type DevOpsInsightGenerator struct{}
 
 func NewDevOpsInsightGenerator() *DevOpsInsightGenerator { return &DevOpsInsightGenerator{} }
 func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
 	return []*RoleSpecificInsight{
@@ -1254,10 +1274,15 @@ func (doig *DevOpsInsightGenerator) GenerateInsights(ctx context.Context, node *
 	}, nil
 }
 func (doig *DevOpsInsightGenerator) GetSupportedRoles() []string { return []string{"devops_engineer"} }
-func (doig *DevOpsInsightGenerator) GetInsightTypes() []string { return []string{"infrastructure", "deployment", "monitoring"} }
-func (doig *DevOpsInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
+func (doig *DevOpsInsightGenerator) GetInsightTypes() []string {
+	return []string{"infrastructure", "deployment", "monitoring"}
+}
+func (doig *DevOpsInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
+	return nil
+}
 
 type QAInsightGenerator struct{}
 
 func NewQAInsightGenerator() *QAInsightGenerator { return &QAInsightGenerator{} }
 func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slurpContext.ContextNode, role *Role) ([]*RoleSpecificInsight, error) {
 	return []*RoleSpecificInsight{
@@ -1275,5 +1300,9 @@ func (qaig *QAInsightGenerator) GenerateInsights(ctx context.Context, node *slur
 	}, nil
 }
 func (qaig *QAInsightGenerator) GetSupportedRoles() []string { return []string{"qa_engineer"} }
-func (qaig *QAInsightGenerator) GetInsightTypes() []string { return []string{"quality", "testing", "validation"} }
-func (qaig *QAInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error { return nil }
+func (qaig *QAInsightGenerator) GetInsightTypes() []string {
+	return []string{"quality", "testing", "validation"}
+}
+func (qaig *QAInsightGenerator) ValidateContext(node *slurpContext.ContextNode, role *Role) error {
+	return nil
+}
@@ -138,7 +138,7 @@ type ConventionAnalysis struct
 	OrganizationalPatterns []*OrganizationalPattern `json:"organizational_patterns"` // Organizational patterns
 	Consistency            float64                  `json:"consistency"`             // Overall consistency score
 	Violations             []*Violation             `json:"violations"`              // Convention violations
-	Recommendations        []*Recommendation        `json:"recommendations"`         // Improvement recommendations
+	Recommendations        []*BasicRecommendation   `json:"recommendations"`         // Improvement recommendations
 	AppliedStandards       []string                 `json:"applied_standards"`       // Applied coding standards
 	AnalyzedAt             time.Time                `json:"analyzed_at"`             // When analysis was performed
 }
@@ -289,7 +289,7 @@ type Suggestion struct
 }
 
 // Recommendation represents an improvement recommendation
-type Recommendation struct {
+type BasicRecommendation struct {
 	Type        string `json:"type"`        // Recommendation type
 	Title       string `json:"title"`       // Recommendation title
 	Description string `json:"description"` // Detailed description
@@ -742,29 +742,57 @@ func CloneContextNode(node *slurpContext.ContextNode) *slurpContext.ContextNode
 
 	clone := &slurpContext.ContextNode{
 		Path:         node.Path,
+		UCXLAddress:  node.UCXLAddress,
 		Summary:      node.Summary,
 		Purpose:      node.Purpose,
 		Technologies: make([]string, len(node.Technologies)),
 		Tags:         make([]string, len(node.Tags)),
 		Insights:     make([]string, len(node.Insights)),
-		CreatedAt:          node.CreatedAt,
-		UpdatedAt:          node.UpdatedAt,
+		OverridesParent:    node.OverridesParent,
 		ContextSpecificity: node.ContextSpecificity,
+		AppliesToChildren:  node.AppliesToChildren,
+		AppliesTo:          node.AppliesTo,
+		GeneratedAt:        node.GeneratedAt,
+		UpdatedAt:          node.UpdatedAt,
+		CreatedBy:          node.CreatedBy,
+		WhoUpdated:         node.WhoUpdated,
 		RAGConfidence:      node.RAGConfidence,
-		ProcessedForRole:   node.ProcessedForRole,
+		EncryptedFor:       make([]string, len(node.EncryptedFor)),
+		AccessLevel:        node.AccessLevel,
 	}
 
 	copy(clone.Technologies, node.Technologies)
 	copy(clone.Tags, node.Tags)
 	copy(clone.Insights, node.Insights)
+	copy(clone.EncryptedFor, node.EncryptedFor)
 
-	if node.RoleSpecificInsights != nil {
-		clone.RoleSpecificInsights = make([]*RoleSpecificInsight, len(node.RoleSpecificInsights))
-		copy(clone.RoleSpecificInsights, node.RoleSpecificInsights)
+	if node.Parent != nil {
+		parent := *node.Parent
+		clone.Parent = &parent
+	}
+	if len(node.Children) > 0 {
+		clone.Children = make([]string, len(node.Children))
+		copy(clone.Children, node.Children)
+	}
+	if node.Language != nil {
+		language := *node.Language
+		clone.Language = &language
+	}
+	if node.Size != nil {
+		sz := *node.Size
+		clone.Size = &sz
+	}
+	if node.LastModified != nil {
+		lm := *node.LastModified
+		clone.LastModified = &lm
+	}
+	if node.ContentHash != nil {
+		hash := *node.ContentHash
+		clone.ContentHash = &hash
 	}
 
 	if node.Metadata != nil {
-		clone.Metadata = make(map[string]interface{})
+		clone.Metadata = make(map[string]interface{}, len(node.Metadata))
 		for k, v := range node.Metadata {
 			clone.Metadata[k] = v
 		}
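The rewritten clone duplicates every slice and pointer field, so mutations on the clone can no longer reach back into the original; only `Metadata` values are still shared (the map itself is copied, its values are not). A small test sketch using fields that appear in the hunk:

```go
// Sketch: the clone must not alias the original's slices.
func TestCloneContextNodeIsDeep(t *testing.T) {
	original := &slurpContext.ContextNode{
		Path:         "/svc/auth",
		Technologies: []string{"go"},
	}
	clone := CloneContextNode(original)
	clone.Technologies[0] = "rust"

	if original.Technologies[0] != "go" {
		t.Fatalf("clone aliased the original's Technologies slice")
	}
}
```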
@@ -799,9 +827,11 @@ func MergeContextNodes(nodes ...*slurpContext.ContextNode) *slurpContext.Context
 		// Merge insights
 		merged.Insights = mergeStringSlices(merged.Insights, node.Insights)
 
-		// Use most recent timestamps
-		if node.CreatedAt.Before(merged.CreatedAt) {
-			merged.CreatedAt = node.CreatedAt
+		// Use most relevant timestamps
+		if merged.GeneratedAt.IsZero() {
+			merged.GeneratedAt = node.GeneratedAt
+		} else if !node.GeneratedAt.IsZero() && node.GeneratedAt.Before(merged.GeneratedAt) {
+			merged.GeneratedAt = node.GeneratedAt
 		}
 		if node.UpdatedAt.After(merged.UpdatedAt) {
 			merged.UpdatedAt = node.UpdatedAt
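With these rules a merged node spans the full lifetime of its inputs: `GeneratedAt` keeps the earliest non-zero value, `UpdatedAt` the latest. A test sketch (with `t1` before `t2`):

```go
// Sketch: timestamp behaviour under the merge rules above.
func TestMergeContextNodesTimestamps(t *testing.T) {
	t1 := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
	t2 := t1.Add(48 * time.Hour)

	a := &slurpContext.ContextNode{GeneratedAt: t1, UpdatedAt: t1}
	b := &slurpContext.ContextNode{GeneratedAt: t2, UpdatedAt: t2}

	merged := MergeContextNodes(a, b)
	if !merged.GeneratedAt.Equal(t1) {
		t.Errorf("GeneratedAt = %v, want earliest %v", merged.GeneratedAt, t1)
	}
	if !merged.UpdatedAt.Equal(t2) {
		t.Errorf("UpdatedAt = %v, want latest %v", merged.UpdatedAt, t2)
	}
}
```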
@@ -2,6 +2,9 @@ package slurp
 
 import (
 	"context"
+	"time"
+
+	"chorus/pkg/crypto"
 )
 
 // Core interfaces for the SLURP contextual intelligence system.
@@ -497,8 +500,6 @@ type HealthChecker interface {
 
 // Additional types needed by interfaces
 
-import "time"
-
 type StorageStats struct {
 	TotalKeys int64 `json:"total_keys"`
 	TotalSize int64 `json:"total_size"`
@@ -631,7 +631,7 @@ func (s *SLURP) GetTemporalEvolution(ctx context.Context, ucxlAddress string) ([
 		return nil, fmt.Errorf("invalid UCXL address: %w", err)
 	}
 
-	return s.temporalGraph.GetEvolutionHistory(ctx, *parsed)
+	return s.temporalGraph.GetEvolutionHistory(ctx, parsed.String())
 }
 
 // NavigateDecisionHops navigates through the decision graph by hop distance.
@@ -654,7 +654,7 @@ func (s *SLURP) NavigateDecisionHops(ctx context.Context, ucxlAddress string, ho
 	}
 
 	if navigator, ok := s.temporalGraph.(DecisionNavigator); ok {
-		return navigator.NavigateDecisionHops(ctx, *parsed, hops, direction)
+		return navigator.NavigateDecisionHops(ctx, parsed.String(), hops, direction)
 	}
 
 	return nil, fmt.Errorf("decision navigation not supported by temporal graph")
@@ -1348,26 +1348,42 @@ func (s *SLURP) handleEvent(event *SLURPEvent) {
 	}
 }
 
-// validateSLURPConfig validates SLURP configuration for consistency and correctness
-func validateSLURPConfig(config *SLURPConfig) error {
-	if config.ContextResolution.MaxHierarchyDepth < 1 {
-		return fmt.Errorf("max_hierarchy_depth must be at least 1")
-	}
-
-	if config.ContextResolution.MinConfidenceThreshold < 0 || config.ContextResolution.MinConfidenceThreshold > 1 {
-		return fmt.Errorf("min_confidence_threshold must be between 0 and 1")
-	}
-
-	if config.TemporalAnalysis.MaxDecisionHops < 1 {
-		return fmt.Errorf("max_decision_hops must be at least 1")
-	}
-
-	if config.TemporalAnalysis.StalenessThreshold < 0 || config.TemporalAnalysis.StalenessThreshold > 1 {
-		return fmt.Errorf("staleness_threshold must be between 0 and 1")
-	}
-
-	if config.Performance.MaxConcurrentResolutions < 1 {
-		return fmt.Errorf("max_concurrent_resolutions must be at least 1")
-	}
+// validateSLURPConfig normalises runtime tunables sourced from configuration.
+func validateSLURPConfig(cfg *config.SlurpConfig) error {
+	if cfg == nil {
+		return fmt.Errorf("slurp config is nil")
+	}
+
+	if cfg.Timeout <= 0 {
+		cfg.Timeout = 15 * time.Second
+	}
+
+	if cfg.RetryCount < 0 {
+		cfg.RetryCount = 0
+	}
+
+	if cfg.RetryDelay <= 0 && cfg.RetryCount > 0 {
+		cfg.RetryDelay = 2 * time.Second
+	}
+
+	if cfg.Performance.MaxConcurrentResolutions <= 0 {
+		cfg.Performance.MaxConcurrentResolutions = 1
+	}
+
+	if cfg.Performance.MetricsCollectionInterval <= 0 {
+		cfg.Performance.MetricsCollectionInterval = time.Minute
+	}
+
+	if cfg.TemporalAnalysis.MaxDecisionHops <= 0 {
+		cfg.TemporalAnalysis.MaxDecisionHops = 1
+	}
+
+	if cfg.TemporalAnalysis.StalenessCheckInterval <= 0 {
+		cfg.TemporalAnalysis.StalenessCheckInterval = 5 * time.Minute
+	}
+
+	if cfg.TemporalAnalysis.StalenessThreshold < 0 || cfg.TemporalAnalysis.StalenessThreshold > 1 {
+		cfg.TemporalAnalysis.StalenessThreshold = 0.2
+	}
 
 	return nil
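The validator now normalises instead of rejecting: out-of-range tunables are clamped to conservative defaults rather than failing startup. A test sketch of that behaviour, placed in the same package since `validateSLURPConfig` is unexported (expected defaults taken from the branches above):

```go
// In package slurp; assumes the config.SlurpConfig fields used in the hunk.
func TestValidateSLURPConfigDefaults(t *testing.T) {
	cfg := &config.SlurpConfig{} // zero value: nothing tuned yet
	if err := validateSLURPConfig(cfg); err != nil {
		t.Fatal(err)
	}
	if cfg.Timeout != 15*time.Second {
		t.Errorf("Timeout = %v, want 15s", cfg.Timeout)
	}
	if cfg.Performance.MaxConcurrentResolutions != 1 {
		t.Errorf("MaxConcurrentResolutions = %d, want 1", cfg.Performance.MaxConcurrentResolutions)
	}
	if cfg.TemporalAnalysis.MaxDecisionHops != 1 {
		t.Errorf("MaxDecisionHops = %d, want 1", cfg.TemporalAnalysis.MaxDecisionHops)
	}
}
```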
@@ -164,6 +164,8 @@ func (bm *BackupManagerImpl) CreateBackup(
 		Incremental:    config.Incremental,
 		ParentBackupID: config.ParentBackupID,
 		Status:         BackupStatusInProgress,
+		Progress:       0,
+		ErrorMessage:   "",
 		CreatedAt:      time.Now(),
 		RetentionUntil: time.Now().Add(config.Retention),
 	}
@@ -707,6 +709,7 @@ func (bm *BackupManagerImpl) validateFile(filePath string) error {
 func (bm *BackupManagerImpl) failBackup(job *BackupJob, backupInfo *BackupInfo, err error) {
 	bm.mu.Lock()
 	backupInfo.Status = BackupStatusFailed
+	backupInfo.Progress = 0
 	backupInfo.ErrorMessage = err.Error()
 	job.Error = err
 	bm.mu.Unlock()
@@ -3,11 +3,12 @@ package storage
 import (
 	"context"
 	"fmt"
+	"strings"
 	"sync"
 	"time"
 
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )
 
 // BatchOperationsImpl provides efficient batch operations for context storage
@@ -4,7 +4,6 @@ import (
 	"context"
 	"encoding/json"
 	"fmt"
-	"regexp"
 	"sync"
 	"time"
 
@@ -3,10 +3,8 @@ package storage
 import (
 	"bytes"
 	"context"
-	"os"
 	"strings"
 	"testing"
-	"time"
 )
 
 func TestLocalStorageCompression(t *testing.T) {
@@ -2,15 +2,12 @@ package storage
 
 import (
 	"context"
-	"encoding/json"
 	"fmt"
 	"sync"
 	"time"
 
-	"chorus/pkg/crypto"
-	"chorus/pkg/dht"
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )
 
 // ContextStoreImpl is the main implementation of the ContextStore interface
@@ -8,7 +8,6 @@ import (
 	"time"
 
 	"chorus/pkg/dht"
-	"chorus/pkg/types"
 )
 
 // DistributedStorageImpl implements the DistributedStorage interface
@@ -125,8 +124,6 @@ func (ds *DistributedStorageImpl) Store(
 	data interface{},
 	options *DistributedStoreOptions,
 ) error {
-	start := time.Now()
-
 	if options == nil {
 		options = ds.options
 	}
@@ -179,7 +176,7 @@ func (ds *DistributedStorageImpl) Retrieve(
 
 	// Try local first if prefer local is enabled
 	if ds.options.PreferLocal {
-		if localData, err := ds.dht.Get(key); err == nil {
+		if localData, err := ds.dht.GetValue(ctx, key); err == nil {
 			return ds.deserializeEntry(localData)
 		}
 	}
@@ -226,25 +223,9 @@ func (ds *DistributedStorageImpl) Exists(
 	ctx context.Context,
 	key string,
 ) (bool, error) {
-	// Try local first
-	if ds.options.PreferLocal {
-		if exists, err := ds.dht.Exists(key); err == nil {
-			return exists, nil
-		}
-	}
-
-	// Check replicas
-	replicas, err := ds.getReplicationNodes(key)
-	if err != nil {
-		return false, fmt.Errorf("failed to get replication nodes: %w", err)
-	}
-
-	for _, nodeID := range replicas {
-		if exists, err := ds.checkExistsOnNode(ctx, nodeID, key); err == nil && exists {
-			return true, nil
-		}
-	}
+	if _, err := ds.dht.GetValue(ctx, key); err == nil {
+		return true, nil
+	}
 
 	return false, nil
 }
@@ -306,10 +287,7 @@ func (ds *DistributedStorageImpl) FindReplicas(
 
 // Sync synchronizes with other DHT nodes
 func (ds *DistributedStorageImpl) Sync(ctx context.Context) error {
-	start := time.Now()
-	defer func() {
 		ds.metrics.LastRebalance = time.Now()
-	}()
 
 	// Get list of active nodes
 	activeNodes := ds.heartbeat.getActiveNodes()
@@ -346,7 +324,7 @@ func (ds *DistributedStorageImpl) GetDistributedStats() (*DistributedStorageStat
 	healthyReplicas := int64(0)
 	underReplicated := int64(0)
 
-	for key, replicas := range ds.replicas {
+	for _, replicas := range ds.replicas {
 		totalReplicas += int64(len(replicas))
 		healthy := 0
 		for _, nodeID := range replicas {
@@ -405,13 +383,13 @@ func (ds *DistributedStorageImpl) selectReplicationNodes(key string, replication
 }
 
 func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-	// Store asynchronously on all nodes
+	// Store asynchronously on all nodes for SEC-SLURP-1.1a replication policy
 	errCh := make(chan error, len(nodes))
 
 	for _, nodeID := range nodes {
 		go func(node string) {
 			err := ds.storeOnNode(ctx, node, entry)
-			errorCh <- err
+			errCh <- err
 		}(nodeID)
 	}
 
@@ -445,13 +423,13 @@ func (ds *DistributedStorageImpl) storeEventual(ctx context.Context, entry *Dist
 }
 
 func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-	// Store synchronously on all nodes
+	// Store synchronously on all nodes per SEC-SLURP-1.1a durability target
	errCh := make(chan error, len(nodes))
 
 	for _, nodeID := range nodes {
 		go func(node string) {
 			err := ds.storeOnNode(ctx, node, entry)
-			errorCh <- err
+			errCh <- err
 		}(nodeID)
 	}
 
@@ -476,14 +454,14 @@ func (ds *DistributedStorageImpl) storeStrong(ctx context.Context, entry *Distri
 }
 
 func (ds *DistributedStorageImpl) storeQuorum(ctx context.Context, entry *DistributedEntry, nodes []string) error {
-	// Store on quorum of nodes
+	// Store on quorum of nodes per SEC-SLURP-1.1a availability guardrail
 	quorumSize := (len(nodes) / 2) + 1
 	errCh := make(chan error, len(nodes))
 
 	for _, nodeID := range nodes {
 		go func(node string) {
 			err := ds.storeOnNode(ctx, node, entry)
-			errorCh <- err
+			errCh <- err
 		}(nodeID)
 	}
 
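These hunks only show the fan-out side of the three consistency levels; every goroutine reports into the buffered `errCh`. The collection side for the quorum path has to stop as soon as success is guaranteed or impossible. A sketch of that loop — `awaitQuorum` is an illustrative helper, not in the commit; because `errCh` is buffered to `len(nodes)`, straggler goroutines never block:

```go
// Sketch: wait for the first quorumSize successes, or fail once
// success has become arithmetically impossible.
func awaitQuorum(errCh <-chan error, nodes, quorumSize int) error {
	successes, failures := 0, 0
	for successes < quorumSize && failures <= nodes-quorumSize {
		if err := <-errCh; err != nil {
			failures++
		} else {
			successes++
		}
	}
	if successes < quorumSize {
		return fmt.Errorf("quorum not reached: %d/%d stores succeeded", successes, quorumSize)
	}
	return nil
}
```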
@@ -9,7 +9,6 @@ import (
 	"time"
 
 	"chorus/pkg/crypto"
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
 )
 
@@ -19,8 +18,8 @@ type EncryptedStorageImpl struct {
 	crypto        crypto.RoleCrypto
 	localStorage  LocalStorage
 	keyManager    crypto.KeyManager
-	accessControl crypto.AccessController
-	auditLogger   crypto.AuditLogger
+	accessControl crypto.StorageAccessController
+	auditLogger   crypto.StorageAuditLogger
 	metrics       *EncryptionMetrics
 }
 
@@ -45,8 +44,8 @@ func NewEncryptedStorage(
 	crypto crypto.RoleCrypto,
 	localStorage LocalStorage,
 	keyManager crypto.KeyManager,
-	accessControl crypto.AccessController,
-	auditLogger crypto.AuditLogger,
+	accessControl crypto.StorageAccessController,
+	auditLogger crypto.StorageAuditLogger,
 ) *EncryptedStorageImpl {
 	return &EncryptedStorageImpl{
 		crypto: crypto,
@@ -286,12 +285,11 @@ func (es *EncryptedStorageImpl) GetAccessRoles(
 	return roles, nil
 }
 
-// RotateKeys rotates encryption keys
+// RotateKeys rotates encryption keys in line with SEC-SLURP-1.1 retention constraints
 func (es *EncryptedStorageImpl) RotateKeys(
 	ctx context.Context,
 	maxAge time.Duration,
 ) error {
-	start := time.Now()
 	defer func() {
 		es.metrics.mu.Lock()
 		es.metrics.KeyRotations++
@@ -9,12 +9,13 @@ import (
 	"sync"
 	"time"
 
+	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 	"github.com/blevesearch/bleve/v2"
 	"github.com/blevesearch/bleve/v2/analysis/analyzer/standard"
 	"github.com/blevesearch/bleve/v2/analysis/lang/en"
 	"github.com/blevesearch/bleve/v2/mapping"
-	"chorus/pkg/ucxl"
-	slurpContext "chorus/pkg/slurp/context"
+	"github.com/blevesearch/bleve/v2/search/query"
 )
 
 // IndexManagerImpl implements the IndexManager interface using Bleve
@@ -432,31 +433,31 @@ func (im *IndexManagerImpl) createIndexDocument(data interface{}) (map[string]in
 	return doc, nil
 }
 
-func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.SearchRequest, error) {
-	// Build Bleve search request from our search query
-	var bleveQuery bleve.Query
+func (im *IndexManagerImpl) buildSearchRequest(searchQuery *SearchQuery) (*bleve.SearchRequest, error) {
+	// Build Bleve search request from our search query (SEC-SLURP-1.1 search path)
+	var bleveQuery query.Query
 
-	if query.Query == "" {
+	if searchQuery.Query == "" {
 		// Match all query
 		bleveQuery = bleve.NewMatchAllQuery()
 	} else {
 		// Text search query
-		if query.FuzzyMatch {
+		if searchQuery.FuzzyMatch {
 			// Use fuzzy query
-			bleveQuery = bleve.NewFuzzyQuery(query.Query)
+			bleveQuery = bleve.NewFuzzyQuery(searchQuery.Query)
 		} else {
 			// Use match query for better scoring
-			bleveQuery = bleve.NewMatchQuery(query.Query)
+			bleveQuery = bleve.NewMatchQuery(searchQuery.Query)
 		}
 	}
 
 	// Add filters
-	var conjuncts []bleve.Query
+	var conjuncts []query.Query
 	conjuncts = append(conjuncts, bleveQuery)
 
 	// Technology filters
-	if len(query.Technologies) > 0 {
-		for _, tech := range query.Technologies {
+	if len(searchQuery.Technologies) > 0 {
+		for _, tech := range searchQuery.Technologies {
 			techQuery := bleve.NewTermQuery(tech)
 			techQuery.SetField("technologies_facet")
 			conjuncts = append(conjuncts, techQuery)
@@ -464,8 +465,8 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
 	}
 
 	// Tag filters
-	if len(query.Tags) > 0 {
-		for _, tag := range query.Tags {
+	if len(searchQuery.Tags) > 0 {
+		for _, tag := range searchQuery.Tags {
 			tagQuery := bleve.NewTermQuery(tag)
 			tagQuery.SetField("tags_facet")
 			conjuncts = append(conjuncts, tagQuery)
@@ -481,18 +482,18 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
 	searchRequest := bleve.NewSearchRequest(bleveQuery)
 
 	// Set result options
-	if query.Limit > 0 && query.Limit <= im.options.MaxResults {
-		searchRequest.Size = query.Limit
+	if searchQuery.Limit > 0 && searchQuery.Limit <= im.options.MaxResults {
+		searchRequest.Size = searchQuery.Limit
 	} else {
 		searchRequest.Size = im.options.MaxResults
 	}
 
-	if query.Offset > 0 {
-		searchRequest.From = query.Offset
+	if searchQuery.Offset > 0 {
+		searchRequest.From = searchQuery.Offset
 	}
 
 	// Enable highlighting if requested
-	if query.HighlightTerms && im.options.EnableHighlighting {
+	if searchQuery.HighlightTerms && im.options.EnableHighlighting {
 		searchRequest.Highlight = bleve.NewHighlight()
 		searchRequest.Highlight.AddField("content")
 		searchRequest.Highlight.AddField("summary")
@@ -500,9 +501,9 @@ func (im *IndexManagerImpl) buildSearchRequest(query *SearchQuery) (*bleve.Searc
 	}
 
 	// Add facets if requested
-	if len(query.Facets) > 0 && im.options.EnableFaceting {
+	if len(searchQuery.Facets) > 0 && im.options.EnableFaceting {
 		searchRequest.Facets = make(bleve.FacetsRequest)
-		for _, facet := range query.Facets {
+		for _, facet := range searchQuery.Facets {
 			switch facet {
 			case "technologies":
 				searchRequest.Facets["technologies"] = bleve.NewFacetRequest("technologies_facet", 10)
@@ -558,8 +559,8 @@ func (im *IndexManagerImpl) convertSearchResults(
 
 	// Parse UCXL address
 	if ucxlStr, ok := hit.Fields["ucxl_address"].(string); ok {
-		if addr, err := ucxl.ParseAddress(ucxlStr); err == nil {
-			contextNode.UCXLAddress = addr
+		if addr, err := ucxl.Parse(ucxlStr); err == nil {
+			contextNode.UCXLAddress = *addr
 		}
 	}
 
@@ -572,9 +573,11 @@ func (im *IndexManagerImpl) convertSearchResults(
 		results.Facets = make(map[string]map[string]int)
 		for facetName, facetResult := range searchResult.Facets {
 			facetCounts := make(map[string]int)
-			for _, term := range facetResult.Terms {
-				facetCounts[term.Term] = term.Count
-			}
+			if facetResult.Terms != nil {
+				for _, term := range facetResult.Terms.Terms() {
+					facetCounts[term.Term] = term.Count
+				}
+			}
 			results.Facets[facetName] = facetCounts
 		}
 	}
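The bleve.Query → query.Query swaps above track bleve v2's API, where the query constructors return implementations of the query.Query interface from search/query — hence the new import. A self-contained sketch of the same build-conjuncts pattern (the field name tags_facet comes from the hunks above; everything else is illustrative):

package main

import (
	"fmt"

	"github.com/blevesearch/bleve/v2"
	"github.com/blevesearch/bleve/v2/search/query"
)

// buildRequest mirrors buildSearchRequest above: gather query.Query values,
// then AND them together with a conjunction query.
func buildRequest(text string, tags []string) *bleve.SearchRequest {
	var conjuncts []query.Query
	if text == "" {
		conjuncts = append(conjuncts, bleve.NewMatchAllQuery())
	} else {
		conjuncts = append(conjuncts, bleve.NewMatchQuery(text))
	}
	for _, tag := range tags {
		tq := bleve.NewTermQuery(tag)
		tq.SetField("tags_facet")
		conjuncts = append(conjuncts, tq)
	}
	return bleve.NewSearchRequest(bleve.NewConjunctionQuery(conjuncts...))
}

func main() {
	req := buildRequest("beacon manifest", []string{"sec-high"})
	fmt.Printf("size=%d query=%T\n", req.Size, req.Query)
}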
@@ -4,9 +4,8 @@ import (
 	"context"
 	"time"
 
-	"chorus/pkg/ucxl"
-	"chorus/pkg/crypto"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )
 
 // ContextStore provides the main interface for context storage and retrieval
@@ -135,6 +135,7 @@ func (ls *LocalStorageImpl) Store(
 		UpdatedAt: time.Now(),
 		Metadata:  make(map[string]interface{}),
 	}
+	entry.Checksum = ls.computeChecksum(dataBytes)
 
 	// Apply options
 	if options != nil {
@@ -179,6 +180,7 @@ func (ls *LocalStorageImpl) Store(
 	if entry.Compressed {
 		ls.metrics.CompressedSize += entry.CompressedSize
 	}
+	ls.updateFileMetricsLocked()
 
 	return nil
 }
@@ -231,6 +233,14 @@ func (ls *LocalStorageImpl) Retrieve(ctx context.Context, key string) (interface
 		dataBytes = decompressedData
 	}
 
+	// Verify integrity against stored checksum (SEC-SLURP-1.1a requirement)
+	if entry.Checksum != "" {
+		computed := ls.computeChecksum(dataBytes)
+		if computed != entry.Checksum {
+			return nil, fmt.Errorf("data integrity check failed for key %s", key)
+		}
+	}
+
 	// Deserialize data
 	var result interface{}
 	if err := json.Unmarshal(dataBytes, &result); err != nil {
@@ -260,6 +270,7 @@ func (ls *LocalStorageImpl) Delete(ctx context.Context, key string) error {
 	if entryBytes != nil {
 		ls.metrics.TotalSize -= int64(len(entryBytes))
 	}
+	ls.updateFileMetricsLocked()
 
 	return nil
 }
@@ -397,6 +408,7 @@ type StorageEntry struct {
 	Compressed     bool                   `json:"compressed"`
 	OriginalSize   int64                  `json:"original_size"`
 	CompressedSize int64                  `json:"compressed_size"`
+	Checksum       string                 `json:"checksum"`
 	AccessLevel    string                 `json:"access_level"`
 	Metadata       map[string]interface{} `json:"metadata"`
 }
@@ -434,6 +446,42 @@ func (ls *LocalStorageImpl) compress(data []byte) ([]byte, error) {
 	return compressed, nil
 }
 
+func (ls *LocalStorageImpl) computeChecksum(data []byte) string {
+	// Compute SHA-256 checksum to satisfy SEC-SLURP-1.1a integrity tracking
+	digest := sha256.Sum256(data)
+	return fmt.Sprintf("%x", digest)
+}
+
+func (ls *LocalStorageImpl) updateFileMetricsLocked() {
+	// Refresh filesystem metrics using io/fs traversal (SEC-SLURP-1.1a durability telemetry)
+	var fileCount int64
+	var aggregateSize int64
+
+	walkErr := fs.WalkDir(os.DirFS(ls.basePath), ".", func(path string, d fs.DirEntry, err error) error {
+		if err != nil {
+			return err
+		}
+		if d.IsDir() {
+			return nil
+		}
+		fileCount++
+		if info, infoErr := d.Info(); infoErr == nil {
+			aggregateSize += info.Size()
+		}
+		return nil
+	})
+
+	if walkErr != nil {
+		fmt.Printf("filesystem metrics refresh failed: %v\n", walkErr)
+		return
+	}
+
+	ls.metrics.TotalFiles = fileCount
+	if aggregateSize > 0 {
+		ls.metrics.TotalSize = aggregateSize
+	}
+}
+
 func (ls *LocalStorageImpl) decompress(data []byte) ([]byte, error) {
 	// Create gzip reader
 	reader, err := gzip.NewReader(bytes.NewReader(data))
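The integrity check only holds because both sides hash the same byte stream: Store computes the checksum from the serialized payload before compression, and Retrieve recomputes it after decompression. A runnable round-trip of that invariant, as a standalone mirror of the helper above:

package main

import (
	"crypto/sha256"
	"fmt"
)

// computeChecksum mirrors the helper added above: hex-encoded SHA-256
// of the uncompressed, serialized payload.
func computeChecksum(data []byte) string {
	digest := sha256.Sum256(data)
	return fmt.Sprintf("%x", digest)
}

func main() {
	payload := []byte(`{"ucxl_address":"ucxl://example","version":3}`)
	stored := computeChecksum(payload) // Store side, pre-compression

	// Retrieve side, post-decompression: the bytes must hash identically.
	if computeChecksum(payload) != stored {
		fmt.Println("data integrity check failed")
		return
	}
	fmt.Println("checksum ok:", stored)
}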
@@ -97,6 +97,84 @@ type AlertManager struct {
 	maxHistory int
 }
 
+func (am *AlertManager) severityRank(severity AlertSeverity) int {
+	switch severity {
+	case SeverityCritical:
+		return 4
+	case SeverityError:
+		return 3
+	case SeverityWarning:
+		return 2
+	case SeverityInfo:
+		return 1
+	default:
+		return 0
+	}
+}
+
+// GetActiveAlerts returns sorted active alerts (SEC-SLURP-1.1 monitoring path)
+func (am *AlertManager) GetActiveAlerts() []*Alert {
+	am.mu.RLock()
+	defer am.mu.RUnlock()
+
+	if len(am.activealerts) == 0 {
+		return nil
+	}
+
+	alerts := make([]*Alert, 0, len(am.activealerts))
+	for _, alert := range am.activealerts {
+		alerts = append(alerts, alert)
+	}
+
+	sort.Slice(alerts, func(i, j int) bool {
+		iRank := am.severityRank(alerts[i].Severity)
+		jRank := am.severityRank(alerts[j].Severity)
+		if iRank == jRank {
+			return alerts[i].StartTime.After(alerts[j].StartTime)
+		}
+		return iRank > jRank
+	})
+
+	return alerts
+}
+
+// Snapshot marshals monitoring state for UCXL persistence (SEC-SLURP-1.1a telemetry)
+func (ms *MonitoringSystem) Snapshot(ctx context.Context) (string, error) {
+	ms.mu.RLock()
+	defer ms.mu.RUnlock()
+
+	if ms.alerts == nil {
+		return "", fmt.Errorf("alert manager not initialised")
+	}
+
+	active := ms.alerts.GetActiveAlerts()
+	alertPayload := make([]map[string]interface{}, 0, len(active))
+	for _, alert := range active {
+		alertPayload = append(alertPayload, map[string]interface{}{
+			"id":         alert.ID,
+			"name":       alert.Name,
+			"severity":   alert.Severity,
+			"message":    fmt.Sprintf("%s (threshold %.2f)", alert.Description, alert.Threshold),
+			"labels":     alert.Labels,
+			"started_at": alert.StartTime,
+		})
+	}
+
+	snapshot := map[string]interface{}{
+		"node_id":      ms.nodeID,
+		"generated_at": time.Now().UTC(),
+		"alert_count":  len(active),
+		"alerts":       alertPayload,
+	}
+
+	encoded, err := json.MarshalIndent(snapshot, "", "  ")
+	if err != nil {
+		return "", fmt.Errorf("failed to marshal monitoring snapshot: %w", err)
+	}
+
+	return string(encoded), nil
+}
+
 // AlertRule defines conditions for triggering alerts
 type AlertRule struct {
 	ID string `json:"id"`
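GetActiveAlerts orders by severityRank descending and breaks ties with StartTime.After, so within a severity band the newest alert surfaces first. A trimmed, standalone demonstration of that ordering (the alert struct here is illustrative, reduced to the fields the sort touches):

package main

import (
	"fmt"
	"sort"
	"time"
)

type alert struct {
	Name      string
	Rank      int // severityRank output: critical=4 … info=1
	StartTime time.Time
}

func main() {
	now := time.Now()
	alerts := []alert{
		{"disk-usage", 2, now.Add(-10 * time.Minute)},
		{"replica-loss", 4, now.Add(-time.Minute)},
		{"cache-pressure", 2, now},
	}
	// Same comparator as GetActiveAlerts: severity first, then recency.
	sort.Slice(alerts, func(i, j int) bool {
		if alerts[i].Rank == alerts[j].Rank {
			return alerts[i].StartTime.After(alerts[j].StartTime)
		}
		return alerts[i].Rank > alerts[j].Rank
	})
	for _, a := range alerts {
		fmt.Println(a.Name) // replica-loss, cache-pressure, disk-usage
	}
}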
@@ -3,9 +3,8 @@ package storage
 import (
 	"time"
 
-	"chorus/pkg/ucxl"
-	"chorus/pkg/crypto"
 	slurpContext "chorus/pkg/slurp/context"
+	"chorus/pkg/ucxl"
 )
 
 // DatabaseSchema defines the complete schema for encrypted context storage
@@ -291,6 +291,7 @@ type BackupConfig struct {
 	Encryption     bool          `json:"encryption"`       // Enable encryption
 	EncryptionKey  string        `json:"encryption_key"`   // Encryption key
 	Incremental    bool          `json:"incremental"`      // Incremental backup
+	ParentBackupID string        `json:"parent_backup_id"` // Parent backup reference
 	Retention      time.Duration `json:"retention"`        // Backup retention period
 	Metadata       map[string]interface{} `json:"metadata"` // Additional metadata
 }
@@ -298,16 +299,25 @@ type BackupConfig struct {
 // BackupInfo represents information about a backup
 type BackupInfo struct {
 	ID             string    `json:"id"`               // Backup ID
+	BackupID       string    `json:"backup_id"`        // Legacy identifier
 	Name           string    `json:"name"`             // Backup name
+	Destination    string    `json:"destination"`      // Destination path
 	CreatedAt      time.Time `json:"created_at"`       // Creation time
 	Size           int64     `json:"size"`             // Backup size
 	CompressedSize int64     `json:"compressed_size"`  // Compressed size
+	DataSize       int64     `json:"data_size"`        // Total data size
 	ContextCount   int64     `json:"context_count"`    // Number of contexts
 	Encrypted      bool      `json:"encrypted"`        // Whether encrypted
 	Incremental    bool      `json:"incremental"`      // Whether incremental
 	ParentBackupID string    `json:"parent_backup_id"` // Parent backup for incremental
+	IncludesIndexes bool     `json:"includes_indexes"` // Include indexes
+	IncludesCache   bool     `json:"includes_cache"`   // Include cache data
 	Checksum       string    `json:"checksum"`         // Backup checksum
 	Status         BackupStatus `json:"status"`        // Backup status
+	Progress       float64   `json:"progress"`         // Completion progress 0-1
+	ErrorMessage   string    `json:"error_message"`    // Last error message
+	RetentionUntil time.Time `json:"retention_until"`  // Retention deadline
+	CompletedAt    *time.Time `json:"completed_at"`    // Completion time
 	Metadata       map[string]interface{} `json:"metadata"` // Additional metadata
 }
 
@@ -5,7 +5,9 @@ import (
 	"fmt"
 	"time"
 
+	slurpContext "chorus/pkg/slurp/context"
 	"chorus/pkg/slurp/storage"
+	"chorus/pkg/ucxl"
 )
 
 // TemporalGraphFactory creates and configures temporal graph components
@@ -309,7 +311,7 @@ func (cd *conflictDetectorImpl) ResolveTemporalConflict(ctx context.Context, con
 	// Implementation would resolve specific temporal conflicts
 	return &ConflictResolution{
 		ConflictID:       conflict.ID,
-		Resolution:       "auto_resolved",
+		ResolutionMethod: "auto_resolved",
 		ResolvedAt:       time.Now(),
 		ResolvedBy:       "system",
 		Confidence:       0.8,
@@ -9,9 +9,9 @@ import (
 	"sync"
 	"time"
 
-	"chorus/pkg/ucxl"
 	slurpContext "chorus/pkg/slurp/context"
 	"chorus/pkg/slurp/storage"
+	"chorus/pkg/ucxl"
 )
 
 // temporalGraphImpl implements the TemporalGraph interface
@@ -534,7 +534,7 @@ func (tg *temporalGraphImpl) FindDecisionPath(ctx context.Context, from, to ucxl
 		return nil, fmt.Errorf("from node not found: %w", err)
 	}
 
-	toNode, err := tg.getLatestNodeUnsafe(to)
+	_, err = tg.getLatestNodeUnsafe(to)
 	if err != nil {
 		return nil, fmt.Errorf("to node not found: %w", err)
 	}
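The replacement line has to use plain assignment: err is already declared by the earlier `fromNode, err :=`, and the blank identifier never counts as a new variable, so `_, err :=` would fail with "no new variables on left side of :=". In miniature:

package main

import "errors"

func lookup() (int, error) { return 0, errors.New("not found") }

func main() {
	v, err := lookup() // declares v and err
	_ = v
	// _, err := lookup() // compile error: no new variables on left side of :=
	_, err = lookup() // correct: reassign the existing err
	if err != nil {
		println(err.Error())
	}
}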
@@ -899,7 +899,6 @@ func (ia *influenceAnalyzerImpl) findShortestPathLength(fromID, toID string) int
 
 func (ia *influenceAnalyzerImpl) getNodeCentrality(nodeID string) float64 {
 	// Simple centrality based on degree
-	influences := len(ia.graph.influences[nodeID])
 	influencedBy := len(ia.graph.influencedBy[nodeID])
 	totalNodes := len(ia.graph.nodes)
 
@@ -252,7 +252,7 @@ func (dn *decisionNavigatorImpl) ResetNavigation(ctx context.Context, address uc
 	defer dn.mu.Unlock()
 
 	// Clear any navigation sessions for this address
-	for sessionID, session := range dn.navigationSessions {
+	for _, session := range dn.navigationSessions {
 		if session.CurrentPosition.String() == address.String() {
 			// Reset to latest version
 			latestNode, err := dn.graph.getLatestNodeUnsafe(address)
@@ -7,8 +7,8 @@ import (
 	"sync"
 	"time"
 
-	"chorus/pkg/ucxl"
 	"chorus/pkg/slurp/storage"
+	"chorus/pkg/ucxl"
 )
 
 // persistenceManagerImpl handles persistence and synchronization of temporal graph data
@@ -289,17 +289,9 @@ func (pm *persistenceManagerImpl) BackupGraph(ctx context.Context) error {
 		return fmt.Errorf("failed to create snapshot: %w", err)
 	}
 
-	// Serialize snapshot
-	data, err := json.Marshal(snapshot)
-	if err != nil {
-		return fmt.Errorf("failed to serialize snapshot: %w", err)
-	}
-
 	// Create backup configuration
 	backupConfig := &storage.BackupConfig{
-		Type:        "temporal_graph",
-		Description: "Temporal graph backup",
-		Tags:        []string{"temporal", "graph", "decision"},
+		Name: "temporal_graph",
 		Metadata: map[string]interface{}{
 			"node_count": snapshot.Metadata.NodeCount,
 			"edge_count": snapshot.Metadata.EdgeCount,
@@ -356,16 +348,14 @@ func (pm *persistenceManagerImpl) flushWriteBuffer() error {
 
 	// Create batch store request
 	batch := &storage.BatchStoreRequest{
-		Operations: make([]*storage.BatchStoreOperation, len(pm.writeBuffer)),
+		Contexts:    make([]*storage.ContextStoreItem, len(pm.writeBuffer)),
+		Roles:       pm.config.EncryptionRoles,
+		FailOnError: true,
 	}
 
 	for i, node := range pm.writeBuffer {
-		key := pm.generateNodeKey(node)
-		batch.Operations[i] = &storage.BatchStoreOperation{
-			Type:  "store",
-			Key:   key,
-			Data:  node,
+		batch.Contexts[i] = &storage.ContextStoreItem{
+			Context: node,
 			Roles: pm.config.EncryptionRoles,
 		}
 	}
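For reference, the request shape the rewritten flushWriteBuffer targets, inferred from this hunk alone — the real definitions live in chorus/pkg/slurp/storage and may carry more fields:

// Inferred shapes only; not authoritative.
type ContextStoreItem struct {
	Context interface{} // temporal node being persisted
	Roles   []string    // per-item encryption roles
}

type BatchStoreRequest struct {
	Contexts    []*ContextStoreItem
	Roles       []string // batch-level default encryption roles
	FailOnError bool     // abort the whole batch on first error
}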
@@ -841,21 +831,7 @@ type SyncConflict struct {
 	Severity string `json:"severity"`
 }
 
-type ConflictType string
-
-const (
-	ConflictTypeNodeMismatch      ConflictType = "node_mismatch"
-	ConflictTypeInfluenceMismatch ConflictType = "influence_mismatch"
-	ConflictTypeMetadataMismatch  ConflictType = "metadata_mismatch"
-)
-
-type ConflictResolution struct {
-	ConflictID   string      `json:"conflict_id"`
-	Resolution   string      `json:"resolution"`
-	ResolvedData interface{} `json:"resolved_data"`
-	ResolvedAt   time.Time   `json:"resolved_at"`
-	ResolvedBy   string      `json:"resolved_by"`
-}
-
 // Default conflict resolver implementation
 
@@ -44,6 +44,7 @@ type ContextNode struct {
 	CreatedBy  string    `json:"created_by"` // Who/what created this context
 	CreatedAt  time.Time `json:"created_at"` // When created
 	UpdatedAt  time.Time `json:"updated_at"` // When last updated
+	UpdatedBy  string    `json:"updated_by"` // Who performed the last update
 	Confidence float64   `json:"confidence"` // Confidence in accuracy (0-1)
 
 	// Cascading behavior rules