Docs: Comprehensive inline rustdoc and architectural summary PDF
.serena/.gitignore (vendored, new file, 1 line)
@@ -0,0 +1 @@
+/cache
.serena/project.yml (new file, 126 lines)
@@ -0,0 +1,126 @@
+# the name by which the project can be referenced within Serena
+project_name: "CHORUS"
+
+
+# list of languages for which language servers are started; choose from:
+# al bash clojure cpp csharp
+# csharp_omnisharp dart elixir elm erlang
+# fortran fsharp go groovy haskell
+# java julia kotlin lua markdown
+# matlab nix pascal perl php
+# php_phpactor powershell python python_jedi r
+# rego ruby ruby_solargraph rust scala
+# swift terraform toml typescript typescript_vts
+# vue yaml zig
+# (This list may be outdated. For the current list, see values of Language enum here:
+# https://github.com/oraios/serena/blob/main/src/solidlsp/ls_config.py
+# For some languages, there are alternative language servers, e.g. csharp_omnisharp, ruby_solargraph.)
+# Note:
+# - For C, use cpp
+# - For JavaScript, use typescript
+# - For Free Pascal/Lazarus, use pascal
+# Special requirements:
+# Some languages require additional setup/installations.
+# See here for details: https://oraios.github.io/serena/01-about/020_programming-languages.html#language-servers
+# When using multiple languages, the first language server that supports a given file will be used for that file.
+# The first language is the default language and the respective language server will be used as a fallback.
+# Note that when using the JetBrains backend, language servers are not used and this list is correspondingly ignored.
+languages:
+  - rust
+
+# the encoding used by text files in the project
+# For a list of possible encodings, see https://docs.python.org/3.11/library/codecs.html#standard-encodings
+encoding: "utf-8"
+
+# The language backend to use for this project.
+# If not set, the global setting from serena_config.yml is used.
+# Valid values: LSP, JetBrains
+# Note: the backend is fixed at startup. If a project with a different backend
+# is activated post-init, an error will be returned.
+language_backend:
+
+# whether to use project's .gitignore files to ignore files
+ignore_all_files_in_gitignore: true
+
+# list of additional paths to ignore in this project.
+# Same syntax as gitignore, so you can use * and **.
+# Note: global ignored_paths from serena_config.yml are also applied additively.
+ignored_paths: []
+
+# whether the project is in read-only mode
+# If set to true, all editing tools will be disabled and attempts to use them will result in an error
+# Added on 2025-04-18
+read_only: false
+
+# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
+# Below is the complete list of tools for convenience.
+# To make sure you have the latest list of tools, and to view their descriptions,
+# execute `uv run scripts/print_tool_overview.py`.
+#
+# * `activate_project`: Activates a project by name.
+# * `check_onboarding_performed`: Checks whether project onboarding was already performed.
+# * `create_text_file`: Creates/overwrites a file in the project directory.
+# * `delete_lines`: Deletes a range of lines within a file.
+# * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
+# * `execute_shell_command`: Executes a shell command.
+# * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
+# * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
+# * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
+# * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
+# * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
+# * `initial_instructions`: Gets the initial instructions for the current project.
+#    Should only be used in settings where the system prompt cannot be set,
+#    e.g. in clients you have no control over, like Claude Desktop.
+# * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
+# * `insert_at_line`: Inserts content at a given line in a file.
+# * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
+# * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
+# * `list_memories`: Lists memories in Serena's project-specific memory store.
+# * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
+# * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
+# * `read_file`: Reads a file within the project directory.
+# * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
+# * `remove_project`: Removes a project from the Serena configuration.
+# * `replace_lines`: Replaces a range of lines within a file with new content.
+# * `replace_symbol_body`: Replaces the full definition of a symbol.
+# * `restart_language_server`: Restarts the language server; may be necessary when edits not through Serena happen.
+# * `search_for_pattern`: Performs a search for a pattern in the project.
+# * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
+# * `switch_modes`: Activates modes by providing a list of their names.
+# * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
+# * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
+# * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
+# * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
+excluded_tools: []
+
+# list of tools to include that would otherwise be disabled (particularly optional tools that are disabled by default)
+included_optional_tools: []
+
+# fixed set of tools to use as the base tool set (if non-empty), replacing Serena's default set of tools.
+# This cannot be combined with non-empty excluded_tools or included_optional_tools.
+fixed_tools: []
+
+# list of mode names that are always to be included in the set of active modes
+# The full set of modes to be activated is base_modes + default_modes.
+# If the setting is undefined, the base_modes from the global configuration (serena_config.yml) apply.
+# Otherwise, this setting overrides the global configuration.
+# Set this to [] to disable base modes for this project.
+# Set this to a list of mode names to always include the respective modes for this project.
+base_modes:
+
+# list of mode names that are to be activated by default.
+# The full set of modes to be activated is base_modes + default_modes.
+# If the setting is undefined, the default_modes from the global configuration (serena_config.yml) apply.
+# Otherwise, this overrides the setting from the global configuration (serena_config.yml).
+# This setting can, in turn, be overridden by CLI parameters (--mode).
+default_modes:
+
+# initial prompt for the project. It will always be given to the LLM upon activating the project
+# (contrary to the memories, which are loaded on demand).
+initial_prompt: ""
+
+# time budget (seconds) per tool call for the retrieval of additional symbol information
+# such as docstrings or parameter information.
+# This overrides the corresponding setting in the global configuration; see the documentation there.
+# If null or missing, use the setting from the global configuration.
+symbol_info_budget:
Cargo.lock (generated, new file, 1404 lines)
File diff suppressed because it is too large
Cargo.toml (new file, 13 lines)
@@ -0,0 +1,13 @@
+[workspace]
+members = [
+    "UCXL",
+    "chrs-mail",
+    "chrs-graph",
+    "chrs-agent",
+    "chrs-sync",
+    "chrs-slurp",
+    "chrs-shhh",
+    "chrs-bubble",
+    "chrs-poc"
+]
+resolver = "2"
UCXL/.serena/.gitignore (vendored, new file, 1 line)
@@ -0,0 +1 @@
+/cache
UCXL/.serena/project.yml (new file, 126 lines)
@@ -0,0 +1,126 @@
+# the name by which the project can be referenced within Serena
+project_name: "UCXL"
+
+
+# list of languages for which language servers are started; choose from:
+# al bash clojure cpp csharp
+# csharp_omnisharp dart elixir elm erlang
+# fortran fsharp go groovy haskell
+# java julia kotlin lua markdown
+# matlab nix pascal perl php
+# php_phpactor powershell python python_jedi r
+# rego ruby ruby_solargraph rust scala
+# swift terraform toml typescript typescript_vts
+# vue yaml zig
+# (This list may be outdated. For the current list, see values of Language enum here:
+# https://github.com/oraios/serena/blob/main/src/solidlsp/ls_config.py
+# For some languages, there are alternative language servers, e.g. csharp_omnisharp, ruby_solargraph.)
+# Note:
+# - For C, use cpp
+# - For JavaScript, use typescript
+# - For Free Pascal/Lazarus, use pascal
+# Special requirements:
+# Some languages require additional setup/installations.
+# See here for details: https://oraios.github.io/serena/01-about/020_programming-languages.html#language-servers
+# When using multiple languages, the first language server that supports a given file will be used for that file.
+# The first language is the default language and the respective language server will be used as a fallback.
+# Note that when using the JetBrains backend, language servers are not used and this list is correspondingly ignored.
+languages:
+  - rust
+
+# the encoding used by text files in the project
+# For a list of possible encodings, see https://docs.python.org/3.11/library/codecs.html#standard-encodings
+encoding: "utf-8"
+
+# The language backend to use for this project.
+# If not set, the global setting from serena_config.yml is used.
+# Valid values: LSP, JetBrains
+# Note: the backend is fixed at startup. If a project with a different backend
+# is activated post-init, an error will be returned.
+language_backend:
+
+# whether to use project's .gitignore files to ignore files
+ignore_all_files_in_gitignore: true
+
+# list of additional paths to ignore in this project.
+# Same syntax as gitignore, so you can use * and **.
+# Note: global ignored_paths from serena_config.yml are also applied additively.
+ignored_paths: []
+
+# whether the project is in read-only mode
+# If set to true, all editing tools will be disabled and attempts to use them will result in an error
+# Added on 2025-04-18
+read_only: false
+
+# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
+# Below is the complete list of tools for convenience.
+# To make sure you have the latest list of tools, and to view their descriptions,
+# execute `uv run scripts/print_tool_overview.py`.
+#
+# * `activate_project`: Activates a project by name.
+# * `check_onboarding_performed`: Checks whether project onboarding was already performed.
+# * `create_text_file`: Creates/overwrites a file in the project directory.
+# * `delete_lines`: Deletes a range of lines within a file.
+# * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
+# * `execute_shell_command`: Executes a shell command.
+# * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
+# * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
+# * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
+# * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
+# * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
+# * `initial_instructions`: Gets the initial instructions for the current project.
+#    Should only be used in settings where the system prompt cannot be set,
+#    e.g. in clients you have no control over, like Claude Desktop.
+# * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
+# * `insert_at_line`: Inserts content at a given line in a file.
+# * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
+# * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
+# * `list_memories`: Lists memories in Serena's project-specific memory store.
+# * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
+# * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
+# * `read_file`: Reads a file within the project directory.
+# * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
+# * `remove_project`: Removes a project from the Serena configuration.
+# * `replace_lines`: Replaces a range of lines within a file with new content.
+# * `replace_symbol_body`: Replaces the full definition of a symbol.
+# * `restart_language_server`: Restarts the language server; may be necessary when edits not through Serena happen.
+# * `search_for_pattern`: Performs a search for a pattern in the project.
+# * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
+# * `switch_modes`: Activates modes by providing a list of their names.
+# * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
+# * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
+# * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
+# * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
+excluded_tools: []
+
+# list of tools to include that would otherwise be disabled (particularly optional tools that are disabled by default)
+included_optional_tools: []
+
+# fixed set of tools to use as the base tool set (if non-empty), replacing Serena's default set of tools.
+# This cannot be combined with non-empty excluded_tools or included_optional_tools.
+fixed_tools: []
+
+# list of mode names that are always to be included in the set of active modes
+# The full set of modes to be activated is base_modes + default_modes.
+# If the setting is undefined, the base_modes from the global configuration (serena_config.yml) apply.
+# Otherwise, this setting overrides the global configuration.
+# Set this to [] to disable base modes for this project.
+# Set this to a list of mode names to always include the respective modes for this project.
+base_modes:
+
+# list of mode names that are to be activated by default.
+# The full set of modes to be activated is base_modes + default_modes.
+# If the setting is undefined, the default_modes from the global configuration (serena_config.yml) apply.
+# Otherwise, this overrides the setting from the global configuration (serena_config.yml).
+# This setting can, in turn, be overridden by CLI parameters (--mode).
+default_modes:
+
+# initial prompt for the project. It will always be given to the LLM upon activating the project
+# (contrary to the memories, which are loaded on demand).
+initial_prompt: ""
+
+# time budget (seconds) per tool call for the retrieval of additional symbol information
+# such as docstrings or parameter information.
+# This overrides the corresponding setting in the global configuration; see the documentation there.
+# If null or missing, use the setting from the global configuration.
+symbol_info_budget:
UCXL/src/lib.rs (119 lines changed)
@@ -1,4 +1,9 @@
-// UCXL Core Data Structures
+//! UCXL core data structures and utilities.
+//!
+//! This module provides the fundamental types used throughout the CHORUS
+//! system for addressing resources (UCXL addresses), handling temporal axes,
+//! and storing lightweight metadata. The implementation is deliberately
+//! lightweight and in‑memory to keep the core fast and dependency‑free.
 
 pub mod watcher;
 
@@ -7,18 +12,41 @@ use std::fmt;
 use std::str::FromStr;
 
+/// Represents the temporal axis in a UCXL address.
+///
+/// **What**: An enumeration of the three supported temporal positions –
+/// present, past, and future – each represented by a symbolic string in the
+/// address format.
+///
+/// **How**: The enum derives `Debug`, `PartialEq`, `Eq`, `Clone`, and `Copy`
+/// for ergonomic usage. Conversions to and from strings are provided via the
+/// `FromStr` and `fmt::Display` implementations.
+///
+/// **Why**: Temporal axes enable UCXL to refer to data at different points in
+/// time (e.g. versioned resources). The simple three‑state model matches the
+/// CHORUS architectural decision to keep addressing lightweight while still
+/// supporting historical and speculative queries.
 #[derive(Debug, PartialEq, Eq, Clone, Copy)]
 pub enum TemporalAxis {
-    /// Present ("#")
+    /// Present ("#") – the current version of a resource.
     Present,
-    /// Past ("~~")
+    /// Past ("~~") – a historical snapshot of a resource.
     Past,
-    /// Future ("^^")
+    /// Future ("^^") – a speculative or planned version of a resource.
     Future,
 }
 
 impl FromStr for TemporalAxis {
     type Err = String;
+    /// Parses a temporal axis token from its textual representation.
+    ///
+    /// **What**: Accepts "#", "~~" or "^^" and maps them to the corresponding
+    /// enum variant.
+    ///
+    /// **How**: A simple `match` statement is used; an error string is
+    /// returned for any unrecognised token.
+    ///
+    /// **Why**: Centralises validation of temporal markers used throughout the
+    /// address parsing logic, ensuring consistency.
     fn from_str(s: &str) -> Result<Self, Self::Err> {
         match s {
             "#" => Ok(TemporalAxis::Present),
@@ -30,6 +58,15 @@ impl FromStr for TemporalAxis {
 }
 
 impl fmt::Display for TemporalAxis {
+    /// Formats the temporal axis back to its string token.
+    ///
+    /// **What**: Returns "#", "~~" or "^^" depending on the variant.
+    ///
+    /// **How**: Matches on `self` and writes the corresponding string to the
+    /// formatter.
+    ///
+    /// **Why**: Required for serialising a `UCXLAddress` back to its textual
+    /// representation.
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         let s = match self {
             TemporalAxis::Present => "#",
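The `FromStr`/`Display` pair above is meant to round-trip: parsing a token and formatting it back yields the original string. A minimal, self-contained sketch of that contract (the enum is re-declared locally here, since the hunks above show only fragments of the real file):

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum TemporalAxis {
    Present, // "#"
    Past,    // "~~"
    Future,  // "^^"
}

impl FromStr for TemporalAxis {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Centralised validation of the three temporal markers.
        match s {
            "#" => Ok(TemporalAxis::Present),
            "~~" => Ok(TemporalAxis::Past),
            "^^" => Ok(TemporalAxis::Future),
            other => Err(format!("unknown temporal token: {other}")),
        }
    }
}

impl fmt::Display for TemporalAxis {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Inverse of from_str: emit the symbolic token.
        f.write_str(match self {
            TemporalAxis::Present => "#",
            TemporalAxis::Past => "~~",
            TemporalAxis::Future => "^^",
        })
    }
}

fn main() {
    // Round-trip: token -> enum -> token.
    let axis: TemporalAxis = "~~".parse().expect("valid token");
    assert_eq!(axis, TemporalAxis::Past);
    assert_eq!(axis.to_string(), "~~");
    // Unknown tokens are rejected with a descriptive error.
    assert!("??".parse::<TemporalAxis>().is_err());
}
```

Keeping the token strings in exactly two places (one `match` per direction) is what makes the round-trip property easy to verify by inspection.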
@@ -41,18 +78,48 @@ impl fmt::Display for TemporalAxis {
 }
 
+/// Represents a parsed UCXL address.
+///
+/// **What**: Holds the components extracted from a UCXL URI – the agent, an
+/// optional role, the project identifier, task name, temporal axis, and the
+/// resource path within the project.
+///
+/// **How**: The struct is constructed via the `FromStr` implementation which
+/// validates the scheme, splits the address into its constituent parts and
+/// populates the fields. The `Display` implementation performs the inverse
+/// operation.
+///
+/// **Why**: UCXL addresses are the primary routing mechanism inside CHORUS.
+/// Encapsulating them in a dedicated type provides type‑safety and makes it
+/// easy to work with address components in the rest of the codebase.
 #[derive(Debug, PartialEq, Eq, Clone)]
 pub struct UCXLAddress {
+    /// The identifier of the agent (e.g., a user or system component).
     pub agent: String,
+    /// Optional role associated with the agent (e.g., "admin").
     pub role: Option<String>,
+    /// The project namespace this address belongs to.
     pub project: String,
+    /// The specific task within the project.
     pub task: String,
+    /// Temporal axis indicating present, past or future.
     pub temporal: TemporalAxis,
+    /// Path to the resource relative to the project root.
     pub path: String,
 }
 
 impl FromStr for UCXLAddress {
     type Err = String;
+    /// Parses a full UCXL address string into a `UCXLAddress` value.
+    ///
+    /// **What**: Validates the scheme (`ucxl://`), extracts the agent, optional
+    /// role, project, task, temporal axis and the trailing resource path.
+    ///
+    /// **How**: The implementation performs a series of `split` operations,
+    /// handling optional components and converting the temporal token via
+    /// `TemporalAxis::from_str`. Errors are surfaced as descriptive strings.
+    ///
+    /// **Why**: Centralises address parsing logic, ensuring that all parts of
+    /// the system interpret UCXL URIs consistently.
     fn from_str(address: &str) -> Result<Self, Self::Err> {
         // Ensure the scheme is correct
         let scheme_split: Vec<&str> = address.splitn(2, "://").collect();
@@ -102,6 +169,16 @@ impl FromStr for UCXLAddress {
 }
 
 impl fmt::Display for UCXLAddress {
+    /// Serialises the address back to its canonical string form.
+    ///
+    /// **What**: Constructs a `ucxl://` URI including optional role and path.
+    ///
+    /// **How**: Conditionally inserts the role component, then formats the
+    /// project, task, temporal token and optional path using standard `write!`
+    /// semantics.
+    ///
+    /// **Why**: Needed when emitting addresses (e.g., logging events or
+    /// generating links) so that external tools can consume them.
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         let role_part = if let Some(r) = &self.role {
             format!(":{}", r)
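The body of the real parser falls between the hunks above, so its exact splitting rules are not visible in this diff. The following is a hypothetical, self-contained sketch of a parser for the address shape the commit uses elsewhere, `ucxl://agent[:role]@project:task/<temporal>/<path>` (the watcher emits `ucxl://system:watcher@local:filesystem/#/...`); the `parse` function and the local struct are illustrative, not the crate's actual code:

```rust
// Hypothetical sketch: parse "ucxl://agent[:role]@project:task/<temporal>/<path>".
// The temporal token is kept as a plain string here to stay self-contained.
#[derive(Debug, PartialEq)]
struct UCXLAddress {
    agent: String,
    role: Option<String>,
    project: String,
    task: String,
    temporal: String, // "#", "~~" or "^^"
    path: String,
}

fn parse(address: &str) -> Result<UCXLAddress, String> {
    // Validate the scheme first.
    let rest = address
        .strip_prefix("ucxl://")
        .ok_or_else(|| "missing ucxl:// scheme".to_string())?;
    // Split authority (agent[:role]) from the rest at '@'.
    let (authority, remainder) = rest
        .split_once('@')
        .ok_or_else(|| "missing '@' separator".to_string())?;
    let (agent, role) = match authority.split_once(':') {
        Some((a, r)) => (a.to_string(), Some(r.to_string())),
        None => (authority.to_string(), None),
    };
    // remainder = "project:task/<temporal>/<path>"
    let mut parts = remainder.splitn(3, '/');
    let project_task = parts.next().ok_or("missing project:task")?;
    let (project, task) = project_task
        .split_once(':')
        .ok_or("missing ':' between project and task")?;
    let temporal = parts.next().ok_or("missing temporal axis")?.to_string();
    let path = parts.next().unwrap_or("").to_string();
    Ok(UCXLAddress {
        agent,
        role,
        project: project.to_string(),
        task: task.to_string(),
        temporal,
        path,
    })
}

fn main() {
    let addr = parse("ucxl://system:watcher@local:filesystem/#/src/lib.rs").unwrap();
    assert_eq!(addr.agent, "system");
    assert_eq!(addr.role.as_deref(), Some("watcher"));
    assert_eq!(addr.project, "local");
    assert_eq!(addr.task, "filesystem");
    assert_eq!(addr.temporal, "#");
    assert_eq!(addr.path, "src/lib.rs");
}
```

Surfacing each failure as a distinct error string, as the real `from_str` also does, makes malformed addresses easy to diagnose in logs.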
@@ -125,21 +202,51 @@ impl fmt::Display for UCXLAddress {
     }
 }
 
-/// Simple in‑memory metadata store mapping a file path to a metadata string.
+/// Trait defining a simple key‑value metadata store.
+///
+/// **What**: Provides read, write and removal operations for associating a
+/// string of metadata with a file‑system path.
+///
+/// **How**: The trait abstracts over concrete storage implementations –
+/// currently an in‑memory `HashMap` – allowing callers to depend on the trait
+/// rather than a specific type.
+///
+/// **Why**: CHORUS needs a lightweight way to attach auxiliary information to
+/// files without persisting to a database; the trait makes it easy to swap in a
+/// persistent backend later if required.
 pub trait MetadataStore {
+    /// Retrieves the metadata for `path` if it exists.
     fn get(&self, path: &str) -> Option<&String>;
+    /// Stores `metadata` for `path`, overwriting any existing value.
     fn set(&mut self, path: &str, metadata: String);
+    /// Removes the metadata entry for `path`, returning the old value if any.
+    /// The default implementation is a no-op that returns `None`; concrete
+    /// stores should override it.
     fn remove(&mut self, path: &str) -> Option<String> {
        None
    }
 }
 
-/// A concrete in‑memory implementation using a HashMap.
+/// In‑memory implementation of `MetadataStore` backed by a `HashMap`.
+///
+/// **What**: Holds metadata in a hash map where the key is the file path.
+///
+/// **How**: Provides a `new` constructor and implements the `MetadataStore`
+/// trait methods by delegating to the underlying map.
+///
+/// **Why**: Offers a zero‑cost, dependency‑free store suitable for unit tests
+/// and simple scenarios. It can be replaced with a persistent store without
+/// changing callers.
 pub struct InMemoryMetadataStore {
     map: HashMap<String, String>,
 }
 
 impl InMemoryMetadataStore {
+    /// Creates a fresh, empty `InMemoryMetadataStore`.
+    ///
+    /// **What**: Returns a struct with an empty internal map.
+    ///
+    /// **How**: Calls `HashMap::new`.
+    ///
+    /// **Why**: Convenience constructor for callers.
     pub fn new() -> Self {
         InMemoryMetadataStore {
             map: HashMap::new(),
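One subtlety worth noting: the trait's default `remove` returns `None` without touching any storage, so a concrete store has to override it to actually delete entries. A self-contained sketch of the trait with a `HashMap`-backed implementation that overrides `remove` to delegate to `HashMap::remove`:

```rust
use std::collections::HashMap;

trait MetadataStore {
    fn get(&self, path: &str) -> Option<&String>;
    fn set(&mut self, path: &str, metadata: String);
    // Default is a no-op; implementations should override this.
    fn remove(&mut self, path: &str) -> Option<String> {
        let _ = path;
        None
    }
}

struct InMemoryMetadataStore {
    map: HashMap<String, String>,
}

impl InMemoryMetadataStore {
    fn new() -> Self {
        Self { map: HashMap::new() }
    }
}

impl MetadataStore for InMemoryMetadataStore {
    fn get(&self, path: &str) -> Option<&String> {
        self.map.get(path)
    }
    fn set(&mut self, path: &str, metadata: String) {
        // insert overwrites any existing value for the key.
        self.map.insert(path.to_string(), metadata);
    }
    fn remove(&mut self, path: &str) -> Option<String> {
        // Override the no-op default: actually delete and return the old value.
        self.map.remove(path)
    }
}

fn main() {
    let mut store = InMemoryMetadataStore::new();
    store.set("src/lib.rs", "core types".to_string());
    assert_eq!(store.get("src/lib.rs").map(String::as_str), Some("core types"));
    assert_eq!(store.remove("src/lib.rs"), Some("core types".to_string()));
    assert!(store.get("src/lib.rs").is_none());
}
```

Depending on the trait rather than the concrete type is what lets a persistent backend be swapped in later, as the rustdoc above argues.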
@@ -1,20 +1,63 @@
+//! UCXL filesystem watcher.
+//!
+//! This module provides a thin wrapper around the `notify` crate to watch a
+//! directory (or "project") for filesystem events. When a change is detected,
+//! the watcher attempts to construct a corresponding `UCXLAddress` using a
+//! simple heuristic and logs the event. This is primarily used by CHORUS for
+//! reactive workflows such as automatically updating metadata when files are
+//! added, modified or removed.
+
 use notify::{Config, RecommendedWatcher, RecursiveMode, Watcher};
 use std::path::Path;
 use std::sync::mpsc::channel;
-use crate::{UCXLAddress, TemporalAxis};
+use crate::UCXLAddress;
 use std::str::FromStr;
 
+/// Represents a watcher rooted at a specific base path.
+///
+/// **What**: Holds the absolute path that the watcher monitors.
+///
+/// **How**: The path is stored as a `PathBuf`. The watcher is created via the
+/// `new` constructor which accepts any type that can be referenced as a `Path`.
+/// The underlying `notify::RecommendedWatcher` is configured with the default
+/// `Config` and set to watch recursively.
+///
+/// **Why**: Encapsulating the watcher logic in a dedicated struct makes it easy
+/// to instantiate multiple independent watchers and keeps the public API tidy.
 pub struct UCXLWatcher {
     base_path: std::path::PathBuf,
 }
 
 impl UCXLWatcher {
+    /// Creates a new `UCXLWatcher` for the given path.
+    ///
+    /// **What**: Accepts any generic `AsRef<Path>` so callers can pass a `&str`,
+    /// `Path`, or `PathBuf`.
+    ///
+    /// **How**: The provided path is converted to a `PathBuf` and stored.
+    ///
+    /// **Why**: Convenience constructor used throughout CHORUS when a watcher is
+    /// needed for a project directory.
     pub fn new<P: AsRef<Path>>(path: P) -> Self {
         Self {
             base_path: path.as_ref().to_path_buf(),
         }
     }
 
+    /// Starts the watch loop, blocking indefinitely while handling events.
+    ///
+    /// **What**: Sets up a channel, creates a `RecommendedWatcher`, and begins
+    /// watching the `base_path` recursively. For each incoming event, it
+    /// attempts to map the filesystem path to a UCXL address and prints a log.
+    ///
+    /// **How**: Uses the `notify` crate's event API. The heuristic address
+    /// format is `ucxl://system:watcher@local:filesystem/#/<relative_path>`.
+    /// It parses this string with `UCXLAddress::from_str` and logs the result.
+    /// Errors from parsing are ignored (they simply aren't printed).
+    ///
+    /// **Why**: Provides a simple, observable bridge between raw filesystem
+    /// changes and the UCXL addressing scheme, allowing other components to react
+    /// to changes using a uniform identifier.
     pub fn watch_loop(&self) -> Result<(), Box<dyn std::error::Error>> {
         let (tx, rx) = channel();
 
@@ -29,8 +72,11 @@ impl UCXLWatcher {
             for path in event.paths {
                 if let Some(rel_path) = path.strip_prefix(&self.base_path).ok() {
                     let rel_str = rel_path.to_string_lossy();
-                    // Attempt a heuristic address mapping: ucxl://system:watcher@local:filesystem/#/path
-                    let addr_str = format!("ucxl://system:watcher@local:filesystem/#/{}", rel_str);
+                    // Heuristic address mapping: ucxl://system:watcher@local:filesystem/#/path
+                    let addr_str = format!(
+                        "ucxl://system:watcher@local:filesystem/#/{}",
+                        rel_str
+                    );
                     if let Ok(addr) = UCXLAddress::from_str(&addr_str) {
                         println!("[UCXL EVENT] {:?} -> {}", event.kind, addr);
                     }
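The mapping from a changed filesystem path to the heuristic address can be isolated from the `notify` event plumbing, which makes it testable without a running watcher. A self-contained sketch using only the standard library (the helper name `to_ucxl_address` is illustrative, and the paths assume a Unix-style layout):

```rust
use std::path::Path;

// Maps an absolute changed path to the watcher's heuristic UCXL address
// string. Returns None for paths outside the watched root, mirroring how
// watch_loop skips paths whose strip_prefix fails.
fn to_ucxl_address(base: &Path, changed: &Path) -> Option<String> {
    let rel = changed.strip_prefix(base).ok()?;
    Some(format!(
        "ucxl://system:watcher@local:filesystem/#/{}",
        rel.to_string_lossy()
    ))
}

fn main() {
    let base = Path::new("/projects/CHORUS");
    let changed = Path::new("/projects/CHORUS/src/lib.rs");
    assert_eq!(
        to_ucxl_address(base, changed).as_deref(),
        Some("ucxl://system:watcher@local:filesystem/#/src/lib.rs")
    );
    // Paths outside the watched root produce no address.
    assert!(to_ucxl_address(base, Path::new("/etc/passwd")).is_none());
}
```

Keeping the heuristic in a pure function like this would also make the silent drop of unparseable addresses (noted in the rustdoc above) easier to audit.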
@@ -1,3 +1,11 @@
+//! chrs-agent crate implements the core CHORUS agent runtime.
+//!
+//! An agent runs a message loop that receives tasks from a `Mailbox`, logs them to a
+//! `DoltGraph` (the persistent state graph), and marks them as read. The design
+//! follows the CHORUS architectural pattern where agents are autonomous workers
+//! that interact through the `chrs_mail` messaging layer and maintain a provable
+//! execution history in the graph.
+
 use chrs_graph::DoltGraph;
 use chrs_mail::{Mailbox, Message};
 use chrono::Utc;
@@ -6,13 +14,36 @@ use std::time::Duration;
 use tokio::time::sleep;
 use uuid::Uuid;
 
-struct CHORUSAgent {
+/// Represents a running CHORUS agent.
+///
+/// # Fields
+/// * `id` – Logical identifier for the agent (e.g., "agent-001").
+/// * `mailbox` – The `Mailbox` used for inter‑agent communication.
+/// * `graph` – Persistence layer (`DoltGraph`) where task logs are stored.
+///
+/// # Rationale
+/// Agents are isolated units of work. By keeping a dedicated mailbox and a graph
+/// per agent we guarantee that each agent can be started, stopped, and reasoned
+/// about independently while still contributing to the global CHORUS state.
+pub struct CHORUSAgent {
     id: String,
     mailbox: Mailbox,
     graph: DoltGraph,
 }
 
 impl CHORUSAgent {
+    /// Initializes a new `CHORUSAgent`.
+    ///
+    /// This creates the filesystem layout under `base_path`, opens or creates the
+    /// SQLite mailbox, and initialises a `DoltGraph` for state persistence.
+    /// It also ensures that a `task_log` table exists for recording incoming
+    /// messages.
+    ///
+    /// # Parameters
+    /// * `id` – Identifier for the agent instance.
+    /// * `base_path` – Directory where the agent stores its data.
+    ///
+    /// Returns an instance ready to run its event loop.
     async fn init(id: &str, base_path: &Path) -> Result<Self, Box<dyn std::error::Error>> {
         let mail_path = base_path.join("mail.sqlite");
         let graph_path = base_path.join("state_graph");
@@ -32,6 +63,12 @@ impl CHORUSAgent {
|
||||
})
|
||||
}
|
||||
|
||||
/// Main event loop of the agent.
|
||||
///
|
||||
/// It repeatedly polls the mailbox for pending messages addressed to this
|
||||
/// agent, logs each message into the `task_log` table, commits the graph, and
|
||||
/// acknowledges the message. The loop sleeps for a configurable interval to
|
||||
/// avoid busy‑waiting.
|
||||
async fn run_loop(&self) {
|
||||
println!("Agent {} starting run loop...", self.id);
|
||||
loop {
|
||||
@@ -60,6 +97,11 @@ impl CHORUSAgent {
|
||||
}
|
||||
}
|
||||
|
||||
/// Entry point for the CHORUS agent binary.
|
||||
///
|
||||
/// It creates a data directory under `/home/Tony/rust/projects/reset/CHORUS/data`
|
||||
/// (note the capitalised `Tony` matches the original path), initialises the
|
||||
/// `CHORUSAgent`, and starts its run loop.
|
||||
#[tokio::main]
|
||||
async fn main() -> Result<(), Box<dyn std::error::Error>> {
|
||||
let agent_id = "agent-001";
|
||||
|
||||
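The documented poll–log–ack–sleep cycle can be sketched std-only. This is a bounded, synchronous stand-in: the real loop is async, unbounded, and talks to `Mailbox` and `DoltGraph`; `poll_cycle` and the queue types here are hypothetical:

```rust
use std::collections::VecDeque;
use std::thread::sleep;
use std::time::Duration;

/// Bounded stand-in for `run_loop`: drain pending tasks, "log" each one
/// (in place of the task_log insert and graph commit), then sleep briefly
/// to avoid busy-waiting. `ticks` makes the example terminate.
pub fn poll_cycle(queue: &mut VecDeque<String>, log: &mut Vec<String>, ticks: usize) {
    for _ in 0..ticks {
        while let Some(task) = queue.pop_front() {
            // Consuming the task stands in for mark_read acknowledgement.
            log.push(task);
        }
        sleep(Duration::from_millis(1));
    }
}
```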
@@ -1,18 +1,63 @@
+//! # chrs-bubble
+//!
+//! A provenance‑tracking crate that records nodes and edges in a directed acyclic
+//! graph (DAG) and persists them using a Dolt‑backed graph implementation.
+//! The crate is deliberately small – it only pulls in `petgraph` for the in‑memory
+//! DAG, `serde` for serialization, `uuid` for unique identifiers and `thiserror`
+//! for ergonomic error handling. It is used by higher‑level components that need
+//! to capture the provenance of generated artifacts (e.g. files, messages, or
+//! results) and later query that history.
+//!
+//! The public API is organised around three concepts:
+//! * **ProvenanceEdge** – The type of relationship between two nodes.
+//! * **BubbleError** – Errors that can occur when interacting with the underlying
+//!   Dolt graph or when a node cannot be found.
+//! * **ProvenanceGraph** – The façade that holds an in‑memory DAG and a
+//!   `DoltGraph` persistence layer, exposing methods to record nodes and links.
+//!
+//! Each item is documented with a *WHAT*, *HOW* and *WHY* section so that users can
+//! quickly understand its purpose, its implementation details and the design
+//! rationale.
+
 use chrs_graph::{DoltGraph, GraphError};
-use ucxl::UCXLAddress;
-use serde::{Deserialize, Serialize};
-use thiserror::Error;
-use uuid::Uuid;
 use petgraph::graph::{DiGraph, NodeIndex};
+use serde::{Deserialize, Serialize};
+use std::collections::HashMap;
+use thiserror::Error;
+use ucxl::UCXLAddress;
+use uuid::Uuid;

+/// Represents the kind of relationship between two provenance nodes.
+///
+/// * **WHAT** – An enumeration of supported edge types. Currently we support:
+///   - `DerivedFrom` – Indicates that the target was derived from the source.
+///   - `Cites` – A citation relationship.
+///   - `InfluencedBy` – Denotes influence without direct derivation.
+/// * **HOW** – Used as the edge payload in the `petgraph::DiGraph`. The enum is
+///   `#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq)]` so it
+///   can be serialised when persisting the graph.
+/// * **WHY** – Encoding edge semantics as a dedicated enum makes provenance
+///   queries expressive and type‑safe, while keeping the on‑disk representation
+///   simple (a stringified variant).
 #[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq)]
 pub enum ProvenanceEdge {
+    /// The target node was *derived* from the source node.
     DerivedFrom,
+    /// The target node *cites* the source node.
     Cites,
+    /// The target node was *influenced* by the source node.
     InfluencedBy,
 }

+/// Errors that can arise when working with a `ProvenanceGraph`.
+///
+/// * **WHAT** – Enumerates possible failure modes:
+///   - Graph‑level errors (`GraphError`).
+///   - Serde JSON errors (`serde_json::Error`).
+///   - A lookup failure when a node identifier cannot be resolved.
+/// * **HOW** – Implements `std::error::Error` via the `thiserror::Error` derive
+///   macro, forwarding underlying error sources with `#[from]`.
+/// * **WHY** – A single error type simplifies error propagation for callers and
+///   retains the original context for debugging.
 #[derive(Debug, Error)]
 pub enum BubbleError {
     #[error("Graph error: {0}")]
@@ -23,6 +68,22 @@ pub enum BubbleError {
     NodeNotFound(Uuid),
 }

+/// Core structure that maintains an in‑memory DAG of provenance nodes and a
+/// persistent `DoltGraph` backend.
+///
+/// * **WHAT** – Holds:
+///   - `persistence`: The Dolt‑based storage implementation.
+///   - `dag`: A `petgraph::DiGraph` where node payloads are UUIDs and edges are
+///     `ProvenanceEdge`s.
+///   - `node_map`: A fast lookup map from node UUID to the corresponding
+///     `petgraph::NodeIndex`.
+/// * **HOW** – Provides methods to create nodes (`record_node`) and edges
+///   (`record_link`). These methods insert into the in‑memory graph and then
+///   persist the data in Dolt tables using simple `INSERT` statements followed by
+///   a `commit`.
+/// * **WHY** – Separating the transient in‑memory representation from durable
+///   storage gives fast runtime queries while guaranteeing that the provenance
+///   graph can survive process restarts and be inspected via Dolt tools.
 pub struct ProvenanceGraph {
     persistence: DoltGraph,
     dag: DiGraph<Uuid, ProvenanceEdge>,
@@ -30,6 +91,13 @@ pub struct ProvenanceGraph {
 }

 impl ProvenanceGraph {
+    /// Creates a new `ProvenanceGraph` backed by a pre‑initialised `DoltGraph`.
+    ///
+    /// * **WHAT** – Returns a fresh instance with empty in‑memory structures.
+    /// * **HOW** – Stores the supplied `persistence` and constructs a new `DiGraph`
+    ///   and empty `HashMap`.
+    /// * **WHY** – Allows callers to decide where the Dolt repository lives (e.g.
+    ///   a temporary directory for tests or a permanent location for production).
     pub fn new(persistence: DoltGraph) -> Self {
         Self {
             persistence,
@@ -38,33 +106,73 @@ impl ProvenanceGraph {
         }
     }

+    /// Records a provenance node with a unique `Uuid` and an associated address.
+    ///
+    /// * **WHAT** – Persists the node both in‑memory (`dag` + `node_map`) and in a
+    ///   Dolt table called `provenance_nodes`.
+    /// * **HOW** – If the node does not already exist, it is added to the DAG and a
+    ///   row is inserted via `persistence.insert_node`. A commit is performed with a
+    ///   descriptive message.
+    /// * **WHY** – Storing the address (typically a UCXL address) allows later
+    ///   resolution of where the artifact originated.
     pub fn record_node(&mut self, id: Uuid, address: &str) -> Result<(), BubbleError> {
         if !self.node_map.contains_key(&id) {
             let idx = self.dag.add_node(id);
             self.node_map.insert(id, idx);

-            // Persist
-            self.persistence.create_table("provenance_nodes", "id VARCHAR(255) PRIMARY KEY, address TEXT")
+            // Ensure the backing table exists – ignore errors if it already does.
+            self.persistence
+                .create_table(
+                    "provenance_nodes",
+                    "id VARCHAR(255) PRIMARY KEY, address TEXT",
+                )
                 .ok();

             let data = serde_json::json!({
                 "id": id.to_string(),
-                "address": address
+                "address": address,
             });
             self.persistence.insert_node("provenance_nodes", data)?;
-            self.persistence.commit(&format!("Record provenance node: {}", id))?;
+            self.persistence
+                .commit(&format!("Record provenance node: {}", id))?;
         }
         Ok(())
     }

-    pub fn record_link(&mut self, source: Uuid, target: Uuid, edge: ProvenanceEdge) -> Result<(), BubbleError> {
-        let source_idx = *self.node_map.get(&source).ok_or(BubbleError::NodeNotFound(source))?;
-        let target_idx = *self.node_map.get(&target).ok_or(BubbleError::NodeNotFound(target))?;
+    /// Records a directed edge between two existing nodes.
+    ///
+    /// * **WHAT** – Adds an edge of type `ProvenanceEdge` to the DAG and stores a
+    ///   corresponding row in the `provenance_links` Dolt table.
+    /// * **HOW** – Retrieves the `NodeIndex` for each UUID (erroring with
+    ///   `BubbleError::NodeNotFound` if missing), adds the edge to `dag`, then
+    ///   inserts a row containing a new link UUID, source/target IDs and the edge
+    ///   type as a string.
+    /// * **WHY** – Persisting links allows the full provenance graph to be queried
+    ///   outside the process, while the in‑memory representation keeps runtime
+    ///   operations cheap.
+    pub fn record_link(
+        &mut self,
+        source: Uuid,
+        target: Uuid,
+        edge: ProvenanceEdge,
+    ) -> Result<(), BubbleError> {
+        let source_idx = *self
+            .node_map
+            .get(&source)
+            .ok_or(BubbleError::NodeNotFound(source))?;
+        let target_idx = *self
+            .node_map
+            .get(&target)
+            .ok_or(BubbleError::NodeNotFound(target))?;

         self.dag.add_edge(source_idx, target_idx, edge);

-        // Persist
-        self.persistence.create_table("provenance_links", "id VARCHAR(255) PRIMARY KEY, source_id TEXT, target_id TEXT, edge_type TEXT")
+        // Ensure the links table exists.
+        self.persistence
+            .create_table(
+                "provenance_links",
+                "id VARCHAR(255) PRIMARY KEY, source_id TEXT, target_id TEXT, edge_type TEXT",
+            )
             .ok();

         let link_id = Uuid::new_v4();
@@ -72,12 +180,11 @@ impl ProvenanceGraph {
             "id": link_id.to_string(),
             "source_id": source.to_string(),
             "target_id": target.to_string(),
-            "edge_type": format!("{:?}", edge)
+            "edge_type": format!("{:?}", edge),
         });

         self.persistence.insert_node("provenance_links", data)?;
-        self.persistence.commit(&format!("Record provenance link: {} -> {}", source, target))?;
-
+        self.persistence
+            .commit(&format!("Record provenance link: {} -> {}", source, target))?;
         Ok(())
     }
 }
@@ -96,9 +203,15 @@ mod tests {
         let id1 = Uuid::new_v4();
         let id2 = Uuid::new_v4();

-        graph.record_node(id1, "ucxl://agent:1@proj:task/#/file1.txt").unwrap();
-        graph.record_node(id2, "ucxl://agent:1@proj:task/#/file2.txt").unwrap();
+        graph
+            .record_node(id1, "ucxl://agent:1@proj:task/#/file1.txt")
+            .unwrap();
+        graph
+            .record_node(id2, "ucxl://agent:1@proj:task/#/file2.txt")
+            .unwrap();

-        graph.record_link(id1, id2, ProvenanceEdge::DerivedFrom).unwrap();
+        graph
+            .record_link(id1, id2, ProvenanceEdge::DerivedFrom)
+            .unwrap();
     }
 }
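The `node_map` indirection documented above (UUID to index, with `NodeNotFound` on unknown ids) can be sketched without `petgraph`, using plain vectors. `MiniDag`, the `u64` ids, and the error-as-id convention are illustrative stand-ins, not the crate's API:

```rust
use std::collections::HashMap;

/// Std-only stand-in for `ProvenanceGraph`'s bookkeeping: ids map to indices
/// in an adjacency list via `node_map`, and linking unknown ids fails, like
/// `BubbleError::NodeNotFound`.
pub struct MiniDag {
    ids: Vec<u64>,
    node_map: HashMap<u64, usize>,
    edges: Vec<(usize, usize)>,
}

impl MiniDag {
    pub fn new() -> Self {
        Self { ids: Vec::new(), node_map: HashMap::new(), edges: Vec::new() }
    }

    /// Insert a node once; repeated calls with the same id are no-ops,
    /// mirroring the `contains_key` guard in `record_node`.
    pub fn record_node(&mut self, id: u64) {
        if !self.node_map.contains_key(&id) {
            self.node_map.insert(id, self.ids.len());
            self.ids.push(id);
        }
    }

    /// Link two existing nodes; returns the unknown id on failure.
    pub fn record_link(&mut self, source: u64, target: u64) -> Result<(), u64> {
        let s = *self.node_map.get(&source).ok_or(source)?;
        let t = *self.node_map.get(&target).ok_or(target)?;
        self.edges.push((s, t));
        Ok(())
    }

    pub fn node_count(&self) -> usize {
        self.ids.len()
    }
}
```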
@@ -1,26 +1,53 @@
 //! chrs-graph library implementation using Dolt for graph persistence.

 use chrono::Utc;
 use serde_json::Value;
 use std::{path::Path, process::Command};
 use thiserror::Error;
 use uuid::Uuid;

+/// Enumeration of possible errors that can arise while interacting with the `DoltGraph`.
+///
+/// Each variant wraps an underlying error source, making it easier for callers to
+/// understand the failure context and decide on remedial actions.
 #[derive(Error, Debug)]
 pub enum GraphError {
+    /// Propagates I/O errors from the standard library (e.g., filesystem access).
     #[error("IO error: {0}")]
     Io(#[from] std::io::Error),
+    /// Represents a failure when executing a Dolt command.
     #[error("Command failed: {0}")]
     CommandFailed(String),
+    /// Propagates JSON (de)serialization errors from `serde_json`.
     #[error("Serde JSON error: {0}")]
     SerdeJson(#[from] serde_json::Error),
+    /// A generic catch‑all for errors that don't fit the other categories.
     #[error("Other error: {0}")]
     Other(String),
 }

+/// Wrapper around a Dolt repository that stores graph data.
+///
+/// The `DoltGraph` type encapsulates a path to a Dolt repo and provides high‑level
+/// operations such as initializing the repo, committing changes, creating tables, and
+/// inserting nodes expressed as JSON objects.
+///
+/// # Architectural Rationale
+/// Dolt offers a Git‑like version‑controlled SQL database, which aligns well with CHORUS's
+/// need for an immutable, query‑able history of graph mutations. By wrapping Dolt commands in
+/// this struct we isolate the rest of the codebase from the command‑line interface, making the
+/// graph layer portable and easier to test.
 pub struct DoltGraph {
+    /// Filesystem path to the root of the Dolt repository.
     pub repo_path: std::path::PathBuf,
 }

 impl DoltGraph {
+    /// Initialise (or open) a Dolt repository at the given `path`.
+    ///
+    /// If the directory does not already contain a `.dolt` sub‑directory, the function runs
+    /// `dolt init` to create a new repository. Errors from the underlying command are wrapped in
+    /// `GraphError::CommandFailed`.
     pub fn init(path: &Path) -> Result<Self, GraphError> {
         if !path.join(".dolt").exists() {
             let status = Command::new("dolt")
@@ -39,6 +66,11 @@ impl DoltGraph {
         })
     }

+    /// Execute a Dolt command with the specified arguments.
+    ///
+    /// This helper centralises command execution and error handling. It runs `dolt` with the
+    /// provided argument slice, captures stdout/stderr, and returns `GraphError::CommandFailed`
+    /// when the command exits with a non‑zero status.
     fn run_cmd(&self, args: &[&str]) -> Result<(), GraphError> {
         let output = Command::new("dolt")
             .args(args)
@@ -51,16 +83,25 @@ impl DoltGraph {
         Ok(())
     }

+    /// Stage all changes and commit them with the provided `message`.
+    ///
+    /// The method first runs `dolt add -A` to stage modifications, then `dolt commit -m`.
+    /// Any failure in these steps propagates as a `GraphError`.
     pub fn commit(&self, message: &str) -> Result<(), GraphError> {
         self.run_cmd(&["add", "-A"])?;
         self.run_cmd(&["commit", "-m", message])?;
         Ok(())
     }

+    /// Create a SQL table within the Dolt repository.
+    ///
+    /// `schema` should be a comma‑separated column definition list (e.g., `"id INT PRIMARY KEY, name TEXT"`).
+    /// If the table already exists, the function treats it as a no‑op and returns `Ok(())`.
     pub fn create_table(&self, table_name: &str, schema: &str) -> Result<(), GraphError> {
         let query = format!("CREATE TABLE {} ({})", table_name, schema);
         if let Err(e) = self.run_cmd(&["sql", "-q", &query]) {
             if e.to_string().contains("already exists") {
+                // Table is already present – not an error for our use‑case.
                 return Ok(());
             }
             return Err(e);
@@ -69,6 +110,11 @@ impl DoltGraph {
         Ok(())
     }

+    /// Insert a node represented by a JSON object into the specified `table`.
+    ///
+    /// The JSON `data` must be an object where keys correspond to column names. Supported value
+    /// types are strings, numbers, booleans, and null. Complex JSON structures are rejected because
+    /// they cannot be directly mapped to SQL scalar columns.
    pub fn insert_node(&self, table: &str, data: Value) -> Result<(), GraphError> {
         let obj = data
             .as_object()
@@ -111,7 +157,11 @@ mod tests {
     #[test]
     fn test_init_create_table_and_commit() {
         let dir = TempDir::new().unwrap();
+        // Initialise a Dolt repository in a temporary directory.
         let graph = DoltGraph::init(dir.path()).expect("init failed");
-        graph.create_table("nodes", "id INT PRIMARY KEY, name TEXT").expect("create table failed");
+        // Create a simple `nodes` table.
+        graph
+            .create_table("nodes", "id INT PRIMARY KEY, name TEXT")
+            .expect("create table failed");
     }
 }
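Two string-level details from this file can be checked in isolation: the query `create_table` hands to `dolt sql -q` (the `format!` string is verbatim from the diff), and the scalar mapping `insert_node` documents, where only strings, numbers, booleans, and null are accepted. The `Scalar` enum and both function names are illustrative, not the crate's API:

```rust
/// The exact query string `create_table` builds for `dolt sql -q`.
pub fn create_table_query(table: &str, schema: &str) -> String {
    format!("CREATE TABLE {} ({})", table, schema)
}

/// Stand-in for `serde_json::Value` restricted to the scalar types
/// `insert_node` accepts; arrays and objects are rejected upstream.
pub enum Scalar {
    Str(String),
    Num(f64),
    Bool(bool),
    Null,
}

/// Render a scalar as a SQL literal (naive single-quote escaping for strings;
/// a real implementation would use bound parameters instead).
pub fn to_sql_literal(v: &Scalar) -> String {
    match v {
        Scalar::Str(s) => format!("'{}'", s.replace('\'', "''")),
        Scalar::Num(n) => n.to_string(),
        Scalar::Bool(b) => b.to_string(),
        Scalar::Null => "NULL".to_string(),
    }
}
```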
@@ -1,4 +1,4 @@
-// chrs-mail library implementation
+//! chrs-mail library implementation

 use std::path::Path;
 use chrono::{DateTime, Utc};
@@ -9,42 +9,84 @@ use thiserror::Error;
 use uuid::Uuid;

+/// Represents a mail message stored in the mailbox.
+///
+/// # Definition
+/// `Message` is a data structure that models a single mail exchange between two peers.
+/// It contains a unique identifier, sender and recipient identifiers, a topic string, a JSON payload,
+/// and timestamps for when the message was sent and optionally when it was read.
+///
+/// # Implementation Details
+/// - `id` is a **Uuid** generated by the caller to guarantee global uniqueness.
+/// - `payload` uses `serde_json::Value` so arbitrary JSON can be attached to the message.
+/// - `sent_at` and `read_at` are stored as `chrono::DateTime<Utc>` to provide timezone‑agnostic timestamps.
+///
+/// # Rationale
+/// This struct provides a lightweight, serialisable representation of a message that can be persisted
+/// in the SQLite‑backed mailbox (see `Mailbox`). Keeping the payload as JSON allows different subsystems
+/// of the CHORUS platform to embed domain‑specific data without requiring a rigid schema.
 #[derive(Debug, Serialize, Deserialize, Clone)]
 pub struct Message {
+    /// Globally unique identifier for the message.
     pub id: Uuid,
+    /// Identifier of the sending peer.
     pub from_peer: String,
+    /// Identifier of the receiving peer.
     pub to_peer: String,
+    /// Topic or channel of the message; used for routing/filters.
     pub topic: String,
+    /// Arbitrary JSON payload containing the message body.
     pub payload: JsonValue,
+    /// Timestamp (UTC) when the message was sent.
     pub sent_at: DateTime<Utc>,
+    /// Optional timestamp (UTC) when the recipient read the message.
     pub read_at: Option<DateTime<Utc>>,
 }

-/// Errors that can occur while using the Mailbox.
+/// Errors that can occur while using the `Mailbox`.
+///
+/// Each variant wraps an underlying error type from a dependency, allowing callers to
+/// react appropriately (e.g., retry on SQLite errors, surface serialization problems, etc.).
 #[derive(Debug, Error)]
 pub enum MailError {
+    /// Propagates any `rusqlite::Error` encountered while interacting with the SQLite DB.
     #[error("SQLite error: {0}")]
     Sqlite(#[from] rusqlite::Error),
+    /// Propagates JSON (de)serialization errors from `serde_json`.
     #[error("JSON serialization error: {0}")]
     Json(#[from] serde_json::Error),
+    /// Propagates UUID parsing errors.
     #[error("UUID parsing error: {0}")]
     Uuid(#[from] uuid::Error),
+    /// Propagates chrono parsing errors, primarily when deserialising timestamps from string.
     #[error("Chrono parsing error: {0}")]
     ChronoParse(#[from] chrono::ParseError),
 }

-/// Wrapper around a SQLite connection providing mail-box functionalities.
+/// Wrapper around a SQLite connection providing mailbox‑style functionalities.
+///
+/// The `Mailbox` abstracts a SQLite database that stores `Message` records. It offers a minimal
+/// API for opening/creating the DB, sending messages, receiving pending messages for a peer, and
+/// marking messages as read.
+///
+/// # Architectural Rationale
+/// Using SQLite (via `rusqlite`) provides a zero‑configuration, file‑based persistence layer that is
+/// portable across the various environments where CHORUS components may run. The wrapper isolates the
+/// rest of the codebase from raw SQL handling, ensuring a single place for schema evolution and error
+/// mapping.
 pub struct Mailbox {
     conn: Connection,
 }

 impl Mailbox {
+    /// Open (or create) a mailbox database at `path`.
+    ///
+    /// The function creates the SQLite file if it does not exist, enables WAL mode for better
+    /// concurrency, and ensures the `messages` table is present.
     pub fn open<P: AsRef<Path>>(path: P) -> Result<Self, MailError> {
         let conn = Connection::open(path)?;
-        // Enable WAL mode.
+        // Enable WAL mode for improved concurrency and durability.
         conn.pragma_update(None, "journal_mode", &"WAL")?;
-        // Create table.
+        // Create the `messages` table if it does not already exist.
         conn.execute(
             "CREATE TABLE IF NOT EXISTS messages (
                 id TEXT PRIMARY KEY,
@@ -61,6 +103,9 @@ impl Mailbox {
     }

+    /// Store a new message in the mailbox.
+    ///
+    /// The `payload` field is serialised to a JSON string before insertion. The `read_at` column is
+    /// initialised to `NULL` because the message has not yet been consumed.
     pub fn send(&self, msg: &Message) -> Result<(), MailError> {
         let payload_str = serde_json::to_string(&msg.payload)?;
         self.conn.execute(
@@ -79,6 +124,9 @@ impl Mailbox {
     }

+    /// Retrieve all unread messages addressed to `peer_id`.
+    ///
+    /// The query filters on `to_peer` and `read_at IS NULL`. Returned rows are transformed back into
+    /// `Message` structs, parsing the UUID, JSON payload, and RFC3339 timestamps.
     pub fn receive_pending(&self, peer_id: &str) -> Result<Vec<Message>, MailError> {
         let mut stmt = self.conn.prepare(
             "SELECT id, from_peer, to_peer, topic, payload, sent_at, read_at
@@ -97,16 +145,13 @@ impl Mailbox {
             // Parse Uuid
             let id = Uuid::parse_str(&id_str)
                 .map_err(|e| rusqlite::Error::FromSqlConversionFailure(0, rusqlite::types::Type::Text, Box::new(e)))?;

-            // Parse JSON
+            // Parse JSON payload
             let payload: JsonValue = serde_json::from_str(&payload_str)
                 .map_err(|e| rusqlite::Error::FromSqlConversionFailure(4, rusqlite::types::Type::Text, Box::new(e)))?;

-            // Parse Timestamps
+            // Parse timestamps
             let sent_at = DateTime::parse_from_rfc3339(&sent_at_str)
                 .map_err(|e| rusqlite::Error::FromSqlConversionFailure(5, rusqlite::types::Type::Text, Box::new(e)))?
                 .with_timezone(&Utc);

             let read_at = match read_at_opt {
                 Some(s) => Some(
                     DateTime::parse_from_rfc3339(&s)
@@ -135,6 +180,8 @@ impl Mailbox {
     }

+    /// Mark a message as read by setting its `read_at` timestamp.
+    ///
+    /// The current UTC time is stored in the `read_at` column for the row with the matching `id`.
     pub fn mark_read(&self, msg_id: Uuid) -> Result<(), MailError> {
         let now = Utc::now().to_rfc3339();
         self.conn.execute(
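The unread filter documented for `receive_pending` (`to_peer` match plus `read_at IS NULL`) and the `mark_read` transition can be modelled over plain data. `PendingMsg` and both helper names are hypothetical stand-ins for the SQL the crate actually runs:

```rust
/// Minimal projection of a `messages` row, enough to express the filter.
pub struct PendingMsg {
    pub to_peer: String,
    pub read_at: Option<String>, // RFC3339 timestamp once read
}

/// Mirrors `WHERE to_peer = ?1 AND read_at IS NULL`.
pub fn is_pending(msg: &PendingMsg, peer_id: &str) -> bool {
    msg.to_peer == peer_id && msg.read_at.is_none()
}

/// Mirrors `mark_read`: record the read timestamp, after which the
/// message no longer matches the pending filter.
pub fn mark_read(msg: &mut PendingMsg, now_rfc3339: &str) {
    msg.read_at = Some(now_rfc3339.to_string());
}
```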
@@ -1,3 +1,18 @@
+//! chrs-poc crate provides an end‑to‑end proof‑of‑concept demonstration of the CHORUS
+//! system. It wires together the core components:
+//!
+//! * `Mailbox` – message‑passing layer (`chrs_mail`).
+//! * `DoltGraph` – persistent state graph (`chrs_graph`).
+//! * `ProvenanceGraph` – provenance tracking (`chrs_bubble`).
+//! * `SecretSentinel` – secret scrubbing (`chrs_shhh`).
+//! * `CurationEngine` – decision record curation (`chrs_slurp`).
+//!
+//! The flow mirrors a realistic task lifecycle: a client dispatches a task
+//! message, an agent processes it, generates reasoning (with a deliberately
+//! injected secret), the secret is scrubbed, a decision record is curated, and
+//! provenance links are recorded. The final state is persisted in a Dolt
+//! repository.
+
 use chrs_bubble::{ProvenanceGraph, ProvenanceEdge};
 use chrs_graph::DoltGraph;
 use chrs_mail::{Mailbox, Message};
@@ -8,11 +23,25 @@ use std::fs;
 use std::path::Path;
 use uuid::Uuid;

+/// Entry point for the proof‑of‑concept binary.
+///
+/// The function performs the following high‑level steps, each documented inline:
+/// 1. Sets up a temporary workspace.
+/// 2. Initialises all required components.
+/// 3. Simulates a client sending an audit task to an agent.
+/// 4. Processes the task as the agent would, including secret scrubbing.
+/// 5. Curates a `DecisionRecord` via the SLURP engine.
+/// 6. Records provenance relationships in the BUBBLE graph.
+/// 7. Prints a success banner and the path to the persisted Dolt state.
+///
+/// Errors from any component propagate via `?` and are reported as a boxed error.
 #[tokio::main]
 async fn main() -> Result<(), Box<dyn std::error::Error>> {
     println!("=== CHORUS End-to-End Proof of Concept ===");

+    // ---------------------------------------------------------------------
+    // 1. Setup paths
+    // ---------------------------------------------------------------------
     let base_path = Path::new("/tmp/chrs_poc");
     if base_path.exists() {
         fs::remove_dir_all(base_path)?;
@@ -23,20 +52,25 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
     let graph_path = base_path.join("state_graph");
     fs::create_dir_all(&graph_path)?;

-    // 2. Initialize Components
+    // ---------------------------------------------------------------------
+    // 2. Initialise Components
+    // ---------------------------------------------------------------------
     let mailbox = Mailbox::open(&mail_path)?;
     let persistence = DoltGraph::init(&graph_path)?;
     let mut provenance = ProvenanceGraph::new(persistence);

-    // We need a fresh DoltGraph handle for SLURP because ProvenanceGraph moved 'persistence'
-    // In a real app, we'd use Arc<Mutex<DoltGraph>> or similar.
+    // A separate graph handle is needed for the SLURP engine because the
+    // provenance graph consumes the original `DoltGraph`. In production we would
+    // share via `Arc<Mutex<>>`.
     let slurp_persistence = DoltGraph::init(&graph_path)?;
     let curator = CurationEngine::new(slurp_persistence);
     let sentinel = SecretSentinel::new_default();

     println!("[POC] Components initialized.");

+    // ---------------------------------------------------------------------
+    // 3. Dispatch Task (simulate client sending message to Agent-A)
+    // ---------------------------------------------------------------------
     let task_id = Uuid::new_v4();
     let task_msg = Message {
         id: task_id,
@@ -50,19 +84,21 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
     mailbox.send(&task_msg)?;
     println!("[POC] Task dispatched to Agent-A: {}", task_id);

+    // ---------------------------------------------------------------------
+    // 4. Process Task (Agent-A logic)
+    // ---------------------------------------------------------------------
     let pending = mailbox.receive_pending("agent-a")?;
     for msg in pending {
         println!("[POC] Agent-A received task: {}", msg.topic);

-        // Generate reasoning with an accidental secret
+        // Simulated reasoning that accidentally contains a secret.
         let raw_reasoning = "Audit complete. Verified UCXL address parsing. My secret key is sk-1234567890abcdef1234567890abcdef1234567890abcdef";

-        // 5. SHHH: Scrub secrets
+        // 5. SHHH: Scrub secrets from the reasoning output.
         let clean_reasoning = sentinel.scrub_text(raw_reasoning);
         println!("[POC] SHHH scrubbed reasoning: {}", clean_reasoning);

-        // 6. SLURP: Create and Curate Decision Record
+        // 6. SLURP: Create and curate a DecisionRecord.
         let dr = DecisionRecord {
             id: Uuid::new_v4(),
             author: "agent-a".into(),
@@ -72,7 +108,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
         };
         curator.curate_decision(dr.clone())?;

-        // 7. BUBBLE: Record Provenance
+        // 7. BUBBLE: Record provenance relationships.
         provenance.record_node(task_id, "ucxl://client:user@poc:task/#/audit_request")?;
         provenance.record_node(dr.id, "ucxl://agent-a:worker@poc:task/#/audit_result")?;
         provenance.record_link(dr.id, task_id, ProvenanceEdge::DerivedFrom)?;
@@ -82,8 +118,10 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
         mailbox.mark_read(msg.id)?;
     }

-    println!("
-=== POC SUCCESSFUL ===");
+    // ---------------------------------------------------------------------
+    // 8. Final output
+    // ---------------------------------------------------------------------
+    println!("\n=== POC SUCCESSFUL ===");
     println!("Final State is persisted in Dolt at: {:?}", graph_path);

     Ok(())
@@ -1,23 +1,63 @@
use regex::Regex;
use lazy_static::lazy_static;
//! # chrs-shhh
//!
//! This crate provides utilities for redacting sensitive information from text.
//! It defines a set of **redaction rules** that match secret patterns (like API keys)
//! and replace them with a placeholder. The crate is deliberately lightweight – it
//! only depends on `regex` and `lazy_static` – and can be embedded in any larger
//! application that needs to scrub logs or user‑provided data before storage or
//! transmission.
use regex::Regex;

/// Represents a single rule used to redact a secret.
///
/// * **WHAT** – The name of the rule (e.g. "OpenAI API Key"), the compiled
///   regular‑expression pattern that matches the secret, and the replacement string
///   that will be inserted.
/// * **HOW** – The `pattern` is a `Regex` that is applied to an input string. When a
///   match is found the `replacement` is inserted using `replace_all`.
/// * **WHY** – Decoupling the rule definition from the redaction logic makes the
///   sanitizer extensible; new patterns can be added without changing the core
///   implementation.
pub struct RedactionRule {
    /// Human‑readable name for the rule.
    pub name: String,
    /// Compiled regular expression that matches the secret.
    pub pattern: Regex,
    /// Text that will replace the matched secret.
    pub replacement: String,
}

/// The main entry point for secret detection and redaction.
///
/// * **WHAT** – Holds a collection of `RedactionRule`s.
/// * **HOW** – Provides methods to scrub a string (`scrub_text`) and to simply
///   check whether any secret is present (`contains_secrets`).
/// * **WHY** – Centralising the rules in a struct enables reuse and makes testing
///   straightforward.
pub struct SecretSentinel {
    rules: Vec<RedactionRule>,
}

lazy_static! {
    /// Matches OpenAI API keys of the form `sk-<48 alphanumeric chars>`.
    static ref OPENAI_KEY: Regex = Regex::new(r"sk-[a-zA-Z0-9]{48}").unwrap();
    /// Matches AWS access keys that start with `AKIA` followed by 16 uppercase letters or digits.
    static ref AWS_KEY: Regex = Regex::new(r"AKIA[0-9A-Z]{16}").unwrap();
    /// Generic secret pattern that captures common keywords like password, secret, key or token.
    /// The capture group (`$1`) is retained so that the surrounding identifier is preserved.
    static ref GENERIC_SECRET: Regex = Regex::new(r"(?i)(password|secret|key|token)\s*[:=]\s*[^\s]+").unwrap();
}

impl SecretSentinel {
    /// Constructs a `SecretSentinel` pre‑populated with a sensible default set of rules.
    ///
    /// * **WHAT** – Returns a sentinel containing three rules: OpenAI, AWS and a generic
    ///   secret matcher.
    /// * **HOW** – Instantiates `RedactionRule`s using the lazily‑initialised regexes
    ///   above and stores them in the `rules` vector.
    /// * **WHY** – Provides a ready‑to‑use configuration for typical development
    ///   environments while still allowing callers to create custom instances.
    pub fn new_default() -> Self {
        let rules = vec![
            RedactionRule {
@@ -33,20 +73,36 @@ impl SecretSentinel {
            RedactionRule {
                name: "Generic Secret".into(),
                pattern: GENERIC_SECRET.clone(),
                // $1 refers to the captured keyword (password, secret, …).
                replacement: "$1: [REDACTED]".into(),
            },
        ];
        Self { rules }
    }

    /// Redacts all secrets found in `input` according to the configured rules.
    ///
    /// * **WHAT** – Returns a new `String` where each match has been replaced.
    /// * **HOW** – Iterates over the rules and applies `replace_all` for each.
    /// * **WHY** – Performing the replacements sequentially ensures that overlapping
    ///   patterns are handled deterministically.
    pub fn scrub_text(&self, input: &str) -> String {
        let mut scrubbed = input.to_string();
        for rule in &self.rules {
            scrubbed = rule.pattern.replace_all(&scrubbed, &rule.replacement).to_string();
            scrubbed = rule
                .pattern
                .replace_all(&scrubbed, &rule.replacement)
                .to_string();
        }
        scrubbed
    }

    /// Checks whether any of the configured rules match `input`.
    ///
    /// * **WHAT** – Returns `true` if at least one rule's pattern matches.
    /// * **HOW** – Uses `Iterator::any` over `self.rules` with `is_match`.
    /// * **WHY** – A quick predicate useful for short‑circuiting logging or error
    ///   handling before performing the full redaction.
    pub fn contains_secrets(&self, input: &str) -> bool {
        self.rules.iter().any(|rule| rule.pattern.is_match(input))
    }
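The `scrub_text` loop above applies each rule in turn with `replace_all`. A minimal std-only sketch of that sequential pattern, substituting plain substring replacement for the crate's `regex` matching (`Rule` and `scrub` here are hypothetical names, not the crate's API):

```rust
// Sketch of sequential redaction: each rule is applied to the result of the
// previous one, so overlapping matches resolve deterministically.
struct Rule {
    needle: &'static str,
    replacement: &'static str,
}

fn scrub(input: &str, rules: &[Rule]) -> String {
    let mut scrubbed = input.to_string();
    for rule in rules {
        // Plain substring replacement stands in for Regex::replace_all.
        scrubbed = scrubbed.replace(rule.needle, rule.replacement);
    }
    scrubbed
}

fn main() {
    let rules = [Rule { needle: "abc123", replacement: "[REDACTED]" }];
    let out = scrub("token=abc123 sent", &rules);
    assert_eq!(out, "token=[REDACTED] sent");
    println!("{}", out);
}
```

The order of the `rules` slice matters: a later rule sees the placeholders inserted by earlier ones, which is the same deterministic behaviour the rustdoc above claims for `scrub_text`.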
@@ -1,19 +1,82 @@
use chrs_graph::{DoltGraph, GraphError};
use ucxl::UCXLAddress;
//! # chrs-slurp
//!
//! **Intelligence Crate** – Provides the *curation* layer for the CHORUS system.
//!
//! The purpose of this crate is to take **Decision Records** generated by autonomous
//! agents, validate them, and persist them into the graph database. It isolates the
//! validation and storage concerns so that other components (e.g. provenance, security)
//! can work with a clean, audited data model.
//!
//! ## Architectural Rationale
//!
//! * **Separation of concerns** – Agents produce raw decisions; this crate is the
//!   single source of truth for how those decisions are stored.
//! * **Auditability** – Persisting to a Dolt‑backed graph means each decision is versioned
//!   and can be replayed, satisfying CHORUS's requirement for reproducible
//!   reasoning.
//! * **Extensibility** – The `CurationEngine` can be extended with additional validation
//!   steps (e.g. policy checks) without touching the agents themselves.
//!
//! The crate depends on:
//! * `chrs-graph` – a thin wrapper around a Dolt‑backed graph implementation.
//! * `ucxl` – for addressing external knowledge artefacts.
//! * `chrono`, `serde`, `uuid` – standard utilities for timestamps, (de)serialization
//!   and unique identifiers.
//!
//! ---
//!
//! # Public API
//!
//! The public surface consists of three items:
//!
//! * `DecisionRecord` – data structure representing a curated decision.
//! * `SlurpError` – enumeration of possible errors while curating.
//! * `CurationEngine` – the engine that validates and persists `DecisionRecord`s.
//!
//! Each item is documented in‑line below.

use chrono::{DateTime, Utc};
use chrs_graph::{DoltGraph, GraphError};
use serde::{Deserialize, Serialize};
use thiserror::Error;
use ucxl::UCXLAddress;
use uuid::Uuid;

/// A record representing a curated decision within the CHORUS system.
///
/// # What
///
/// This struct captures the essential metadata of a decision made by an
/// autonomous agent, including who authored it, the reasoning behind it, any
/// citations to external knowledge, and a timestamp.
///
/// # Why
///
/// Decision records are persisted in the graph database so that downstream
/// components (e.g., provenance analysis) can reason about the provenance and
/// justification of actions. Storing them as a dedicated table enables
/// reproducibility and auditability across the CHORUS architecture.
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct DecisionRecord {
    /// Unique identifier for the decision.
    pub id: Uuid,
    /// Identifier of the agent or human that authored the decision.
    pub author: String,
    /// Free‑form textual reasoning explaining the decision.
    pub reasoning: String,
    pub citations: Vec<String>, // Serialized UCXL addresses
    /// Serialized UCXL addresses that serve as citations for the decision.
    /// Each entry should be a valid `UCXLAddress` string.
    pub citations: Vec<String>,
    /// The moment the decision was created.
    pub timestamp: DateTime<Utc>,
}

/// Errors that can arise while slurping (curating) a decision record.
///
/// * `Graph` – underlying graph database operation failed.
/// * `Serde` – (de)serialization of the decision data failed.
/// * `ValidationError` – a supplied citation could not be parsed as a
///   `UCXLAddress`.
#[derive(Debug, Error)]
pub enum SlurpError {
    #[error("Graph error: {0}")]
@@ -24,39 +87,70 @@ pub enum SlurpError {
    ValidationError(String),
}

/// Core engine that validates and persists `DecisionRecord`s into the
/// Dolt‑backed graph.
///
/// # Why
///
/// Centralising curation logic ensures a single place for validation and
/// storage semantics, keeping the rest of the codebase agnostic of the graph
/// implementation details.
pub struct CurationEngine {
    graph: DoltGraph,
}

impl CurationEngine {
    /// Creates a new `CurationEngine` bound to the supplied `DoltGraph`.
    ///
    /// The engine holds a reference to the graph for the lifetime of the
    /// instance; callers are responsible for providing a correctly initialised
    /// graph.
    pub fn new(graph: DoltGraph) -> Self {
        Self { graph }
    }

    /// Validates the citations in `dr` and persists the decision into the
    /// graph.
    ///
    /// The method performs three steps:
    /// 1. **Citation validation** – each citation string is parsed into a
    ///    `UCXLAddress`. Invalid citations produce a `ValidationError`.
    /// 2. **Table assurance** – attempts to create the `curated_decisions`
    ///    table if it does not already exist. Errors are ignored because the
    ///    table may already be present.
    /// 3. **Insertion & commit** – the decision is serialised to JSON and
    ///    inserted as a node, then the graph transaction is committed.
    ///
    /// # Errors
    /// Propagates any `GraphError`, `serde_json::Error`, or custom
    /// validation failures.
    pub fn curate_decision(&self, dr: DecisionRecord) -> Result<(), SlurpError> {
        // 1. Validate Citations
        for citation in &dr.citations {
            use std::str::FromStr;
            UCXLAddress::from_str(citation)
                .map_err(|e| SlurpError::ValidationError(format!("Invalid citation {}: {}", citation, e)))?;
            UCXLAddress::from_str(citation).map_err(|e| {
                SlurpError::ValidationError(format!("Invalid citation {}: {}", citation, e))
            })?;
        }

        // 2. Log DR into Graph (create table if needed handled by insert_node in future,
        // but for now let's ensure it's there).
        // If it fails because it exists, that's fine.
        let _ = self.graph.create_table("curated_decisions", "id VARCHAR(255) PRIMARY KEY, author TEXT, reasoning TEXT, citations TEXT, curated_at TEXT");
        // 2. Ensure the table exists; ignore the error if it already does.
        let _ = self.graph.create_table(
            "curated_decisions",
            "id VARCHAR(255) PRIMARY KEY, author TEXT, reasoning TEXT, citations TEXT, curated_at TEXT",
        );

        // 3. Serialize the record and insert it.
        let data = serde_json::json!({
            "id": dr.id.to_string(),
            "author": dr.author,
            "reasoning": dr.reasoning,
            "citations": serde_json::to_string(&dr.citations)?,
            "curated_at": dr.timestamp.to_rfc3339()
            "curated_at": dr.timestamp.to_rfc3339(),
        });

        self.graph.insert_node("curated_decisions", data)?;
        self.graph.commit(&format!("Curation complete for DR: {}", dr.id))?;

        self.graph
            .commit(&format!("Curation complete for DR: {}", dr.id))?;
        Ok(())
    }
}
@@ -66,6 +160,8 @@ mod tests {
    use super::*;
    use tempfile::TempDir;

    /// Integration test that exercises the full curation flow on a temporary
    /// Dolt graph.
    #[test]
    fn test_curation_flow() {
        let dir = TempDir::new().unwrap();
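Step 1 of `curate_decision` above fails fast: the first unparsable citation aborts the whole record before anything touches the graph. A std-only sketch of that validation shape, with a hypothetical `parse_citation` standing in for `UCXLAddress::from_str`:

```rust
// `parse_citation` is a stand-in for UCXLAddress::from_str: it either accepts
// the address or returns an error describing the bad citation.
fn parse_citation(s: &str) -> Result<&str, String> {
    if s.starts_with("ucxl://") {
        Ok(s)
    } else {
        Err(format!("Invalid citation {}", s))
    }
}

// Mirrors the loop in curate_decision: validate every citation up front,
// propagating the first failure with `?` before any persistence happens.
fn validate_all(citations: &[&str]) -> Result<(), String> {
    for c in citations {
        parse_citation(c)?;
    }
    Ok(())
}

fn main() {
    assert!(validate_all(&["ucxl://a", "ucxl://b"]).is_ok());
    assert!(validate_all(&["ucxl://a", "not-an-address"]).is_err());
    println!("validation sketch ok");
}
```

Validating before the `create_table`/`insert_node`/`commit` steps means a rejected record leaves no partial state behind, which matches the auditability goal stated in the crate docs.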
@@ -1,20 +1,61 @@
use chrs_mail::{Mailbox, Message};
use chrono::Utc;
//! The chrs-sync crate provides synchronization utilities for the CHORUS system.
//!
//! It uses a `Mailbox` for message passing between peers and a Dolt repository
//! to track state hashes. The primary abstraction is `SyncManager`, which can
//! broadcast the current repository hash to peers and handle incoming sync
//! signals.
use chrs_mail::{Mailbox, Message};
use std::path::PathBuf;
use std::process::Command;
use uuid::Uuid;
use std::path::PathBuf;

/// Manages synchronization of a Dolt repository across peers.
///
/// # Fields
/// * `mailbox` – The `Mailbox` instance used to send and receive messages.
/// * `repo_path` – Filesystem path to the local Dolt repository.
///
/// # Rationale
/// The CHORUS architecture relies on deterministic state replication. By
/// broadcasting the latest commit hash (`sync_signal`) each peer can decide
/// whether to pull updates. This struct encapsulates that behaviour, keeping the
/// rest of the system agnostic of the underlying VCS commands.
pub struct SyncManager {
    mailbox: Mailbox,
    repo_path: PathBuf,
}

impl SyncManager {
    /// Creates a new `SyncManager`.
    ///
    /// # Parameters
    /// * `mailbox` – An already‑opened `Mailbox` for peer communication.
    /// * `repo_path` – Path to the Dolt repository that should be kept in sync.
    ///
    /// Returns a fully‑initialised manager ready to broadcast or handle sync
    /// signals.
    pub fn new(mailbox: Mailbox, repo_path: PathBuf) -> Self {
        Self { mailbox, repo_path }
    }

    pub fn broadcast_state(&self, from_peer: &str, to_peer: &str) -> Result<(), Box<dyn std::error::Error>> {
    /// Broadcasts the current repository state to a remote peer.
    ///
    /// The method executes `dolt log -n 1 --format %H` to obtain the most recent
    /// commit hash, constructs a `Message` with topic `"sync_signal"` and sends it
    /// via the mailbox.
    ///
    /// * `from_peer` – Identifier of the sender.
    /// * `to_peer` – Identifier of the intended recipient.
    ///
    /// # Errors
    /// Returns any I/O or command‑execution error wrapped in a boxed `dyn
    /// Error`.
    pub fn broadcast_state(
        &self,
        from_peer: &str,
        to_peer: &str,
    ) -> Result<(), Box<dyn std::error::Error>> {
        // Get current dolt hash
        let output = Command::new("dolt")
            .args(&["log", "-n", "1", "--format", "%H"])
@@ -34,10 +75,23 @@ impl SyncManager {
        };

        self.mailbox.send(&msg)?;
        println!("Broadcasted sync signal: {} from {}", current_hash, from_peer);
        println!(
            "Broadcasted sync signal: {} from {}",
            current_hash, from_peer
        );
        Ok(())
    }

    /// Handles an incoming `sync_signal` message.
    ///
    /// If the message topic is not `"sync_signal"` the function returns `Ok(())`
    /// immediately. Otherwise it extracts the remote commit hash and attempts a
    /// `dolt pull origin` to bring the local repository up‑to‑date. In a real
    /// P2P deployment the remote URL would be derived from the sender, but the
    /// current implementation uses the default remote configuration.
    ///
    /// # Errors
    /// Propagates any command execution failures.
    pub fn handle_sync_signal(&self, msg: &Message) -> Result<(), Box<dyn std::error::Error>> {
        if msg.topic != "sync_signal" {
            return Ok(());
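The hash-capture step in `broadcast_state` shells out and trims the command's stdout. A runnable sketch of that pattern using `std::process::Command`, with `echo` substituted for the real `dolt log -n 1 --format %H` invocation so it runs without a Dolt install (`current_hash` is an illustrative name):

```rust
use std::process::Command;

// Run a child process, capture stdout, and trim the trailing newline —
// the same shape broadcast_state uses to obtain the latest commit hash.
// `echo deadbeef` stands in for `dolt log -n 1 --format %H`.
fn current_hash() -> std::io::Result<String> {
    let output = Command::new("echo").arg("deadbeef").output()?;
    Ok(String::from_utf8_lossy(&output.stdout).trim().to_string())
}

fn main() -> std::io::Result<()> {
    let hash = current_hash()?;
    assert_eq!(hash, "deadbeef");
    println!("sync_signal payload would carry: {}", hash);
    Ok(())
}
```

Trimming matters: `dolt log` (like `echo`) emits a trailing newline, and an untrimmed hash would make peers' equality checks on the `sync_signal` payload fail spuriously.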
15
logs/.fdbbbc7a24b00979ca9dea2720178eb798c332a1-audit.json
Normal file
@@ -0,0 +1,15 @@
{
  "keep": {
    "days": true,
    "amount": 14
  },
  "auditLog": "/home/tony/rust/projects/reset/CHORUS/logs/.fdbbbc7a24b00979ca9dea2720178eb798c332a1-audit.json",
  "files": [
    {
      "date": 1772509985271,
      "name": "/home/tony/rust/projects/reset/CHORUS/logs/mcp-puppeteer-2026-03-03.log",
      "hash": "286a30d8143c8f454bd29cbdf024c0c200b33224f63473e4573b57a44bcd24ae"
    }
  ],
  "hashType": "sha256"
}
21
logs/mcp-puppeteer-2026-03-03.log
Normal file
@@ -0,0 +1,21 @@
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 14:53:05.305"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 14:53:05.306"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 14:53:15.193"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:13:25.848"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:13:25.849"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:14:29.005"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:24:21.669"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:24:21.670"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:24:23.947"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:31:31.620"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:31:31.622"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:31:37.753"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:34:39.279"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:34:39.280"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:34:40.724"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:38:24.580"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:38:24.582"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:38:27.355"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:39:42.436"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:39:42.437"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:39:53.406"}
1
target/.rustc_info.json
Normal file
@@ -0,0 +1 @@
{"rustc_fingerprint":15256376128064635560,"outputs":{"7971740275564407648":{"success":true,"status":"","code":0,"stdout":"___\nlib___.rlib\nlib___.so\nlib___.so\nlib___.a\nlib___.so\n/home/tony/.rustup/toolchains/stable-x86_64-unknown-linux-gnu\noff\npacked\nunpacked\n___\ndebug_assertions\npanic=\"unwind\"\nproc_macro\ntarget_abi=\"\"\ntarget_arch=\"x86_64\"\ntarget_endian=\"little\"\ntarget_env=\"gnu\"\ntarget_family=\"unix\"\ntarget_feature=\"fxsr\"\ntarget_feature=\"sse\"\ntarget_feature=\"sse2\"\ntarget_has_atomic=\"16\"\ntarget_has_atomic=\"32\"\ntarget_has_atomic=\"64\"\ntarget_has_atomic=\"8\"\ntarget_has_atomic=\"ptr\"\ntarget_os=\"linux\"\ntarget_pointer_width=\"64\"\ntarget_vendor=\"unknown\"\nunix\n","stderr":""},"17747080675513052775":{"success":true,"status":"","code":0,"stdout":"rustc 1.87.0 (17067e9ac 2025-05-09)\nbinary: rustc\ncommit-hash: 17067e9ac6d7ecb70e50f92c1944e545188d2359\ncommit-date: 2025-05-09\nhost: x86_64-unknown-linux-gnu\nrelease: 1.87.0\nLLVM version: 20.1.1\n","stderr":""}},"successes":{}}
1
target/.rustdoc_fingerprint.json
Normal file
@@ -0,0 +1 @@
{"rustc_vv":"rustc 1.87.0 (17067e9ac 2025-05-09)\nbinary: rustc\ncommit-hash: 17067e9ac6d7ecb70e50f92c1944e545188d2359\ncommit-date: 2025-05-09\nhost: x86_64-unknown-linux-gnu\nrelease: 1.87.0\nLLVM version: 20.1.1\n"}
3
target/CACHEDIR.TAG
Normal file
@@ -0,0 +1,3 @@
Signature: 8a477f597d28d172789f06886806bc55
# This file is a cache directory tag created by cargo.
# For information about cache directory tags see https://bford.info/cachedir/
0
target/debug/.cargo-lock
Normal file
@@ -0,0 +1 @@
b2b03782ee782d2c
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"","declared_features":"","target":0,"profile":0,"path":0,"deps":[[966925859616469517,"build_script_build",false,877995191091226123]],"local":[{"RerunIfChanged":{"output":"debug/build/ahash-3e4ac5f2a9eb4c58/output","paths":["build.rs"]}}],"rustflags":[],"config":0,"compile_kind":0}
@@ -0,0 +1 @@
0b667e77f9432f0c
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[\"atomic-polyfill\", \"compile-time-rng\", \"const-random\", \"default\", \"getrandom\", \"nightly-arm-aes\", \"no-rng\", \"runtime-rng\", \"serde\", \"std\"]","target":17883862002600103897,"profile":2225463790103693989,"path":15500462139455470991,"deps":[[5398981501050481332,"version_check",false,18375983552046171052]],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/ahash-6d979e8091fade67/dep-build-script-build-script-build","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
BIN
target/debug/.fingerprint/ahash-76b39812343abad0/dep-lib-ahash
Normal file
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
2fc16307e7a770a1
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[\"atomic-polyfill\", \"compile-time-rng\", \"const-random\", \"default\", \"getrandom\", \"nightly-arm-aes\", \"no-rng\", \"runtime-rng\", \"serde\", \"std\"]","target":8470944000320059508,"profile":2241668132362809309,"path":1076126273874160872,"deps":[[966925859616469517,"build_script_build",false,3183333477403046066],[3722963349756955755,"once_cell",false,5072081620029175829],[7667230146095136825,"cfg_if",false,17019820836644139335],[17375358419629610217,"zerocopy",false,3569098339783736861]],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/ahash-76b39812343abad0/dep-lib-ahash","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
BIN
target/debug/.fingerprint/ahash-c93df6ab5026e370/dep-lib-ahash
Normal file
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
ce752f9bcb753e84
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[\"atomic-polyfill\", \"compile-time-rng\", \"const-random\", \"default\", \"getrandom\", \"nightly-arm-aes\", \"no-rng\", \"runtime-rng\", \"serde\", \"std\"]","target":8470944000320059508,"profile":15657897354478470176,"path":1076126273874160872,"deps":[[966925859616469517,"build_script_build",false,3183333477403046066],[3722963349756955755,"once_cell",false,13894614411759489994],[7667230146095136825,"cfg_if",false,13273392571467671403],[17375358419629610217,"zerocopy",false,10689620513817343668]],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/ahash-c93df6ab5026e370/dep-lib-ahash","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
BIN
target/debug/.fingerprint/ahash-cb61483b0241026b/dep-lib-ahash
Normal file
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
2ce34c1176049056
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[\"atomic-polyfill\", \"compile-time-rng\", \"const-random\", \"default\", \"getrandom\", \"nightly-arm-aes\", \"no-rng\", \"runtime-rng\", \"serde\", \"std\"]","target":8470944000320059508,"profile":2241668132362809309,"path":1076126273874160872,"deps":[[966925859616469517,"build_script_build",false,3183333477403046066],[3722963349756955755,"once_cell",false,9688418222178548709],[7667230146095136825,"cfg_if",false,17019820836644139335],[17375358419629610217,"zerocopy",false,3569098339783736861]],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/ahash-cb61483b0241026b/dep-lib-ahash","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
c78c2f31a626d615
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"perf-literal\", \"std\"]","declared_features":"[\"default\", \"logging\", \"perf-literal\", \"std\"]","target":7534583537114156500,"profile":15657897354478470176,"path":13713000725416766643,"deps":[[1363051979936526615,"memchr",false,3045848091962468255]],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/aho-corasick-2e495f0cb4e7b702/dep-lib-aho_corasick","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
309e4cbc95c7514b
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"perf-literal\", \"std\"]","declared_features":"[\"default\", \"logging\", \"perf-literal\", \"std\"]","target":7534583537114156500,"profile":2241668132362809309,"path":13713000725416766643,"deps":[[1363051979936526615,"memchr",false,8804445397282048046]],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/aho-corasick-839afa8844e27b73/dep-lib-aho_corasick","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
30f3873b30636c8d
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"alloc\"]","declared_features":"[\"alloc\", \"default\", \"fresh-rust\", \"nightly\", \"serde\", \"std\"]","target":5388200169723499962,"profile":187265481308423917,"path":14211365667724319390,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/allocator-api2-3d50966576ccee68/dep-lib-allocator_api2","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
9b4c5d7bd2c7f8e1
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"alloc\"]","declared_features":"[\"alloc\", \"default\", \"fresh-rust\", \"nightly\", \"serde\", \"std\"]","target":5388200169723499962,"profile":12994027242049262075,"path":14211365667724319390,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/allocator-api2-824e74dd2449b33c/dep-lib-allocator_api2","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
16cc041b04df1b37
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[]","target":6962977057026645649,"profile":2225463790103693989,"path":8656731720886155905,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/autocfg-770003ab709e53c1/dep-lib-autocfg","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
48a15db9394058cb
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[\"arbitrary\", \"bytemuck\", \"example_generated\", \"serde\", \"serde_core\", \"std\"]","target":7691312148208718491,"profile":15657897354478470176,"path":9884266009885954648,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/bitflags-30d5974a3f0545f1/dep-lib-bitflags","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
0b351a49aa4f73cb
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"default\"]","declared_features":"[\"compiler_builtins\", \"core\", \"default\", \"example_generated\", \"rustc-dep-of-std\"]","target":12919857562465245259,"profile":15657897354478470176,"path":16315214553382879237,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/bitflags-40752e70762f6d2c/dep-lib-bitflags","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
b7534994b82da610
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"default\"]","declared_features":"[\"compiler_builtins\", \"core\", \"default\", \"example_generated\", \"rustc-dep-of-std\"]","target":12919857562465245259,"profile":2241668132362809309,"path":16315214553382879237,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/bitflags-57ee71d4871509d4/dep-lib-bitflags","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
0d36b8e52b96582e
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"std\"]","declared_features":"[\"arbitrary\", \"bytemuck\", \"example_generated\", \"serde\", \"serde_core\", \"std\"]","target":7691312148208718491,"profile":15657897354478470176,"path":9884266009885954648,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/bitflags-82666fbcd3cbd175/dep-lib-bitflags","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
699758da1f6ea19c
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"std\"]","declared_features":"[\"arbitrary\", \"bytemuck\", \"example_generated\", \"serde\", \"serde_core\", \"std\"]","target":7691312148208718491,"profile":2241668132362809309,"path":9884266009885954648,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/bitflags-bc78e388b3eb4775/dep-lib-bitflags","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
9374167e11cc2383
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[\"arbitrary\", \"bytemuck\", \"example_generated\", \"serde\", \"serde_core\", \"std\"]","target":7691312148208718491,"profile":2241668132362809309,"path":9884266009885954648,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/bitflags-e5a07805fd49de02/dep-lib-bitflags","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
BIN
target/debug/.fingerprint/bytes-6105032c4d786163/dep-lib-bytes
Normal file
BIN
target/debug/.fingerprint/bytes-6105032c4d786163/dep-lib-bytes
Normal file
Binary file not shown.
@@ -0,0 +1 @@
|
||||
This file has an mtime of when this was started.
|
||||
@@ -0,0 +1 @@
|
||||
7bdafb5c6c75155f
|
||||
@@ -0,0 +1 @@
|
||||
{"rustc":15597765236515928571,"features":"[\"default\", \"std\"]","declared_features":"[\"default\", \"extra-platforms\", \"serde\", \"std\"]","target":11402411492164584411,"profile":5585765287293540646,"path":3940745598343284350,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/bytes-6105032c4d786163/dep-lib-bytes","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
|
||||
BIN  target/debug/.fingerprint/bytes-b0b6678ba8cc5218/dep-lib-bytes  Normal file
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
2fa4f36cef0a4a7d
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[\"default\", \"std\"]","declared_features":"[\"default\", \"extra-platforms\", \"serde\", \"std\"]","target":11402411492164584411,"profile":13827760451848848284,"path":3940745598343284350,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/bytes-b0b6678ba8cc5218/dep-lib-bytes","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
|
||||
BIN  target/debug/.fingerprint/cc-54c52371c85641fb/dep-lib-cc  Normal file
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
1  target/debug/.fingerprint/cc-54c52371c85641fb/lib-cc  Normal file
@@ -0,0 +1 @@
4f342b6e8db2a031
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[\"jobserver\", \"parallel\"]","target":11042037588551934598,"profile":4333757155065362140,"path":3313463817543785650,"deps":[[8410525223747752176,"shlex",false,10044392746864901954],[9159843920629750842,"find_msvc_tools",false,17497982296998823564]],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/cc-54c52371c85641fb/dep-lib-cc","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
|
||||
BIN  target/debug/.fingerprint/cfg-if-94696fbcf4a2b312/dep-lib-cfg_if  Normal file
Binary file not shown.
@@ -0,0 +1 @@
This file has an mtime of when this was started.
@@ -0,0 +1 @@
6ba7a3e2379034b8
@@ -0,0 +1 @@
{"rustc":15597765236515928571,"features":"[]","declared_features":"[\"core\", \"rustc-dep-of-std\"]","target":13840298032947503755,"profile":15657897354478470176,"path":12940094294345282402,"deps":[],"local":[{"CheckDepInfo":{"dep_info":"debug/.fingerprint/cfg-if-94696fbcf4a2b312/dep-lib-cfg_if","checksum":false}}],"rustflags":[],"config":2069994364910194474,"compile_kind":0}
Some files were not shown because too many files have changed in this diff