Docs: Comprehensive inline rustdoc and architectural summary PDF

This commit is contained in:
anthonyrawlins
2026-03-03 18:05:53 +11:00
parent cc03616918
commit 0f28e4b669
2932 changed files with 14552 additions and 74 deletions

1
.serena/.gitignore vendored Normal file

@@ -0,0 +1 @@
/cache

126
.serena/project.yml Normal file

@@ -0,0 +1,126 @@
# the name by which the project can be referenced within Serena
project_name: "CHORUS"
# list of languages for which language servers are started; choose from:
# al bash clojure cpp csharp
# csharp_omnisharp dart elixir elm erlang
# fortran fsharp go groovy haskell
# java julia kotlin lua markdown
# matlab nix pascal perl php
# php_phpactor powershell python python_jedi r
# rego ruby ruby_solargraph rust scala
# swift terraform toml typescript typescript_vts
# vue yaml zig
# (This list may be outdated. For the current list, see values of Language enum here:
# https://github.com/oraios/serena/blob/main/src/solidlsp/ls_config.py
# For some languages, there are alternative language servers, e.g. csharp_omnisharp, ruby_solargraph.)
# Note:
# - For C, use cpp
# - For JavaScript, use typescript
# - For Free Pascal/Lazarus, use pascal
# Special requirements:
# Some languages require additional setup/installations.
# See here for details: https://oraios.github.io/serena/01-about/020_programming-languages.html#language-servers
# When using multiple languages, the first language server that supports a given file will be used for that file.
# The first language is the default language and the respective language server will be used as a fallback.
# Note that when using the JetBrains backend, language servers are not used and this list is correspondingly ignored.
languages:
- rust
# the encoding used by text files in the project
# For a list of possible encodings, see https://docs.python.org/3.11/library/codecs.html#standard-encodings
encoding: "utf-8"
# The language backend to use for this project.
# If not set, the global setting from serena_config.yml is used.
# Valid values: LSP, JetBrains
# Note: the backend is fixed at startup. If a project with a different backend
# is activated post-init, an error will be returned.
language_backend:
# whether to use project's .gitignore files to ignore files
ignore_all_files_in_gitignore: true
# list of additional paths to ignore in this project.
# Same syntax as gitignore, so you can use * and **.
# Note: global ignored_paths from serena_config.yml are also applied additively.
ignored_paths: []
# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false
# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions,
# execute `uv run scripts/print_tool_overview.py`.
#
# * `activate_project`: Activates a project by name.
# * `check_onboarding_performed`: Checks whether project onboarding was already performed.
# * `create_text_file`: Creates/overwrites a file in the project directory.
# * `delete_lines`: Deletes a range of lines within a file.
# * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
# * `execute_shell_command`: Executes a shell command.
# * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
# * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
# * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
# * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
# * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
# * `initial_instructions`: Gets the initial instructions for the current project.
# Should only be used in settings where the system prompt cannot be set,
# e.g. in clients you have no control over, like Claude Desktop.
# * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
# * `insert_at_line`: Inserts content at a given line in a file.
# * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
# * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
# * `list_memories`: Lists memories in Serena's project-specific memory store.
# * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
# * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
# * `read_file`: Reads a file within the project directory.
# * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
# * `remove_project`: Removes a project from the Serena configuration.
# * `replace_lines`: Replaces a range of lines within a file with new content.
# * `replace_symbol_body`: Replaces the full definition of a symbol.
# * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
# * `search_for_pattern`: Performs a search for a pattern in the project.
# * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
# * `switch_modes`: Activates modes by providing a list of their names
# * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
# * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
# * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
# * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []
# list of tools to include that would otherwise be disabled (particularly optional tools that are disabled by default)
included_optional_tools: []
# fixed set of tools to use as the base tool set (if non-empty), replacing Serena's default set of tools.
# This cannot be combined with non-empty excluded_tools or included_optional_tools.
fixed_tools: []
# list of mode names that are always to be included in the set of active modes
# The full set of modes to be activated is base_modes + default_modes.
# If the setting is undefined, the base_modes from the global configuration (serena_config.yml) apply.
# Otherwise, this setting overrides the global configuration.
# Set this to [] to disable base modes for this project.
# Set this to a list of mode names to always include the respective modes for this project.
base_modes:
# list of mode names that are to be activated by default.
# The full set of modes to be activated is base_modes + default_modes.
# If the setting is undefined, the default_modes from the global configuration (serena_config.yml) apply.
# Otherwise, this overrides the setting from the global configuration (serena_config.yml).
# This setting can, in turn, be overridden by CLI parameters (--mode).
default_modes:
# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""
# time budget (seconds) per tool call for the retrieval of additional symbol information
# such as docstrings or parameter information.
# This overrides the corresponding setting in the global configuration; see the documentation there.
# If null or missing, use the setting from the global configuration.
symbol_info_budget:
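For illustration, the gitignore-style globbing described above for `ignored_paths` would look like the following; the paths are hypothetical examples, and the committed file leaves the list empty:

```yaml
# Example only - the committed project.yml uses `ignored_paths: []`.
ignored_paths:
  - "target/**"   # everything under target/
  - "**/*.log"    # log files anywhere in the tree
```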

1404
Cargo.lock generated Normal file

File diff suppressed because it is too large.

13
Cargo.toml Normal file

@@ -0,0 +1,13 @@
[workspace]
members = [
"UCXL",
"chrs-mail",
"chrs-graph",
"chrs-agent",
"chrs-sync",
"chrs-slurp",
"chrs-shhh",
"chrs-bubble",
"chrs-poc"
]
resolver = "2"
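Extending the workspace above is a one-line change to `members`; the crate name `chrs-new` below is a hypothetical example, not part of this commit:

```toml
[workspace]
members = [
    "UCXL",
    "chrs-mail",
    # ...existing members...
    "chrs-new",  # hypothetical crate, created with `cargo new chrs-new --lib`
]
resolver = "2"
```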

1
UCXL/.serena/.gitignore vendored Normal file

@@ -0,0 +1 @@
/cache

126
UCXL/.serena/project.yml Normal file

@@ -0,0 +1,126 @@
# the name by which the project can be referenced within Serena
project_name: "UCXL"
# list of languages for which language servers are started; choose from:
# al bash clojure cpp csharp
# csharp_omnisharp dart elixir elm erlang
# fortran fsharp go groovy haskell
# java julia kotlin lua markdown
# matlab nix pascal perl php
# php_phpactor powershell python python_jedi r
# rego ruby ruby_solargraph rust scala
# swift terraform toml typescript typescript_vts
# vue yaml zig
# (This list may be outdated. For the current list, see values of Language enum here:
# https://github.com/oraios/serena/blob/main/src/solidlsp/ls_config.py
# For some languages, there are alternative language servers, e.g. csharp_omnisharp, ruby_solargraph.)
# Note:
# - For C, use cpp
# - For JavaScript, use typescript
# - For Free Pascal/Lazarus, use pascal
# Special requirements:
# Some languages require additional setup/installations.
# See here for details: https://oraios.github.io/serena/01-about/020_programming-languages.html#language-servers
# When using multiple languages, the first language server that supports a given file will be used for that file.
# The first language is the default language and the respective language server will be used as a fallback.
# Note that when using the JetBrains backend, language servers are not used and this list is correspondingly ignored.
languages:
- rust
# the encoding used by text files in the project
# For a list of possible encodings, see https://docs.python.org/3.11/library/codecs.html#standard-encodings
encoding: "utf-8"
# The language backend to use for this project.
# If not set, the global setting from serena_config.yml is used.
# Valid values: LSP, JetBrains
# Note: the backend is fixed at startup. If a project with a different backend
# is activated post-init, an error will be returned.
language_backend:
# whether to use project's .gitignore files to ignore files
ignore_all_files_in_gitignore: true
# list of additional paths to ignore in this project.
# Same syntax as gitignore, so you can use * and **.
# Note: global ignored_paths from serena_config.yml are also applied additively.
ignored_paths: []
# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false
# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions,
# execute `uv run scripts/print_tool_overview.py`.
#
# * `activate_project`: Activates a project by name.
# * `check_onboarding_performed`: Checks whether project onboarding was already performed.
# * `create_text_file`: Creates/overwrites a file in the project directory.
# * `delete_lines`: Deletes a range of lines within a file.
# * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
# * `execute_shell_command`: Executes a shell command.
# * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
# * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
# * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
# * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
# * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
# * `initial_instructions`: Gets the initial instructions for the current project.
# Should only be used in settings where the system prompt cannot be set,
# e.g. in clients you have no control over, like Claude Desktop.
# * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
# * `insert_at_line`: Inserts content at a given line in a file.
# * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
# * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
# * `list_memories`: Lists memories in Serena's project-specific memory store.
# * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
# * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
# * `read_file`: Reads a file within the project directory.
# * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
# * `remove_project`: Removes a project from the Serena configuration.
# * `replace_lines`: Replaces a range of lines within a file with new content.
# * `replace_symbol_body`: Replaces the full definition of a symbol.
# * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
# * `search_for_pattern`: Performs a search for a pattern in the project.
# * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
# * `switch_modes`: Activates modes by providing a list of their names
# * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
# * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
# * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
# * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []
# list of tools to include that would otherwise be disabled (particularly optional tools that are disabled by default)
included_optional_tools: []
# fixed set of tools to use as the base tool set (if non-empty), replacing Serena's default set of tools.
# This cannot be combined with non-empty excluded_tools or included_optional_tools.
fixed_tools: []
# list of mode names that are always to be included in the set of active modes
# The full set of modes to be activated is base_modes + default_modes.
# If the setting is undefined, the base_modes from the global configuration (serena_config.yml) apply.
# Otherwise, this setting overrides the global configuration.
# Set this to [] to disable base modes for this project.
# Set this to a list of mode names to always include the respective modes for this project.
base_modes:
# list of mode names that are to be activated by default.
# The full set of modes to be activated is base_modes + default_modes.
# If the setting is undefined, the default_modes from the global configuration (serena_config.yml) apply.
# Otherwise, this overrides the setting from the global configuration (serena_config.yml).
# This setting can, in turn, be overridden by CLI parameters (--mode).
default_modes:
# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""
# time budget (seconds) per tool call for the retrieval of additional symbol information
# such as docstrings or parameter information.
# This overrides the corresponding setting in the global configuration; see the documentation there.
# If null or missing, use the setting from the global configuration.
symbol_info_budget:


@@ -1,4 +1,9 @@
-// UCXL Core Data Structures
+//! UCXL core data structures and utilities.
+//!
+//! This module provides the fundamental types used throughout the CHORUS
+//! system for addressing resources (UCXL addresses), handling temporal axes,
+//! and storing lightweight metadata. The implementation is deliberately
+//! lightweight and in-memory to keep the core fast and dependency-free.
 pub mod watcher;
@@ -7,18 +12,41 @@ use std::fmt;
 use std::str::FromStr;
 /// Represents the temporal axis in a UCXL address.
+///
+/// **What**: An enumeration of the three supported temporal positions
+/// (present, past, and future), each represented by a symbolic string in the
+/// address format.
+///
+/// **How**: The enum derives `Debug`, `PartialEq`, `Eq`, `Clone`, and `Copy`
+/// for ergonomic usage. Conversions to and from strings are provided via the
+/// `FromStr` and `fmt::Display` implementations.
+///
+/// **Why**: Temporal axes enable UCXL to refer to data at different points in
+/// time (e.g. versioned resources). The simple three-state model matches the
+/// CHORUS architectural decision to keep addressing lightweight while still
+/// supporting historical and speculative queries.
 #[derive(Debug, PartialEq, Eq, Clone, Copy)]
 pub enum TemporalAxis {
-    /// Present ("#")
+    /// Present ("#") - the current version of a resource.
     Present,
-    /// Past ("~~")
+    /// Past ("~~") - a historical snapshot of a resource.
     Past,
-    /// Future ("^^")
+    /// Future ("^^") - a speculative or planned version of a resource.
     Future,
 }
 impl FromStr for TemporalAxis {
     type Err = String;
+    /// Parses a temporal axis token from its textual representation.
+    ///
+    /// **What**: Accepts "#", "~~" or "^^" and maps them to the corresponding
+    /// enum variant.
+    ///
+    /// **How**: A simple `match` statement is used; an error string is
+    /// returned for any unrecognised token.
+    ///
+    /// **Why**: Centralises validation of temporal markers used throughout the
+    /// address parsing logic, ensuring consistency.
     fn from_str(s: &str) -> Result<Self, Self::Err> {
         match s {
             "#" => Ok(TemporalAxis::Present),
@@ -30,6 +58,15 @@ impl FromStr for TemporalAxis {
 }
 impl fmt::Display for TemporalAxis {
+    /// Formats the temporal axis back to its string token.
+    ///
+    /// **What**: Returns "#", "~~" or "^^" depending on the variant.
+    ///
+    /// **How**: Matches on `self` and writes the corresponding string to the
+    /// formatter.
+    ///
+    /// **Why**: Required for serialising a `UCXLAddress` back to its textual
+    /// representation.
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         let s = match self {
             TemporalAxis::Present => "#",
@@ -41,18 +78,48 @@ impl fmt::Display for TemporalAxis {
 }
 /// Represents a parsed UCXL address.
+///
+/// **What**: Holds the components extracted from a UCXL URI: the agent, an
+/// optional role, the project identifier, task name, temporal axis, and the
+/// resource path within the project.
+///
+/// **How**: The struct is constructed via the `FromStr` implementation, which
+/// validates the scheme, splits the address into its constituent parts and
+/// populates the fields. The `Display` implementation performs the inverse
+/// operation.
+///
+/// **Why**: UCXL addresses are the primary routing mechanism inside CHORUS.
+/// Encapsulating them in a dedicated type provides type-safety and makes it
+/// easy to work with address components in the rest of the codebase.
 #[derive(Debug, PartialEq, Eq, Clone)]
 pub struct UCXLAddress {
+    /// The identifier of the agent (e.g., a user or system component).
     pub agent: String,
+    /// Optional role associated with the agent (e.g., "admin").
     pub role: Option<String>,
+    /// The project namespace this address belongs to.
     pub project: String,
+    /// The specific task within the project.
     pub task: String,
+    /// Temporal axis indicating present, past or future.
     pub temporal: TemporalAxis,
+    /// Path to the resource relative to the project root.
     pub path: String,
 }
 impl FromStr for UCXLAddress {
     type Err = String;
+    /// Parses a full UCXL address string into a `UCXLAddress` value.
+    ///
+    /// **What**: Validates the scheme (`ucxl://`), extracts the agent, optional
+    /// role, project, task, temporal axis and the trailing resource path.
+    ///
+    /// **How**: The implementation performs a series of `split` operations,
+    /// handling optional components and converting the temporal token via
+    /// `TemporalAxis::from_str`. Errors are surfaced as descriptive strings.
+    ///
+    /// **Why**: Centralises address parsing logic, ensuring that all parts of
+    /// the system interpret UCXL URIs consistently.
     fn from_str(address: &str) -> Result<Self, Self::Err> {
         // Ensure the scheme is correct
         let scheme_split: Vec<&str> = address.splitn(2, "://").collect();
@@ -102,6 +169,16 @@ impl FromStr for UCXLAddress {
 }
 impl fmt::Display for UCXLAddress {
+    /// Serialises the address back to its canonical string form.
+    ///
+    /// **What**: Constructs a `ucxl://` URI including optional role and path.
+    ///
+    /// **How**: Conditionally inserts the role component, then formats the
+    /// project, task, temporal token and optional path using standard `write!`
+    /// semantics.
+    ///
+    /// **Why**: Needed when emitting addresses (e.g., logging events or
+    /// generating links) so that external tools can consume them.
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         let role_part = if let Some(r) = &self.role {
             format!(":{}", r)
@@ -125,21 +202,51 @@ impl fmt::Display for UCXLAddress {
     }
 }
-/// Simple in-memory metadata store mapping a file path to a metadata string.
+/// Trait defining a simple key-value metadata store.
+///
+/// **What**: Provides read, write and removal operations for associating a
+/// string of metadata with a filesystem path.
+///
+/// **How**: The trait abstracts over concrete storage implementations
+/// (currently an in-memory `HashMap`), allowing callers to depend on the trait
+/// rather than a specific type.
+///
+/// **Why**: CHORUS needs a lightweight way to attach auxiliary information to
+/// files without persisting to a database; the trait makes it easy to swap in a
+/// persistent backend later if required.
 pub trait MetadataStore {
+    /// Retrieves the metadata for `path` if it exists.
     fn get(&self, path: &str) -> Option<&String>;
+    /// Stores `metadata` for `path`, overwriting any existing value.
     fn set(&mut self, path: &str, metadata: String);
+    /// Removes the metadata entry for `path`, returning the old value if any.
     fn remove(&mut self, path: &str) -> Option<String> {
         None
     }
 }
-/// A concrete in-memory implementation using a HashMap.
+/// In-memory implementation of `MetadataStore` backed by a `HashMap`.
+///
+/// **What**: Holds metadata in a hash map where the key is the file path.
+///
+/// **How**: Provides a `new` constructor and implements the `MetadataStore`
+/// trait methods by delegating to the underlying map.
+///
+/// **Why**: Offers a zero-cost, dependency-free store suitable for unit tests
+/// and simple scenarios. It can be replaced with a persistent store without
+/// changing callers.
 pub struct InMemoryMetadataStore {
     map: HashMap<String, String>,
 }
 impl InMemoryMetadataStore {
+    /// Creates a fresh, empty `InMemoryMetadataStore`.
+    ///
+    /// **What**: Returns a struct with an empty internal map.
+    ///
+    /// **How**: Calls `HashMap::new`.
+    ///
+    /// **Why**: Convenience constructor for callers.
     pub fn new() -> Self {
         InMemoryMetadataStore {
             map: HashMap::new(),

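The conversions and the in-memory store documented in the hunks above can be exercised with a small standalone sketch. This is a re-implementation for illustration, not the `ucxl` crate itself; the error text and `main` harness are assumptions:

```rust
use std::collections::HashMap;
use std::fmt;
use std::str::FromStr;

// Mirror of the TemporalAxis tokens shown in the diff: "#", "~~", "^^".
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum TemporalAxis { Present, Past, Future }

impl FromStr for TemporalAxis {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "#" => Ok(TemporalAxis::Present),
            "~~" => Ok(TemporalAxis::Past),
            "^^" => Ok(TemporalAxis::Future),
            other => Err(format!("unknown temporal token: {other}")), // message is illustrative
        }
    }
}

impl fmt::Display for TemporalAxis {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let s = match self {
            TemporalAxis::Present => "#",
            TemporalAxis::Past => "~~",
            TemporalAxis::Future => "^^",
        };
        write!(f, "{s}")
    }
}

// Minimal in-memory metadata store in the spirit of the MetadataStore trait.
struct InMemoryMetadataStore { map: HashMap<String, String> }

impl InMemoryMetadataStore {
    fn new() -> Self { Self { map: HashMap::new() } }
    fn set(&mut self, path: &str, metadata: String) { self.map.insert(path.to_string(), metadata); }
    fn get(&self, path: &str) -> Option<&String> { self.map.get(path) }
    fn remove(&mut self, path: &str) -> Option<String> { self.map.remove(path) }
}

fn main() {
    // Round-trip: token -> enum -> token.
    let axis: TemporalAxis = "~~".parse().unwrap();
    assert_eq!(axis, TemporalAxis::Past);
    assert_eq!(axis.to_string(), "~~");
    assert!("??".parse::<TemporalAxis>().is_err());

    // Metadata store: set, get, remove.
    let mut store = InMemoryMetadataStore::new();
    store.set("src/lib.rs", "reviewed".to_string());
    assert_eq!(store.get("src/lib.rs").map(String::as_str), Some("reviewed"));
    assert_eq!(store.remove("src/lib.rs"), Some("reviewed".to_string()));
    println!("temporal round-trip and metadata store checks passed");
}
```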

@@ -1,20 +1,63 @@
+//! UCXL filesystem watcher.
+//!
+//! This module provides a thin wrapper around the `notify` crate to watch a
+//! directory (or "project") for filesystem events. When a change is detected,
+//! the watcher attempts to construct a corresponding `UCXLAddress` using a
+//! simple heuristic and logs the event. This is primarily used by CHORUS for
+//! reactive workflows such as automatically updating metadata when files are
+//! added, modified or removed.
 use notify::{Config, RecommendedWatcher, RecursiveMode, Watcher};
 use std::path::Path;
 use std::sync::mpsc::channel;
-use crate::{UCXLAddress, TemporalAxis};
+use crate::UCXLAddress;
 use std::str::FromStr;
+/// Represents a watcher rooted at a specific base path.
+///
+/// **What**: Holds the absolute path that the watcher monitors.
+///
+/// **How**: The path is stored as a `PathBuf`. The watcher is created via the
+/// `new` constructor, which accepts any type that can be referenced as a `Path`.
+/// The underlying `notify::RecommendedWatcher` is configured with the default
+/// `Config` and set to watch recursively.
+///
+/// **Why**: Encapsulating the watcher logic in a dedicated struct makes it easy
+/// to instantiate multiple independent watchers and keeps the public API tidy.
 pub struct UCXLWatcher {
     base_path: std::path::PathBuf,
 }
 impl UCXLWatcher {
+    /// Creates a new `UCXLWatcher` for the given path.
+    ///
+    /// **What**: Accepts any generic `AsRef<Path>` so callers can pass a `&str`,
+    /// `Path`, or `PathBuf`.
+    ///
+    /// **How**: The provided path is converted to a `PathBuf` and stored.
+    ///
+    /// **Why**: Convenience constructor used throughout CHORUS when a watcher is
+    /// needed for a project directory.
     pub fn new<P: AsRef<Path>>(path: P) -> Self {
         Self {
             base_path: path.as_ref().to_path_buf(),
         }
     }
+    /// Starts the watch loop, blocking indefinitely while handling events.
+    ///
+    /// **What**: Sets up a channel, creates a `RecommendedWatcher`, and begins
+    /// watching the `base_path` recursively. For each incoming event, it
+    /// attempts to map the filesystem path to a UCXL address and prints a log.
+    ///
+    /// **How**: Uses the `notify` crate's event API. The heuristic address
+    /// format is `ucxl://system:watcher@local:filesystem/#/<relative_path>`.
+    /// It parses this string with `UCXLAddress::from_str` and logs the result.
+    /// Errors from parsing are ignored (they simply aren't printed).
+    ///
+    /// **Why**: Provides a simple, observable bridge between raw filesystem
+    /// changes and the UCXL addressing scheme, allowing other components to
+    /// react to changes using a uniform identifier.
     pub fn watch_loop(&self) -> Result<(), Box<dyn std::error::Error>> {
         let (tx, rx) = channel();
@@ -29,8 +72,11 @@ impl UCXLWatcher {
             for path in event.paths {
                 if let Some(rel_path) = path.strip_prefix(&self.base_path).ok() {
                     let rel_str = rel_path.to_string_lossy();
-                    // Attempt a heuristic address mapping: ucxl://system:watcher@local:filesystem/#/path
-                    let addr_str = format!("ucxl://system:watcher@local:filesystem/#/{}", rel_str);
+                    // Heuristic address mapping: ucxl://system:watcher@local:filesystem/#/path
+                    let addr_str = format!(
+                        "ucxl://system:watcher@local:filesystem/#/{}",
+                        rel_str
+                    );
                     if let Ok(addr) = UCXLAddress::from_str(&addr_str) {
                         println!("[UCXL EVENT] {:?} -> {}", event.kind, addr);
                     }

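The heuristic that `watch_loop` documents (strip the base path, then interpolate the remainder into `ucxl://system:watcher@local:filesystem/#/<relative_path>`) can be sketched as a pure function. `heuristic_address` is a hypothetical helper name, not part of the crate:

```rust
use std::path::Path;

// Hypothetical helper mirroring the format! call inside watch_loop.
fn heuristic_address(base: &Path, changed: &Path) -> Option<String> {
    // Paths outside the watched root produce no address (strip_prefix fails).
    let rel = changed.strip_prefix(base).ok()?;
    Some(format!(
        "ucxl://system:watcher@local:filesystem/#/{}",
        rel.to_string_lossy()
    ))
}

fn main() {
    let base = Path::new("/projects/demo");
    let addr = heuristic_address(base, Path::new("/projects/demo/src/main.rs"));
    assert_eq!(
        addr.as_deref(),
        Some("ucxl://system:watcher@local:filesystem/#/src/main.rs")
    );
    // A path outside the watched root yields no address.
    assert_eq!(heuristic_address(base, Path::new("/elsewhere/x")), None);
    println!("{}", addr.unwrap());
}
```

In the real watcher the resulting string is then validated through `UCXLAddress::from_str`, so malformed paths are silently dropped rather than logged.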

@@ -1,3 +1,11 @@
+/// chrs-agent crate implements the core CHORUS agent runtime.
+///
+/// An agent runs a message loop that receives tasks from a `Mailbox`, logs them to a
+/// `DoltGraph` (the persistent state graph), and marks them as read. The design
+/// follows the CHORUS architectural pattern where agents are autonomous workers
+/// that interact through the `chrs_mail` messaging layer and maintain a provable
+/// execution history in the graph.
 use chrs_graph::DoltGraph;
 use chrs_mail::{Mailbox, Message};
 use chrono::Utc;
@@ -6,13 +14,36 @@ use std::time::Duration;
 use tokio::time::sleep;
 use uuid::Uuid;
-struct CHORUSAgent {
+/// Represents a running CHORUS agent.
+///
+/// # Fields
+/// * `id` - Logical identifier for the agent (e.g., "agent-001").
+/// * `mailbox` - The `Mailbox` used for inter-agent communication.
+/// * `graph` - Persistence layer (`DoltGraph`) where task logs are stored.
+///
+/// # Rationale
+/// Agents are isolated units of work. By keeping a dedicated mailbox and a graph
+/// per agent we guarantee that each agent can be started, stopped, and reasoned
+/// about independently while still contributing to the global CHORUS state.
+pub struct CHORUSAgent {
     id: String,
     mailbox: Mailbox,
     graph: DoltGraph,
 }
 impl CHORUSAgent {
+    /// Initializes a new `CHORUSAgent`.
+    ///
+    /// This creates the filesystem layout under `base_path`, opens or creates the
+    /// SQLite mailbox, and initialises a `DoltGraph` for state persistence.
+    /// It also ensures that a `task_log` table exists for recording incoming
+    /// messages.
+    ///
+    /// # Parameters
+    /// * `id` - Identifier for the agent instance.
+    /// * `base_path` - Directory where the agent stores its data.
+    ///
+    /// Returns an instance ready to run its event loop.
     async fn init(id: &str, base_path: &Path) -> Result<Self, Box<dyn std::error::Error>> {
         let mail_path = base_path.join("mail.sqlite");
         let graph_path = base_path.join("state_graph");
@@ -32,6 +63,12 @@ impl CHORUSAgent {
         })
     }
+    /// Main event loop of the agent.
+    ///
+    /// It repeatedly polls the mailbox for pending messages addressed to this
+    /// agent, logs each message into the `task_log` table, commits the graph, and
+    /// acknowledges the message. The loop sleeps for a configurable interval to
+    /// avoid busy-waiting.
     async fn run_loop(&self) {
         println!("Agent {} starting run loop...", self.id);
         loop {
@@ -60,6 +97,11 @@ impl CHORUSAgent {
     }
 }
+/// Entry point for the CHORUS agent binary.
+///
+/// It creates a data directory under `/home/Tony/rust/projects/reset/CHORUS/data`
+/// (note the capitalised `Tony` matches the original path), initialises the
+/// `CHORUSAgent`, and starts its run loop.
 #[tokio::main]
 async fn main() -> Result<(), Box<dyn std::error::Error>> {
     let agent_id = "agent-001";

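The poll / log / acknowledge cycle described in the agent docs above can be sketched with in-memory stand-ins. `StubMailbox` and `StubGraph` are illustrative stubs only; the real agent uses `chrs_mail::Mailbox`, `chrs_graph::DoltGraph`, and `tokio::time::sleep`:

```rust
use std::collections::VecDeque;
use std::time::Duration;

// Stand-ins for the SQLite-backed mailbox and the Dolt-backed state graph.
struct StubMailbox { pending: VecDeque<String> }
struct StubGraph { task_log: Vec<String> }

// One poll iteration: drain pending messages, log each to the graph,
// and count it as acknowledged (marked read).
fn run_once(mailbox: &mut StubMailbox, graph: &mut StubGraph) -> usize {
    let mut handled = 0;
    while let Some(msg) = mailbox.pending.pop_front() {
        graph.task_log.push(msg); // "log to task_log and commit"
        handled += 1;
    }
    handled
}

fn main() {
    let mut mailbox = StubMailbox {
        pending: VecDeque::from([
            "task: index repo".to_string(),
            "task: summarize".to_string(),
        ]),
    };
    let mut graph = StubGraph { task_log: Vec::new() };
    let poll_interval = Duration::from_millis(10); // real agent sleeps between polls
    loop {
        let n = run_once(&mut mailbox, &mut graph);
        if n == 0 { break; } // the real agent keeps looping forever
        std::thread::sleep(poll_interval);
    }
    assert_eq!(graph.task_log.len(), 2);
    println!("logged {} tasks", graph.task_log.len());
}
```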

@@ -1,18 +1,63 @@
//! # chrs-bubble
//!
//! A provenance-tracking crate that records nodes and edges in a directed acyclic
//! graph (DAG) and persists them using a Dolt-backed graph implementation.
//! The crate is deliberately small: it only pulls in `petgraph` for the in-memory
//! DAG, `serde` for serialization, `uuid` for unique identifiers, and `thiserror`
//! for ergonomic error handling. It is used by higher-level components that need
//! to capture the provenance of generated artifacts (e.g. files, messages, or
//! results) and later query that history.
//!
//! The public API is organised around three concepts:
//! * **ProvenanceEdge**: The type of relationship between two nodes.
//! * **BubbleError**: Errors that can occur when interacting with the underlying
//!   Dolt graph or when a node cannot be found.
//! * **ProvenanceGraph**: The façade that holds an in-memory DAG and a
//!   `DoltGraph` persistence layer, exposing methods to record nodes and links.
//!
//! Each item is documented with a *WHAT*, *HOW* and *WHY* section so that users can
//! quickly understand its purpose, its implementation details and the design
//! rationale.
use chrs_graph::{DoltGraph, GraphError};
use petgraph::graph::{DiGraph, NodeIndex};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use thiserror::Error;
use ucxl::UCXLAddress;
use uuid::Uuid;
/// Represents the kind of relationship between two provenance nodes.
///
/// * **WHAT**: An enumeration of supported edge types. Currently we support:
///   - `DerivedFrom`: Indicates that the target was derived from the source.
///   - `Cites`: A citation relationship.
///   - `InfluencedBy`: Denotes influence without direct derivation.
/// * **HOW**: Used as the edge payload in the `petgraph::DiGraph`. The enum derives
///   `Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq` so it
///   can be serialised when persisting the graph.
/// * **WHY**: Encoding edge semantics as a dedicated enum makes provenance
///   queries expressive and type-safe, while keeping the on-disk representation
///   simple (a stringified variant).
#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq)]
pub enum ProvenanceEdge {
/// The target node was *derived* from the source node.
DerivedFrom,
/// The target node *cites* the source node.
Cites,
/// The target node was *influenced* by the source node.
InfluencedBy,
}
/// Errors that can arise when working with a `ProvenanceGraph`.
///
/// * **WHAT**: Enumerates possible failure modes:
///   - Graph-level errors (`GraphError`).
///   - Serde JSON errors (`serde_json::Error`).
///   - A lookup failure when a node identifier cannot be resolved.
/// * **HOW**: Implements `std::error::Error` via the `thiserror::Error` derive
///   macro, forwarding underlying error sources with `#[from]`.
/// * **WHY**: A single error type simplifies error propagation for callers and
///   retains the original context for debugging.
#[derive(Debug, Error)]
pub enum BubbleError {
#[error("Graph error: {0}")]
@@ -23,6 +68,22 @@ pub enum BubbleError {
NodeNotFound(Uuid),
}
/// Core structure that maintains an in-memory DAG of provenance nodes and a
/// persistent `DoltGraph` backend.
///
/// * **WHAT**: Holds:
///   - `persistence`: The Dolt-based storage implementation.
///   - `dag`: A `petgraph::DiGraph` where node payloads are UUIDs and edges are
///     `ProvenanceEdge`s.
///   - `node_map`: A fast lookup map from node UUID to the corresponding
///     `petgraph::NodeIndex`.
/// * **HOW**: Provides methods to create nodes (`record_node`) and edges
///   (`record_link`). These methods insert into the in-memory graph and then
///   persist the data in Dolt tables using simple `INSERT` statements followed by
///   a `commit`.
/// * **WHY**: Separating the transient in-memory representation from durable
///   storage gives fast runtime queries while guaranteeing that the provenance
///   graph can survive process restarts and be inspected via Dolt tools.
pub struct ProvenanceGraph {
persistence: DoltGraph,
dag: DiGraph<Uuid, ProvenanceEdge>,
@@ -30,6 +91,13 @@ pub struct ProvenanceGraph {
}
impl ProvenanceGraph {
/// Creates a new `ProvenanceGraph` backed by a pre-initialised `DoltGraph`.
///
/// * **WHAT**: Returns a fresh instance with empty in-memory structures.
/// * **HOW**: Stores the supplied `persistence` and constructs a new `DiGraph`
///   and empty `HashMap`.
/// * **WHY**: Allows callers to decide where the Dolt repository lives (e.g.
///   a temporary directory for tests or a permanent location for production).
pub fn new(persistence: DoltGraph) -> Self {
Self {
persistence,
@@ -38,33 +106,73 @@ impl ProvenanceGraph {
}
}
/// Records a provenance node with a unique `Uuid` and an associated address.
///
/// * **WHAT**: Persists the node both in-memory (`dag` + `node_map`) and in a
///   Dolt table called `provenance_nodes`.
/// * **HOW**: If the node does not already exist, it is added to the DAG and a
///   row is inserted via `persistence.insert_node`. A commit is performed with a
///   descriptive message.
/// * **WHY**: Storing the address (typically a UCXL address) allows later
///   resolution of where the artifact originated.
pub fn record_node(&mut self, id: Uuid, address: &str) -> Result<(), BubbleError> {
if !self.node_map.contains_key(&id) {
let idx = self.dag.add_node(id);
self.node_map.insert(id, idx);
// Ensure the backing table exists; ignore errors if it already does.
self.persistence
    .create_table(
        "provenance_nodes",
        "id VARCHAR(255) PRIMARY KEY, address TEXT",
    )
    .ok();
let data = serde_json::json!({
    "id": id.to_string(),
    "address": address,
});
self.persistence.insert_node("provenance_nodes", data)?;
self.persistence
    .commit(&format!("Record provenance node: {}", id))?;
}
Ok(())
}
/// Records a directed edge between two existing nodes.
///
/// * **WHAT**: Adds an edge of type `ProvenanceEdge` to the DAG and stores a
///   corresponding row in the `provenance_links` Dolt table.
/// * **HOW**: Retrieves the `NodeIndex` for each UUID (erroring with
///   `BubbleError::NodeNotFound` if missing), adds the edge to `dag`, then
///   inserts a row containing a new link UUID, source/target IDs, and the edge
///   type as a string.
/// * **WHY**: Persisting links allows the full provenance graph to be queried
///   outside the process, while the in-memory representation keeps runtime
///   operations cheap.
pub fn record_link(
    &mut self,
    source: Uuid,
    target: Uuid,
    edge: ProvenanceEdge,
) -> Result<(), BubbleError> {
let source_idx = *self
    .node_map
    .get(&source)
    .ok_or(BubbleError::NodeNotFound(source))?;
let target_idx = *self
    .node_map
    .get(&target)
    .ok_or(BubbleError::NodeNotFound(target))?;
self.dag.add_edge(source_idx, target_idx, edge);
// Ensure the links table exists.
self.persistence
    .create_table(
        "provenance_links",
        "id VARCHAR(255) PRIMARY KEY, source_id TEXT, target_id TEXT, edge_type TEXT",
    )
    .ok();
let link_id = Uuid::new_v4();
@@ -72,12 +180,11 @@ impl ProvenanceGraph {
"id": link_id.to_string(),
"source_id": source.to_string(),
"target_id": target.to_string(),
"edge_type": format!("{:?}", edge),
});
self.persistence.insert_node("provenance_links", data)?;
self.persistence
    .commit(&format!("Record provenance link: {} -> {}", source, target))?;
Ok(())
}
}
@@ -96,9 +203,15 @@ mod tests {
let id1 = Uuid::new_v4();
let id2 = Uuid::new_v4();
graph
    .record_node(id1, "ucxl://agent:1@proj:task/#/file1.txt")
    .unwrap();
graph
    .record_node(id2, "ucxl://agent:1@proj:task/#/file2.txt")
    .unwrap();
graph
    .record_link(id1, id2, ProvenanceEdge::DerivedFrom)
    .unwrap();
}
}
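The pattern that `record_node` and `record_link` implement, an id-keyed lookup map guarding edge insertion, can be illustrated without petgraph or Dolt. A minimal std-only sketch; the `MiniProvenance`/`Edge` names and the `u64` ids are invented for the example:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[allow(dead_code)]
enum Edge {
    DerivedFrom,
    Cites,
    InfluencedBy,
}

/// In-memory half of the pattern: addresses keyed by node id stand in for
/// `node_map` plus the Dolt row; the edge list stands in for the petgraph DAG.
#[derive(Default)]
struct MiniProvenance {
    addresses: HashMap<u64, String>,
    edges: Vec<(u64, u64, Edge)>,
}

impl MiniProvenance {
    /// Idempotent insert, mirroring the `contains_key` guard in `record_node`.
    fn record_node(&mut self, id: u64, address: &str) {
        self.addresses.entry(id).or_insert_with(|| address.to_string());
    }

    /// Fails fast when either endpoint is unknown, like `NodeNotFound`.
    fn record_link(&mut self, source: u64, target: u64, edge: Edge) -> Result<(), String> {
        for id in [source, target] {
            if !self.addresses.contains_key(&id) {
                return Err(format!("node not found: {id}"));
            }
        }
        self.edges.push((source, target, edge));
        Ok(())
    }
}

fn main() {
    let mut g = MiniProvenance::default();
    g.record_node(1, "ucxl://agent:1@proj:task/#/file1.txt");
    g.record_node(2, "ucxl://agent:1@proj:task/#/file2.txt");
    g.record_link(2, 1, Edge::DerivedFrom).unwrap();
    assert!(g.record_link(2, 99, Edge::Cites).is_err());
    assert_eq!(g.edges.len(), 1);
}
```

The same two-step shape (guarded in-memory insert, then persist) is what the real crate layers a Dolt commit on top of.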


@@ -1,26 +1,53 @@
//! chrs-graph library implementation using Dolt for graph persistence.
use chrono::Utc;
use serde_json::Value;
use std::{path::Path, process::Command};
use thiserror::Error;
use uuid::Uuid;
/// Enumeration of possible errors that can arise while interacting with the `DoltGraph`.
///
/// Each variant wraps an underlying error source, making it easier for callers to
/// understand the failure context and decide on remedial actions.
#[derive(Error, Debug)]
pub enum GraphError {
/// Propagates I/O errors from the standard library (e.g., filesystem access).
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
/// Represents a failure when executing a Dolt command.
#[error("Command failed: {0}")]
CommandFailed(String),
/// Propagates JSON (de)serialization errors from `serde_json`.
#[error("Serde JSON error: {0}")]
SerdeJson(#[from] serde_json::Error),
/// A generic catch-all for errors that don't fit the other categories.
#[error("Other error: {0}")]
Other(String),
}
/// Wrapper around a Dolt repository that stores graph data.
///
/// The `DoltGraph` type encapsulates a path to a Dolt repo and provides high-level
/// operations such as initializing the repo, committing changes, creating tables, and
/// inserting nodes expressed as JSON objects.
///
/// # Architectural Rationale
/// Dolt offers a Git-like version-controlled SQL database, which aligns well with CHORUS's
/// need for an immutable, queryable history of graph mutations. By wrapping Dolt commands in
/// this struct we isolate the rest of the codebase from the command-line interface, making the
/// graph layer portable and easier to test.
pub struct DoltGraph {
/// Filesystem path to the root of the Dolt repository.
pub repo_path: std::path::PathBuf,
}
impl DoltGraph {
/// Initialise (or open) a Dolt repository at the given `path`.
///
/// If the directory does not already contain a `.dolt` subdirectory, the function runs
/// `dolt init` to create a new repository. Errors from the underlying command are wrapped in
/// `GraphError::CommandFailed`.
pub fn init(path: &Path) -> Result<Self, GraphError> {
if !path.join(".dolt").exists() {
let status = Command::new("dolt")
@@ -39,6 +66,11 @@ impl DoltGraph {
})
}
/// Execute a Dolt command with the specified arguments.
///
/// This helper centralises command execution and error handling. It runs `dolt` with the
/// provided argument slice, captures stdout/stderr, and returns `GraphError::CommandFailed`
/// when the command exits with a non-zero status.
fn run_cmd(&self, args: &[&str]) -> Result<(), GraphError> {
let output = Command::new("dolt")
    .args(args)
@@ -51,16 +83,25 @@ impl DoltGraph {
Ok(())
}
/// Stage all changes and commit them with the provided `message`.
///
/// The method first runs `dolt add -A` to stage modifications, then `dolt commit -m`.
/// Any failure in these steps propagates as a `GraphError`.
pub fn commit(&self, message: &str) -> Result<(), GraphError> {
self.run_cmd(&["add", "-A"])?;
self.run_cmd(&["commit", "-m", message])?;
Ok(())
}
/// Create a SQL table within the Dolt repository.
///
/// `schema` should be a comma-separated column definition list (e.g., `"id INT PRIMARY KEY, name TEXT"`).
/// If the table already exists, the function treats it as a no-op and returns `Ok(())`.
pub fn create_table(&self, table_name: &str, schema: &str) -> Result<(), GraphError> {
let query = format!("CREATE TABLE {} ({})", table_name, schema);
if let Err(e) = self.run_cmd(&["sql", "-q", &query]) {
if e.to_string().contains("already exists") {
// Table is already present; not an error for our use case.
return Ok(());
}
return Err(e);
@@ -69,6 +110,11 @@ impl DoltGraph {
Ok(())
}
/// Insert a node represented by a JSON object into the specified `table`.
///
/// The JSON `data` must be an object where keys correspond to column names. Supported value
/// types are strings, numbers, booleans, and null. Complex JSON structures are rejected because
/// they cannot be directly mapped to SQL scalar columns.
pub fn insert_node(&self, table: &str, data: Value) -> Result<(), GraphError> {
let obj = data
    .as_object()
@@ -111,7 +157,11 @@ mod tests {
#[test]
fn test_init_create_table_and_commit() {
let dir = TempDir::new().unwrap();
// Initialise a Dolt repository in a temporary directory.
let graph = DoltGraph::init(dir.path()).expect("init failed");
// Create a simple `nodes` table.
graph
    .create_table("nodes", "id INT PRIMARY KEY, name TEXT")
    .expect("create table failed");
}
}
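The mapping `insert_node` performs, from an object of scalar values to a single `INSERT` statement, can be sketched with std types only. This is an assumption-laden stand-in: the real method takes a `serde_json::Value` and shells out to `dolt sql -q`, and the `SqlVal` enum here is invented for the example:

```rust
/// Invented scalar value type; the real code inspects `serde_json::Value`.
#[allow(dead_code)]
enum SqlVal {
    Text(String),
    Int(i64),
    Bool(bool),
    Null,
}

/// Render one scalar as a SQL literal (naive quote-doubling for strings).
fn render(v: &SqlVal) -> String {
    match v {
        SqlVal::Text(s) => format!("'{}'", s.replace('\'', "''")),
        SqlVal::Int(n) => n.to_string(),
        SqlVal::Bool(b) => if *b { "TRUE".to_string() } else { "FALSE".to_string() },
        SqlVal::Null => "NULL".to_string(),
    }
}

/// Build the INSERT statement from (column, value) pairs, as `insert_node`
/// does from the JSON object's keys and values.
fn insert_stmt(table: &str, row: &[(&str, SqlVal)]) -> String {
    let cols: Vec<&str> = row.iter().map(|(c, _)| *c).collect();
    let vals: Vec<String> = row.iter().map(|(_, v)| render(v)).collect();
    format!(
        "INSERT INTO {} ({}) VALUES ({})",
        table,
        cols.join(", "),
        vals.join(", ")
    )
}

fn main() {
    let stmt = insert_stmt(
        "provenance_nodes",
        &[
            ("id", SqlVal::Text("abc".into())),
            ("address", SqlVal::Text("ucxl://x".into())),
        ],
    );
    assert_eq!(
        stmt,
        "INSERT INTO provenance_nodes (id, address) VALUES ('abc', 'ucxl://x')"
    );
}
```

This is also why the doc comment above rejects nested JSON: arrays and objects have no scalar SQL literal to render.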


@@ -1,4 +1,4 @@
//! chrs-mail library implementation
use std::path::Path;
use chrono::{DateTime, Utc};
@@ -9,42 +9,84 @@ use thiserror::Error;
use uuid::Uuid;
/// Represents a mail message stored in the mailbox.
///
/// # Definition
/// `Message` is a data structure that models a single mail exchange between two peers.
/// It contains a unique identifier, sender and recipient identifiers, a topic string, a JSON payload,
/// and timestamps for when the message was sent and, optionally, when it was read.
///
/// # Implementation Details
/// - `id` is a **Uuid** generated by the caller to guarantee global uniqueness.
/// - `payload` uses `serde_json::Value` so arbitrary JSON can be attached to the message.
/// - `sent_at` and `read_at` are stored as `chrono::DateTime<Utc>` to provide timezone-agnostic timestamps.
///
/// # Rationale
/// This struct provides a lightweight, serialisable representation of a message that can be persisted
/// in the SQLite-backed mailbox (see `Mailbox`). Keeping the payload as JSON allows different subsystems
/// of the CHORUS platform to embed domain-specific data without requiring a rigid schema.
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Message {
/// Globally unique identifier for the message.
pub id: Uuid,
/// Identifier of the sending peer.
pub from_peer: String,
/// Identifier of the receiving peer.
pub to_peer: String,
/// Topic or channel of the message; used for routing/filters.
pub topic: String,
/// Arbitrary JSON payload containing the message body.
pub payload: JsonValue,
/// Timestamp (UTC) when the message was sent.
pub sent_at: DateTime<Utc>,
/// Optional timestamp (UTC) when the recipient read the message.
pub read_at: Option<DateTime<Utc>>,
}
/// Errors that can occur while using the `Mailbox`.
///
/// Each variant wraps an underlying error type from a dependency, allowing callers to
/// react appropriately (e.g., retry on SQLite errors, surface serialization problems, etc.).
#[derive(Debug, Error)]
pub enum MailError {
/// Propagates any `rusqlite::Error` encountered while interacting with the SQLite DB.
#[error("SQLite error: {0}")]
Sqlite(#[from] rusqlite::Error),
/// Propagates JSON (de)serialization errors from `serde_json`.
#[error("JSON serialization error: {0}")]
Json(#[from] serde_json::Error),
/// Propagates UUID parsing errors.
#[error("UUID parsing error: {0}")]
Uuid(#[from] uuid::Error),
/// Propagates chrono parsing errors, primarily when deserialising timestamps from strings.
#[error("Chrono parsing error: {0}")]
ChronoParse(#[from] chrono::ParseError),
}
/// Wrapper around a SQLite connection providing mailbox-style functionality.
///
/// The `Mailbox` abstracts a SQLite database that stores `Message` records. It offers a minimal
/// API for opening/creating the DB, sending messages, receiving pending messages for a peer, and
/// marking messages as read.
///
/// # Architectural Rationale
/// Using SQLite (via `rusqlite`) provides a zero-configuration, file-based persistence layer that is
/// portable across the various environments where CHORUS components may run. The wrapper isolates the
/// rest of the codebase from raw SQL handling, ensuring a single place for schema evolution and error
/// mapping.
pub struct Mailbox {
conn: Connection,
}
impl Mailbox {
/// Open (or create) a mailbox database at `path`.
///
/// The function creates the SQLite file if it does not exist, enables WAL mode for better
/// concurrency, and ensures the `messages` table is present.
pub fn open<P: AsRef<Path>>(path: P) -> Result<Self, MailError> {
let conn = Connection::open(path)?;
// Enable WAL mode for improved concurrency and durability.
conn.pragma_update(None, "journal_mode", &"WAL")?;
// Create the `messages` table if it does not already exist.
conn.execute(
"CREATE TABLE IF NOT EXISTS messages (
id TEXT PRIMARY KEY,
@@ -61,6 +103,9 @@ impl Mailbox {
}
/// Store a new message in the mailbox.
///
/// The `payload` field is serialised to a JSON string before insertion. The `read_at` column is
/// initialised to `NULL` because the message has not yet been consumed.
pub fn send(&self, msg: &Message) -> Result<(), MailError> {
let payload_str = serde_json::to_string(&msg.payload)?;
self.conn.execute(
@@ -79,6 +124,9 @@ impl Mailbox {
}
/// Retrieve all unread messages addressed to `peer_id`.
///
/// The query filters on `to_peer` and `read_at IS NULL`. Returned rows are transformed back into
/// `Message` structs, parsing the UUID, JSON payload, and RFC3339 timestamps.
pub fn receive_pending(&self, peer_id: &str) -> Result<Vec<Message>, MailError> {
let mut stmt = self.conn.prepare(
"SELECT id, from_peer, to_peer, topic, payload, sent_at, read_at
@@ -97,16 +145,13 @@ impl Mailbox {
// Parse Uuid
let id = Uuid::parse_str(&id_str)
    .map_err(|e| rusqlite::Error::FromSqlConversionFailure(0, rusqlite::types::Type::Text, Box::new(e)))?;
// Parse JSON payload
let payload: JsonValue = serde_json::from_str(&payload_str)
    .map_err(|e| rusqlite::Error::FromSqlConversionFailure(4, rusqlite::types::Type::Text, Box::new(e)))?;
// Parse timestamps
let sent_at = DateTime::parse_from_rfc3339(&sent_at_str)
    .map_err(|e| rusqlite::Error::FromSqlConversionFailure(5, rusqlite::types::Type::Text, Box::new(e)))?
    .with_timezone(&Utc);
let read_at = match read_at_opt {
    Some(s) => Some(
        DateTime::parse_from_rfc3339(&s)
@@ -135,6 +180,8 @@ impl Mailbox {
}
/// Mark a message as read by setting its `read_at` timestamp.
///
/// The current UTC time is stored in the `read_at` column for the row with the matching `id`.
pub fn mark_read(&self, msg_id: Uuid) -> Result<(), MailError> {
let now = Utc::now().to_rfc3339();
self.conn.execute(
@@ -177,11 +224,11 @@ mod tests {
let pending = mailbox.receive_pending("bob")?;
assert_eq!(pending.len(), 1);
assert_eq!(pending[0].id, msg.id);
mailbox.mark_read(msg.id)?;
let pending2 = mailbox.receive_pending("bob")?;
assert!(pending2.is_empty());
fs::remove_file(db_path).unwrap();
Ok(())
}
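The read-state semantics documented above (`to_peer` match plus `read_at IS NULL`) can be mirrored with an in-memory vector. The `MemMailbox`/`Msg` types here are illustrative, not part of `chrs_mail`, and `read: bool` plays the role of `read_at: Option<_>`:

```rust
/// Illustrative stand-in: the real `Message` carries UUIDs, JSON, and
/// chrono timestamps.
struct Msg {
    id: u32,
    to_peer: &'static str,
    read: bool,
}

struct MemMailbox {
    msgs: Vec<Msg>,
}

impl MemMailbox {
    /// Same filter as the SQL query: `to_peer = ?1 AND read_at IS NULL`.
    fn receive_pending(&self, peer: &str) -> Vec<u32> {
        self.msgs
            .iter()
            .filter(|m| m.to_peer == peer && !m.read)
            .map(|m| m.id)
            .collect()
    }

    /// Same effect as `UPDATE messages SET read_at = ?1 WHERE id = ?2`.
    fn mark_read(&mut self, id: u32) {
        if let Some(m) = self.msgs.iter_mut().find(|m| m.id == id) {
            m.read = true;
        }
    }
}

fn main() {
    let mut mb = MemMailbox {
        msgs: vec![Msg { id: 1, to_peer: "bob", read: false }],
    };
    assert_eq!(mb.receive_pending("bob"), vec![1]);
    mb.mark_read(1);
    assert!(mb.receive_pending("bob").is_empty());
}
```

The unit test above exercises exactly this sequence against the real SQLite-backed implementation.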


@@ -1,3 +1,18 @@
//! The chrs-poc crate provides an end-to-end proof-of-concept demonstration of the CHORUS
//! system. It wires together the core components:
//!
//! * `Mailbox`: message-passing layer (`chrs_mail`).
//! * `DoltGraph`: persistent state graph (`chrs_graph`).
//! * `ProvenanceGraph`: provenance tracking (`chrs_bubble`).
//! * `SecretSentinel`: secret scrubbing (`chrs_shhh`).
//! * `CurationEngine`: decision record curation (`chrs_slurp`).
//!
//! The flow mirrors a realistic task lifecycle: a client dispatches a task
//! message, an agent processes it, generates reasoning (with a deliberately
//! injected secret), the secret is scrubbed, a decision record is curated, and
//! provenance links are recorded. The final state is persisted in a Dolt
//! repository.
use chrs_bubble::{ProvenanceGraph, ProvenanceEdge};
use chrs_graph::DoltGraph;
use chrs_mail::{Mailbox, Message};
@@ -8,11 +23,25 @@ use std::fs;
use std::path::Path;
use uuid::Uuid;
/// Entry point for the proof-of-concept binary.
///
/// The function performs the following high-level steps, each documented inline:
/// 1. Sets up a temporary workspace.
/// 2. Initialises all required components.
/// 3. Simulates a client sending an audit task to an agent.
/// 4. Processes the task as the agent would, including secret scrubbing.
/// 5. Curates a `DecisionRecord` via the SLURP engine.
/// 6. Records provenance relationships in the BUBBLE graph.
/// 7. Prints a success banner and the path to the persisted Dolt state.
///
/// Errors from any component propagate via `?` and are reported as a boxed error.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("=== CHORUS End-to-End Proof of Concept ===");
// ---------------------------------------------------------------------
// 1. Setup paths
// ---------------------------------------------------------------------
let base_path = Path::new("/tmp/chrs_poc");
if base_path.exists() {
fs::remove_dir_all(base_path)?;
@@ -23,20 +52,25 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let graph_path = base_path.join("state_graph");
fs::create_dir_all(&graph_path)?;
// ---------------------------------------------------------------------
// 2. Initialise Components
// ---------------------------------------------------------------------
let mailbox = Mailbox::open(&mail_path)?;
let persistence = DoltGraph::init(&graph_path)?;
let mut provenance = ProvenanceGraph::new(persistence);
// A separate graph handle is needed for the SLURP engine because the
// provenance graph consumes the original `DoltGraph`. In production we would
// share via `Arc<Mutex<_>>`.
let slurp_persistence = DoltGraph::init(&graph_path)?;
let curator = CurationEngine::new(slurp_persistence);
let sentinel = SecretSentinel::new_default();
println!("[POC] Components initialized.");
// ---------------------------------------------------------------------
// 3. Dispatch Task (simulate client sending message to Agent-A)
// ---------------------------------------------------------------------
let task_id = Uuid::new_v4();
let task_msg = Message {
id: task_id,
@@ -50,19 +84,21 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
mailbox.send(&task_msg)?;
println!("[POC] Task dispatched to Agent-A: {}", task_id);
// ---------------------------------------------------------------------
// 4. Process Task (Agent-A logic)
// ---------------------------------------------------------------------
let pending = mailbox.receive_pending("agent-a")?;
for msg in pending {
println!("[POC] Agent-A received task: {}", msg.topic);
// Simulated reasoning that accidentally contains a secret.
let raw_reasoning = "Audit complete. Verified UCXL address parsing. My secret key is sk-1234567890abcdef1234567890abcdef1234567890abcdef";
// 5. SHHH: Scrub secrets from the reasoning output.
let clean_reasoning = sentinel.scrub_text(raw_reasoning);
println!("[POC] SHHH scrubbed reasoning: {}", clean_reasoning);
// 6. SLURP: Create and curate a DecisionRecord.
let dr = DecisionRecord {
id: Uuid::new_v4(),
author: "agent-a".into(),
@@ -72,7 +108,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
};
curator.curate_decision(dr.clone())?;
// 7. BUBBLE: Record provenance relationships.
provenance.record_node(task_id, "ucxl://client:user@poc:task/#/audit_request")?;
provenance.record_node(dr.id, "ucxl://agent-a:worker@poc:task/#/audit_result")?;
provenance.record_link(dr.id, task_id, ProvenanceEdge::DerivedFrom)?;
@@ -82,8 +118,10 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
mailbox.mark_read(msg.id)?;
}
// ---------------------------------------------------------------------
// 8. Final output
// ---------------------------------------------------------------------
println!("\n=== POC SUCCESSFUL ===");
println!("Final State is persisted in Dolt at: {:?}", graph_path);
Ok(())


@@ -1,23 +1,63 @@
//! # chrs-shhh
//!
//! This crate provides utilities for redacting sensitive information from text.
//! It defines a set of **redaction rules** that match secret patterns (like API keys)
//! and replace them with a placeholder. The crate is deliberately lightweight: it
//! only depends on `regex` and `lazy_static`, and can be embedded in any larger
//! application that needs to scrub logs or user-provided data before storage or
//! transmission.
use lazy_static::lazy_static;
use regex::Regex;
/// Represents a single rule used to redact a secret.
///
/// * **WHAT**: The name of the rule (e.g. "OpenAI API Key"), the compiled
///   regular-expression pattern that matches the secret, and the replacement string
///   that will be inserted.
/// * **HOW**: The `pattern` is a `Regex` that is applied to an input string. When a
///   match is found, the `replacement` is inserted using `replace_all`.
/// * **WHY**: Decoupling the rule definition from the redaction logic makes the
///   sanitizer extensible; new patterns can be added without changing the core
///   implementation.
pub struct RedactionRule {
/// Human-readable name for the rule.
pub name: String,
/// Compiled regular expression that matches the secret.
pub pattern: Regex,
/// Text that will replace the matched secret.
pub replacement: String,
}
/// The main entry point for secret detection and redaction.
///
/// * **WHAT** Holds a collection of `RedactionRule`s.
/// * **HOW** Provides methods to scrub a string (`scrub_text`) and to simply
/// check whether any secret is present (`contains_secrets`).
/// * **WHY** Centralising the rules in a struct enables reuse and makes testing
/// straightforward.
pub struct SecretSentinel { pub struct SecretSentinel {
rules: Vec<RedactionRule>, rules: Vec<RedactionRule>,
} }
lazy_static! { lazy_static! {
/// Matches OpenAI API keys of the form `sk-<48 alphanumeric chars>`.
static ref OPENAI_KEY: Regex = Regex::new(r"sk-[a-zA-Z0-9]{48}").unwrap(); static ref OPENAI_KEY: Regex = Regex::new(r"sk-[a-zA-Z0-9]{48}").unwrap();
/// Matches AWS access keys that start with `AKIA` followed by 16 uppercase letters or digits.
static ref AWS_KEY: Regex = Regex::new(r"AKIA[0-9A-Z]{16}").unwrap(); static ref AWS_KEY: Regex = Regex::new(r"AKIA[0-9A-Z]{16}").unwrap();
/// Generic secret pattern that captures common keywords like password, secret, key or token.
/// The capture group (`$1`) is retained so that the surrounding identifier is preserved.
static ref GENERIC_SECRET: Regex = Regex::new(r"(?i)(password|secret|key|token)\s*[:=]\s*[^\s]+").unwrap(); static ref GENERIC_SECRET: Regex = Regex::new(r"(?i)(password|secret|key|token)\s*[:=]\s*[^\s]+").unwrap();
} }
impl SecretSentinel { impl SecretSentinel {
/// Constructs a `SecretSentinel` prepopulated with a sensible default set of rules.
///
/// * **WHAT** Returns a sentinel containing three rules: OpenAI, AWS and a generic
/// secret matcher.
/// * **HOW** Instantiates `RedactionRule`s using the lazilyinitialised regexes
/// above and stores them in the `rules` vector.
/// * **WHY** Provides a readytouse configuration for typical development
/// environments while still allowing callers to create custom instances.
pub fn new_default() -> Self { pub fn new_default() -> Self {
let rules = vec![ let rules = vec![
RedactionRule { RedactionRule {
@@ -33,20 +73,36 @@ impl SecretSentinel {
RedactionRule { RedactionRule {
name: "Generic Secret".into(), name: "Generic Secret".into(),
pattern: GENERIC_SECRET.clone(), pattern: GENERIC_SECRET.clone(),
// $1 refers to the captured keyword (password, secret, …).
replacement: "$1: [REDACTED]".into(), replacement: "$1: [REDACTED]".into(),
}, },
]; ];
Self { rules } Self { rules }
} }
/// Redacts all secrets found in `input` according to the configured rules.
///
/// * **WHAT** Returns a new `String` where each match has been replaced.
/// * **HOW** Iterates over the rules and applies `replace_all` for each.
/// * **WHY** Performing the replacements sequentially ensures that overlapping
/// patterns are handled deterministically.
pub fn scrub_text(&self, input: &str) -> String { pub fn scrub_text(&self, input: &str) -> String {
let mut scrubbed = input.to_string(); let mut scrubbed = input.to_string();
for rule in &self.rules { for rule in &self.rules {
scrubbed = rule.pattern.replace_all(&scrubbed, &rule.replacement).to_string(); scrubbed = rule
.pattern
.replace_all(&scrubbed, &rule.replacement)
.to_string();
} }
scrubbed scrubbed
} }
/// Checks whether any of the configured rules match `input`.
///
/// * **WHAT** Returns `true` if at least one rule's pattern matches.
/// * **HOW** Uses `Iter::any` over `self.rules` with `is_match`.
/// * **WHY** A quick predicate useful for shortcircuiting logging or error
/// handling before performing the full redaction.
pub fn contains_secrets(&self, input: &str) -> bool { pub fn contains_secrets(&self, input: &str) -> bool {
self.rules.iter().any(|rule| rule.pattern.is_match(input)) self.rules.iter().any(|rule| rule.pattern.is_match(input))
} }
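The sequential rule loop above is the entire redaction algorithm: each rule is applied in order, and the output of one replacement feeds the next. A minimal, dependency-free sketch of the same design (the `regex` matchers are swapped for plain token-prefix checks so it compiles without external crates; `Rule`, `prefix`, and the example rule set are illustrative, not the crate's API):

```rust
/// One redaction rule: a matcher plus its replacement text. In the real
/// crate the matcher is a compiled `regex::Regex`; a token prefix stands
/// in here so the sketch needs no external dependencies.
struct Rule {
    prefix: &'static str,
    replacement: &'static str,
}

struct Sentinel {
    rules: Vec<Rule>,
}

impl Sentinel {
    fn new_default() -> Self {
        Self {
            rules: vec![
                Rule { prefix: "sk-", replacement: "[REDACTED]" },  // OpenAI-style keys
                Rule { prefix: "AKIA", replacement: "[REDACTED]" }, // AWS-style keys
            ],
        }
    }

    /// Apply every rule in order, mirroring the sequential `replace_all` loop.
    fn scrub_text(&self, input: &str) -> String {
        input
            .split_whitespace()
            .map(|tok| match self.rules.iter().find(|r| tok.starts_with(r.prefix)) {
                Some(r) => r.replacement,
                None => tok,
            })
            .collect::<Vec<_>>()
            .join(" ")
    }

    /// Quick predicate: does any rule match anywhere in the input?
    fn contains_secrets(&self, input: &str) -> bool {
        input
            .split_whitespace()
            .any(|tok| self.rules.iter().any(|r| tok.starts_with(r.prefix)))
    }
}

fn main() {
    let s = Sentinel::new_default();
    println!("{}", s.scrub_text("key sk-abc123 ok"));
    println!("{}", s.contains_secrets("no secrets here"));
}
```

The design point carries over directly: rules are data, the scrub loop is generic, so adding a new pattern never touches the core logic.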


@@ -1,19 +1,82 @@
//! # chrs-slurp
//!
//! **Intelligence Crate**: Provides the *curation* layer for the CHORUS system.
//!
//! The purpose of this crate is to take **Decision Records** generated by autonomous
//! agents, validate them, and persist them into the graph database. It isolates the
//! validation and storage concerns so that other components (e.g. provenance, security)
//! can work with a clean, audited data model.
//!
//! ## Architectural Rationale
//!
//! * **Separation of concerns**: Agents produce raw decisions; this crate is the
//!   single source of truth for how those decisions are stored.
//! * **Auditability**: By persisting to a Dolt-backed graph, each decision is versioned
//!   and can be replayed, satisfying CHORUS's requirement for reproducible
//!   reasoning.
//! * **Extensibility**: The `CurationEngine` can be extended with additional validation
//!   steps (e.g. policy checks) without touching the agents themselves.
//!
//! The crate depends on:
//! * `chrs-graph`: a thin wrapper around a Dolt-backed graph implementation.
//! * `ucxl`: for addressing external knowledge artefacts.
//! * `chrono`, `serde`, `uuid`: standard utilities for timestamps, (de)serialization
//!   and unique identifiers.
//!
//! ---
//!
//! # Public API
//!
//! The public surface consists of three items:
//!
//! * `DecisionRecord`: data structure representing a curated decision.
//! * `SlurpError`: enumeration of possible errors while curating.
//! * `CurationEngine`: the engine that validates and persists `DecisionRecord`s.
//!
//! Each item is documented inline below.
use chrono::{DateTime, Utc};
use chrs_graph::{DoltGraph, GraphError};
use serde::{Deserialize, Serialize};
use thiserror::Error;
use ucxl::UCXLAddress;
use uuid::Uuid;
/// A record representing a curated decision within the CHORUS system.
///
/// # What
///
/// This struct captures the essential metadata of a decision made by an
/// autonomous agent, including who authored it, the reasoning behind it, any
/// citations to external knowledge, and a timestamp.
///
/// # Why
///
/// Decision records are persisted in the graph database so that downstream
/// components (e.g., provenance analysis) can reason about the provenance and
/// justification of actions. Storing them as a dedicated table enables
/// reproducibility and auditability across the CHORUS architecture.
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct DecisionRecord {
    /// Unique identifier for the decision.
    pub id: Uuid,
    /// Identifier of the agent or human that authored the decision.
    pub author: String,
    /// Free-form textual reasoning explaining the decision.
    pub reasoning: String,
    /// Serialized UCXL addresses that serve as citations for the decision.
    /// Each entry should be a valid `UCXLAddress` string.
    pub citations: Vec<String>,
    /// The moment the decision was created.
    pub timestamp: DateTime<Utc>,
}
/// Errors that can arise while slurping (curating) a decision record.
///
/// * `Graph`: underlying graph database operation failed.
/// * `Serde`: (de)serialization of the decision data failed.
/// * `ValidationError`: a supplied citation could not be parsed as a
///   `UCXLAddress`.
#[derive(Debug, Error)]
pub enum SlurpError {
    #[error("Graph error: {0}")]
@@ -24,39 +87,70 @@ pub enum SlurpError {
    ValidationError(String),
}
/// Core engine that validates and persists `DecisionRecord`s into the
/// Dolt-backed graph.
///
/// # Why
///
/// Centralising curation logic ensures a single place for validation and
/// storage semantics, keeping the rest of the codebase agnostic of the graph
/// implementation details.
pub struct CurationEngine {
    graph: DoltGraph,
}
impl CurationEngine {
    /// Creates a new `CurationEngine` bound to the supplied `DoltGraph`.
    ///
    /// The engine holds a reference to the graph for the lifetime of the
    /// instance; callers are responsible for providing a correctly initialised
    /// graph.
    pub fn new(graph: DoltGraph) -> Self {
        Self { graph }
    }
    /// Validates the citations in `dr` and persists the decision into the
    /// graph.
    ///
    /// The method performs three steps:
    /// 1. **Citation validation**: each citation string is parsed into a
    ///    `UCXLAddress`. Invalid citations produce a `ValidationError`.
    /// 2. **Table assurance**: attempts to create the `curated_decisions`
    ///    table if it does not already exist. Errors are ignored because the
    ///    table may already be present.
    /// 3. **Insertion & commit**: the decision is serialised to JSON and
    ///    inserted as a node, then the graph transaction is committed.
    ///
    /// # Errors
    /// Propagates any `GraphError`, `serde_json::Error`, or custom
    /// validation failures.
    pub fn curate_decision(&self, dr: DecisionRecord) -> Result<(), SlurpError> {
        // 1. Validate citations.
        for citation in &dr.citations {
            use std::str::FromStr;
            UCXLAddress::from_str(citation).map_err(|e| {
                SlurpError::ValidationError(format!("Invalid citation {}: {}", citation, e))
            })?;
        }
        // 2. Ensure the table exists; ignore the error if it already does.
        let _ = self.graph.create_table(
            "curated_decisions",
            "id VARCHAR(255) PRIMARY KEY, author TEXT, reasoning TEXT, citations TEXT, curated_at TEXT",
        );
        // 3. Serialize the record and insert it.
        let data = serde_json::json!({
            "id": dr.id.to_string(),
            "author": dr.author,
            "reasoning": dr.reasoning,
            "citations": serde_json::to_string(&dr.citations)?,
            "curated_at": dr.timestamp.to_rfc3339(),
        });
        self.graph.insert_node("curated_decisions", data)?;
        self.graph
            .commit(&format!("Curation complete for DR: {}", dr.id))?;
        Ok(())
    }
}
@@ -66,6 +160,8 @@ mod tests {
    use super::*;
    use tempfile::TempDir;
    /// Integration test that exercises the full curation flow on a temporary
    /// Dolt graph.
    #[test]
    fn test_curation_flow() {
        let dir = TempDir::new().unwrap();
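The key property of `curate_decision` is its ordering: every citation is validated before anything touches storage, so an invalid record never reaches the graph. A self-contained sketch of that validate-then-persist flow, with `MockGraph` and `parse_citation` as illustrative stand-ins for `DoltGraph` and `UCXLAddress::from_str` (neither is the real API):

```rust
use std::cell::RefCell;

/// Stand-in for `DoltGraph`: records inserted rows in memory.
struct MockGraph {
    rows: RefCell<Vec<String>>,
}

impl MockGraph {
    fn insert_node(&self, table: &str, data: &str) {
        self.rows.borrow_mut().push(format!("{}:{}", table, data));
    }
}

/// Stand-in for `UCXLAddress::from_str`: accept only the `ucxl://` scheme.
fn parse_citation(c: &str) -> Result<(), String> {
    if c.starts_with("ucxl://") {
        Ok(())
    } else {
        Err(format!("Invalid citation {}", c))
    }
}

/// Mirrors the shape of `curate_decision`: validate first, then persist.
fn curate(graph: &MockGraph, id: &str, citations: &[&str]) -> Result<(), String> {
    // Step 1: any bad citation aborts before any write happens.
    for c in citations {
        parse_citation(c)?;
    }
    // Steps 2-3 (table assurance, serialize + insert + commit) collapsed.
    graph.insert_node("curated_decisions", id);
    Ok(())
}

fn main() {
    let g = MockGraph { rows: RefCell::new(Vec::new()) };
    assert!(curate(&g, "dr-1", &["ucxl://agent-a:worker@poc:task/#/x"]).is_ok());
    assert!(curate(&g, "dr-2", &["not-an-address"]).is_err());
    // Only the valid record reached storage.
    println!("stored rows: {}", g.rows.borrow().len());
}
```

This fail-before-write ordering is what makes the curation layer safe to call from untrusted agents: rejection is free, and the graph only ever sees validated records.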


@@ -1,28 +1,69 @@
use chrono::Utc;
/// chrs-sync crate provides synchronization utilities for the CHORUS system.
///
/// It uses a `Mailbox` for message passing between peers and a Dolt repository
/// to track state hashes. The primary abstraction is `SyncManager`, which can
/// broadcast the current repository hash to peers and handle incoming sync
/// signals.
use chrs_mail::{Mailbox, Message};
use std::path::PathBuf;
use std::process::Command;
use uuid::Uuid;
/// Manages synchronization of a Dolt repository across peers.
///
/// # Fields
/// * `mailbox`: The `Mailbox` instance used to send and receive messages.
/// * `repo_path`: Filesystem path to the local Dolt repository.
///
/// # Rationale
/// The CHORUS architecture relies on deterministic state replication. By
/// broadcasting the latest commit hash (`sync_signal`), each peer can decide
/// whether to pull updates. This struct encapsulates that behaviour, keeping the
/// rest of the system agnostic of the underlying VCS commands.
pub struct SyncManager {
    mailbox: Mailbox,
    repo_path: PathBuf,
}
impl SyncManager {
    /// Creates a new `SyncManager`.
    ///
    /// # Parameters
    /// * `mailbox`: An already-opened `Mailbox` for peer communication.
    /// * `repo_path`: Path to the Dolt repository that should be kept in sync.
    ///
    /// Returns a fully-initialised manager ready to broadcast or handle sync
    /// signals.
    pub fn new(mailbox: Mailbox, repo_path: PathBuf) -> Self {
        Self { mailbox, repo_path }
    }
    /// Broadcasts the current repository state to a remote peer.
    ///
    /// The method executes `dolt log -n 1 --format %H` to obtain the most recent
    /// commit hash, constructs a `Message` with topic `"sync_signal"` and sends it
    /// via the mailbox.
    ///
    /// * `from_peer`: Identifier of the sender.
    /// * `to_peer`: Identifier of the intended recipient.
    ///
    /// # Errors
    /// Returns any I/O or command-execution error wrapped in a boxed `dyn
    /// Error`.
    pub fn broadcast_state(
        &self,
        from_peer: &str,
        to_peer: &str,
    ) -> Result<(), Box<dyn std::error::Error>> {
        // Get the current Dolt hash.
        let output = Command::new("dolt")
            .args(&["log", "-n", "1", "--format", "%H"])
            .current_dir(&self.repo_path)
            .output()?;
        let current_hash = String::from_utf8_lossy(&output.stdout).trim().to_string();
        let msg = Message {
            id: Uuid::new_v4(),
            from_peer: from_peer.into(),
@@ -34,10 +75,23 @@ impl SyncManager {
        };
        self.mailbox.send(&msg)?;
        println!(
            "Broadcasted sync signal: {} from {}",
            current_hash, from_peer
        );
        Ok(())
    }
    /// Handles an incoming `sync_signal` message.
    ///
    /// If the message topic is not `"sync_signal"` the function returns `Ok(())`
    /// immediately. Otherwise it extracts the remote commit hash and attempts a
    /// `dolt pull origin` to bring the local repository up to date. In a real
    /// P2P deployment the remote URL would be derived from the sender, but the
    /// current implementation uses the default remote configuration.
    ///
    /// # Errors
    /// Propagates any command execution failures.
    pub fn handle_sync_signal(&self, msg: &Message) -> Result<(), Box<dyn std::error::Error>> {
        if msg.topic != "sync_signal" {
            return Ok(());

@@ -0,0 +1,15 @@
{
"keep": {
"days": true,
"amount": 14
},
"auditLog": "/home/tony/rust/projects/reset/CHORUS/logs/.fdbbbc7a24b00979ca9dea2720178eb798c332a1-audit.json",
"files": [
{
"date": 1772509985271,
"name": "/home/tony/rust/projects/reset/CHORUS/logs/mcp-puppeteer-2026-03-03.log",
"hash": "286a30d8143c8f454bd29cbdf024c0c200b33224f63473e4573b57a44bcd24ae"
}
],
"hashType": "sha256"
}


@@ -0,0 +1,21 @@
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 14:53:05.305"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 14:53:05.306"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 14:53:15.193"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:13:25.848"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:13:25.849"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:14:29.005"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:24:21.669"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:24:21.670"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:24:23.947"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:31:31.620"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:31:31.622"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:31:37.753"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:34:39.279"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:34:39.280"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:34:40.724"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:38:24.580"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:38:24.582"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:38:27.355"}
{"level":"info","message":"Starting MCP server","service":"mcp-puppeteer","timestamp":"2026-03-03 17:39:42.436"}
{"level":"info","message":"MCP server started successfully","service":"mcp-puppeteer","timestamp":"2026-03-03 17:39:42.437"}
{"level":"info","message":"Puppeteer MCP Server closing","service":"mcp-puppeteer","timestamp":"2026-03-03 17:39:53.406"}

target/.rustc_info.json (new file)

@@ -0,0 +1 @@
{"rustc_fingerprint":15256376128064635560,"outputs":{"7971740275564407648":{"success":true,"status":"","code":0,"stdout":"___\nlib___.rlib\nlib___.so\nlib___.so\nlib___.a\nlib___.so\n/home/tony/.rustup/toolchains/stable-x86_64-unknown-linux-gnu\noff\npacked\nunpacked\n___\ndebug_assertions\npanic=\"unwind\"\nproc_macro\ntarget_abi=\"\"\ntarget_arch=\"x86_64\"\ntarget_endian=\"little\"\ntarget_env=\"gnu\"\ntarget_family=\"unix\"\ntarget_feature=\"fxsr\"\ntarget_feature=\"sse\"\ntarget_feature=\"sse2\"\ntarget_has_atomic=\"16\"\ntarget_has_atomic=\"32\"\ntarget_has_atomic=\"64\"\ntarget_has_atomic=\"8\"\ntarget_has_atomic=\"ptr\"\ntarget_os=\"linux\"\ntarget_pointer_width=\"64\"\ntarget_vendor=\"unknown\"\nunix\n","stderr":""},"17747080675513052775":{"success":true,"status":"","code":0,"stdout":"rustc 1.87.0 (17067e9ac 2025-05-09)\nbinary: rustc\ncommit-hash: 17067e9ac6d7ecb70e50f92c1944e545188d2359\ncommit-date: 2025-05-09\nhost: x86_64-unknown-linux-gnu\nrelease: 1.87.0\nLLVM version: 20.1.1\n","stderr":""}},"successes":{}}


@@ -0,0 +1 @@
{"rustc_vv":"rustc 1.87.0 (17067e9ac 2025-05-09)\nbinary: rustc\ncommit-hash: 17067e9ac6d7ecb70e50f92c1944e545188d2359\ncommit-date: 2025-05-09\nhost: x86_64-unknown-linux-gnu\nrelease: 1.87.0\nLLVM version: 20.1.1\n"}

target/CACHEDIR.TAG (new file)

@@ -0,0 +1,3 @@
Signature: 8a477f597d28d172789f06886806bc55
# This file is a cache directory tag created by cargo.
# For information about cache directory tags see https://bford.info/cachedir/

target/debug/.cargo-lock (new, empty file)

The remainder of the diff adds Cargo build artifacts under target/debug/.fingerprint/: per-crate hash files, fingerprint JSON metadata, and "mtime" marker files for dependencies such as ahash, aho-corasick, allocator-api2, autocfg, bitflags, bytes, cc, and cfg-if.

Some files were not shown because too many files have changed in this diff.