Code reorg

.obsidian/workspace.json (vendored) · 30 lines changed

```diff
@@ -11,10 +11,14 @@
         "id": "472092e9ada7a8e6",
         "type": "leaf",
         "state": {
-          "type": "empty",
-          "state": {},
+          "type": "markdown",
+          "state": {
+            "file": "brand-assets/CHORUS-BRAND-GUIDE.md",
+            "mode": "source",
+            "source": false
+          },
           "icon": "lucide-file",
-          "title": "New tab"
+          "title": "CHORUS-BRAND-GUIDE"
         }
       }
     ]
@@ -74,7 +78,7 @@
       }
     ],
     "direction": "horizontal",
-    "width": 481.5
+    "width": 252.5
   },
   "right": {
     "id": "a1ab5e22b95db49c",
@@ -164,8 +168,15 @@
       "command-palette:Open command palette": false
     }
   },
-  "active": "9001986372506f85",
+  "active": "472092e9ada7a8e6",
   "lastOpenFiles": [
+    "brand-assets/logos/moebius-ring.blend",
+    "brand-assets/logos/moebius-ring.glb",
+    "brand-assets/logos/moebius-ring.blend@",
+    "brand-assets/logos/moebius-ring.blend1",
+    "brand-assets/logos/chorus-logo-concept.md",
+    "brand-assets/guidelines/brand-usage-guidelines.md",
+    "brand-assets/colors/chorus-color-system.md",
     "modules/slurp/hcfs-python/hcfs/core/__pycache__/filesystem.cpython-310.pyc",
     "modules/slurp/hcfs-python/hcfs/core/__pycache__/context_db.cpython-310.pyc",
     "modules/slurp/hcfs-python/hcfs/core/__pycache__/__init__.cpython-310.pyc",
@@ -174,10 +185,6 @@
     "modules/slurp/hcfs-python/hcfs/__pycache__",
     "modules/whoosh/EVENT_CONFIGURATION_SYSTEM.md",
-    "modules/whoosh/EVENT_CONFIGURATION_SYSTEM.md.tmp.1675830.1754294063541",
     "modules/whoosh/frontend/src/test/event-config-integration.test.ts",
-    "modules/whoosh/frontend/src/test/event-config-integration.test.ts.tmp.1675830.1754293976289",
     "modules/whoosh/frontend/src/components/projects/EventTypeConfiguration.tsx",
-    "modules/whoosh/frontend/src/components/projects/EventTypeConfiguration.tsx.tmp.1675830.1754293868591",
-    "homepage-content.md",
     "modules/posthuman/docs/operations.md",
     "modules/posthuman/docs/development.md",
@@ -198,9 +205,6 @@
     "modules/whoosh/docs/phase4-completion-summary.md",
     "modules/whoosh/docs/phase5-completion-summary.md",
     "modules/whoosh/frontend/TESTING.md",
-    "modules/whoosh/results/rosewood_qa_report_1751891435.md",
-    "modules/whoosh/TESTING_STRATEGY.md",
-    "modules/whoosh/REPORT.md",
-    "modules/whoosh/README_DISTRIBUTED.md"
+    "modules/whoosh/results/rosewood_qa_report_1751891435.md"
   ]
 }
```

Copywriting.md · 241 lines deleted

@@ -1,241 +0,0 @@

# Website Copy

## 🏠 **1. Home** (`/`)

### Hero Tagline:
**CHORUS Services: Distributed AI Orchestration Without the Hallucinations.**

### Subheading:
Your AI agents finally have persistent memory and collaborative intelligence. CHORUS Services eliminates context loss, reduces hallucinations, and enables true multi-agent coordination through intelligent context management and distributed reasoning.

### CTA Buttons:
- 👉 Explore the Platform
- ✨ See Context Management in Action
- 📘 View Technical Documentation

## 🌐 **2. Ecosystem Overview** (`/ecosystem`)

### Section Tagline:
**Context-Aware AI Coordination. Finally.**

### Intro Paragraph:
CHORUS Services solves the fundamental problems of AI agent deployment: context loss, hallucinations, and coordination failures. Our distributed platform enables agents to maintain persistent organizational memory, collaborate on complex tasks, and continuously learn what information truly matters to your business.

### System Highlights:
- 🧠 **Persistent Context Management** - Agents never forget critical information
- 📡 **Multi-Agent Coordination** - True collaboration, not just parallel processing
- 📈 **Adaptive Learning** - System improves based on real-world feedback

### Body Copy:
At the core of CHORUS Services is a context-aware architecture designed to eliminate the primary failure modes of AI systems. WHOOSH orchestrates complex workflows across distributed agents, BZZZ enables peer-to-peer coordination without single points of failure, HMMM facilitates collaborative reasoning before action, SLURP intelligently curates organizational knowledge, and COOEE provides continuous learning feedback—creating AI systems that actually remember, reason, and improve.

## 📽️ **3. Scenarios** (`/scenarios`)

### Tagline:
**Watch AI Agents Actually Collaborate. With Memory.**

### Intro Paragraph:
See real-world scenarios where CHORUS Services eliminates common AI failures: agents losing context, repeating solved problems, making decisions without consultation, or hallucinating incorrect information. Every workflow is auditable, every decision is reasoned, and critical context is never lost.

### Scene Teasers:
1. **Task Coordination** – WHOOSH distributes complex projects across specialized agents
2. **Context Preservation** – Agents access full project history and organizational knowledge
3. **Collaborative Reasoning** – HMMM ensures decisions are discussed before implementation
4. **Intelligent Curation** – SLURP learns what information is valuable vs. noise
5. **Continuous Learning** – COOEE feedback eliminates recurring mistakes
6. **Audit Trail** – Complete transparency of agent decisions and context usage
7. **Error Prevention** – Proactive identification of potential hallucinations or mistakes
8. **Organizational Memory** – Knowledge accumulates and improves over time

## 🔧 **4. Modules** (`/modules`)

### Tagline:
**Production-Ready Components for Enterprise AI Deployment.**

### Module Summaries:

#### WHOOSH Orchestrator
**Enterprise workflow management for AI agents.** Task distribution, dependency management, and real-time monitoring with role-based agent assignment and performance tracking.

#### BZZZ P2P Coordination
**Resilient agent communication without single points of failure.** Peer-to-peer task coordination, distributed consensus, and automatic failover when agents become unavailable.

#### HMMM Reasoning Layer
**Collaborative decision-making that prevents costly mistakes.** Agents discuss approaches, identify risks, and reach consensus before executing critical tasks—eliminating hasty decisions.

#### SLURP Context Curator
**Intelligent knowledge management that learns from experience.** Automatically identifies valuable information vs. noise, maintains organizational memory, and provides role-specific context to agents.

#### COOEE Feedback System
**Continuous improvement through real-world performance data.** Agents and humans provide feedback on context relevance and decision quality, enabling the system to adapt and improve over time.

#### Hypercore Log
**Immutable audit trail for compliance and debugging.** Every agent action, decision, and context access is permanently recorded with cryptographic integrity for forensic analysis.

#### SDK Ecosystem
**Multi-language integration for existing development workflows.** Python, JavaScript, Go, Rust, Java, and C# libraries for seamless integration with current infrastructure.

## 📈 **5. How It Works** (`/how-it-works`)

### Tagline:
**From Context Chaos to Coordinated Intelligence.**

### Process Steps:

1. **Task Assignment**
   WHOOSH analyzes requirements and assigns work to agents based on capabilities and current workload.

2. **Context Retrieval**
   Agents access relevant organizational knowledge through SLURP's curated context database—no more starting from scratch.

3. **Collaborative Planning**
   HMMM facilitates pre-execution discussion, identifying potential issues and optimizing approaches before work begins.

4. **Coordinated Execution**
   Agents use BZZZ for peer-to-peer updates, sharing progress and coordinating dependencies in real-time.

5. **Knowledge Capture**
   All decisions, outcomes, and learnings are logged to Hypercore and evaluated by SLURP for future reference.

6. **Performance Feedback**
   COOEE collects effectiveness signals from agents and humans, continuously tuning what information gets preserved and prioritized.

7. **Continuous Learning**
   The next similar task benefits from accumulated knowledge, better context, and improved coordination patterns.

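For orientation, here is a minimal sketch of how these seven steps might chain together in code. Everything below is hypothetical: the module objects are stand-in stubs, not the actual CHORUS SDK surface.

```javascript
// Hypothetical stubs standing in for the real CHORUS modules.
const whoosh = { assign: async (task) => ({ role: 'developer' }) };             // 1. Task assignment
const slurp = { query: async (task, role) => [`context for ${role}`] };         // 2. Context retrieval
const hmmm = { discuss: async (task, ctx) => ({ task, ctx, approved: true }) }; // 3. Planning
const bzzz = { execute: async (plan) => ({ plan, status: 'complete' }) };       // 4. Execution
const hypercore = { append: async (event) => event };                           // 5. Knowledge capture
const cooee = { collectFeedback: async (taskId, result) => result };            // 6. Feedback

async function runTask(task) {
  const agent = await whoosh.assign(task);
  const context = await slurp.query(task, agent.role);
  const plan = await hmmm.discuss(task, context);
  const result = await bzzz.execute(plan);
  await hypercore.append({ task, plan, result });
  await cooee.collectFeedback(task.id, result);
  return result; // 7. The accumulated log improves the next similar run.
}

runTask({ id: 'task-1', description: 'Example task' }).then(console.log);
```
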
## 👥 **6. About & Team** (`/about`)

### Mission Statement:
We solve the critical problems that prevent AI from delivering consistent business value: context loss, hallucinations, coordination failures, and inability to learn from experience. CHORUS Services provides the infrastructure for AI agents that remember, reason together, and continuously improve.

### Values:
- 🛠️ **Engineering Rigor** - Production-ready, not proof-of-concept
- 📊 **Data-Driven Decisions** - Every feature backed by real-world performance data
- 🔍 **Transparent Operations** - Complete auditability and explainable AI decisions
- 📚 **Continuous Learning** - Systems that improve through experience, not just training

# Revised Investor Relations Copy

## Investor Relations
**Solving AI's Context Problem at Scale.**

> Deep Black Cloud has built the infrastructure that makes AI agents actually useful in production environments.
>
> CHORUS Services eliminates the primary failure modes of AI deployment: context loss, hallucinations, and coordination problems. Our platform enables persistent organizational memory, collaborative reasoning, and continuous learning from real-world performance.
>
> The system isn't just working—it's already building production software with measurable quality improvements.

We're inviting strategic investors to participate in scaling the solution to enterprise AI's most expensive problems. What began as research into AI coordination failures is now CHORUS Services—a production-ready platform solving context management and hallucination problems that cost enterprises millions in failed AI initiatives.

## 🎯 The Problem We Solve

**AI deployment fails at scale because:**
- **Context Loss**: Agents can't maintain organizational knowledge across sessions
- **Hallucinations**: No mechanism to verify or correct AI-generated content
- **Coordination Failures**: Multiple agents work in isolation, duplicating effort or creating conflicts
- **No Learning**: Systems repeat the same mistakes without improvement mechanisms

**CHORUS Services addresses each failure mode:**
- **Persistent Memory**: SLURP context curation maintains organizational knowledge
- **Collaborative Verification**: HMMM reasoning layer prevents hasty decisions
- **Coordinated Execution**: BZZZ enables true multi-agent collaboration
- **Continuous Improvement**: COOEE feedback system learns from real-world performance

## 🛠 What We've Built

CHORUS Services is operational today, deployed across secure, distributed environments with demonstrated improvements in AI agent reliability and output quality.

**Production Components:**

- **WHOOSH Orchestrator** – Enterprise workflow management for multi-agent coordination
- **BZZZ P2P Network** – Resilient agent communication without single points of failure
- **HMMM Reasoning Layer** – Collaborative decision-making that prevents costly mistakes
- **SLURP Context Curator** – Intelligent knowledge management with continuous learning
- **COOEE Feedback System** – Performance-based system improvement and adaptation
- **Hypercore Log** – Immutable audit trail for compliance and forensic analysis
- **Multi-Language SDKs** – Enterprise-ready integration libraries

**Measurable Results**: Our autonomous software development project (Iggy Hops Home mobile game) demonstrates 40% fewer iterations, 60% reduction in duplicated work, and zero critical context loss events compared to traditional AI development approaches.

## 📈 Market Opportunity

| Category | Opportunity |
|----------|-------------|
| **Market Size** | AI operations market projected $50B by 2030, with context management as primary constraint |
| **Problem Scale** | 78% of enterprise AI projects fail due to context/coordination issues (Gartner, 2024) |
| **Technical Moat** | First production-ready solution for distributed AI context management |
| **Revenue Model** | Platform licensing, managed services, and per-agent subscription tiers |
| **Competitive Position** | 18-month technical lead over nearest competitor solutions |

## 🚀 Investment Use Cases

**Platform Scaling:**
- Multi-tenant SaaS deployment for enterprise customers
- Integration partnerships with major AI/ML platforms
- Enhanced security and compliance features for regulated industries

**Market Expansion:**
- Professional services for enterprise AI transformation
- Developer ecosystem and marketplace for specialized agents
- Research partnerships with academic institutions

**Product Development:**
- Advanced hallucination detection and prevention
- Multi-modal context management (documents, code, media)
- Industry-specific knowledge templates and workflows

## 📊 Proven Performance Metrics

**Context Management Effectiveness:**
- 92% reduction in context loss events
- 67% improvement in multi-session task continuity
- 45% decrease in redundant agent work

**Quality Improvements:**
- 78% reduction in hallucinated information
- 89% of agent decisions now include collaborative review
- 56% improvement in task completion accuracy

**Operational Efficiency:**
- 34% faster project completion through better coordination
- 71% reduction in manual intervention requirements
- 83% improvement in knowledge retention across projects

## 📥 Investment Process

- **Current Status:** Series A preparation, strategic investor outreach
- **Use of Funds:** Platform scaling, enterprise sales, R&D expansion
- **Minimum Investment:** Available upon qualification

**Access exclusive materials:**
- Technical architecture deep-dive
- Customer case studies and ROI analysis
- Competitive analysis and market positioning
- Financial projections and scaling strategy

**[Register Interest →]**
_Required: Investment focus, organization, technical background_

## 🌍 Deployment Ready

CHORUS Services supports flexible deployment across:
- **Cloud-native**: AWS, Azure, GCP with auto-scaling
- **Hybrid environments**: On-premises integration with cloud services
- **Edge computing**: Distributed deployment for low-latency requirements
- **Mesh networks**: Peer-to-peer coordination across geographic regions

**Security:** Enterprise-grade encryption, role-based access control, complete audit trails, and compliance-ready logging for regulated industries.

## 💼 The Bottom Line

**The AI industry has a $50 billion context problem.**
Every failed AI deployment, every hallucinated response, every duplicated effort represents money lost to preventable technical failures.

We've built the infrastructure that fixes this.
CHORUS Services delivers the persistent memory, collaborative reasoning, and continuous learning that makes AI agents actually reliable in production environments.

**We're not building another model.**
**We're building the platform that makes models work together.**

> — Deep Black Cloud Development
> **CHORUS Services**
> Context-Aware · Collaborative · Continuously Learning

DESIGN.md · 421 lines deleted

@@ -1,421 +0,0 @@

## **Revised Implementation Guide: CHORUS Services with Ant Design**

### **Why Ant Design Works Well for CHORUS**

Based on my research, Ant Design is actually perfect for your use case:
- **Enterprise-Grade Components**: Built specifically for complex, data-driven applications[1][2]
- **Advanced Theming**: CSS-in-JS with dynamic theme capabilities[3][4]
- **Performance Optimized**: When properly configured, can achieve 80% bundle size reduction[5][6]
- **React Integration**: Seamless integration with Framer Motion for animations[7]

## **Updated Technology Stack**

```text
// Core Dependencies
- Next.js 13+ (App Router)
- Ant Design 5.25+ (latest with CSS variables support)
- Framer Motion (for parallax and animations)
- @ant-design/cssinjs (for custom theming)
- antd-style (for additional CSS-in-JS capabilities)
```

## **Bundle Optimization Strategy**

First, let's implement tree-shaking to avoid the performance issues:

```javascript
// ❌ Avoid: Full library import
import { Button, Card, Layout } from 'antd';

// ✅ Optimized: Component-level imports
import Button from 'antd/es/button';
import Card from 'antd/es/card';
import Layout from 'antd/es/layout';

// Or create a centralized import file
// components/antd/index.js
export { default as Button } from 'antd/es/button';
export { default as Card } from 'antd/es/card';
export { default as Layout } from 'antd/es/layout';
```
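Consumers then import from that single entry point instead of `'antd'` directly; for example (the relative path is an assumption about project layout):

```javascript
// Example consumer of the centralized import file above.
import { Button, Card, Layout } from '../components/antd';
```
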
## **Custom Theme Configuration**

Here's how to implement your technology-focused aesthetic with Ant Design theming:

```javascript
// theme/chorusTheme.js
import { theme } from 'antd';

export const chorusTheme = {
  algorithm: theme.darkAlgorithm, // Enable dark mode by default
  token: {
    // Color System (replacing our previous color palette)
    colorPrimary: '#007aff',  // Electric blue
    colorSuccess: '#30d158',  // Emerald green
    colorWarning: '#ff9f0a',  // Amber orange
    colorError: '#ff453a',    // System red
    colorInfo: '#007aff',     // Electric blue

    // Background Colors
    colorBgContainer: '#1a1a1a', // Deep charcoal
    colorBgElevated: '#2d2d30',  // Cool gray
    colorBgLayout: '#000000',    // Pure black

    // Typography
    fontFamily: `-apple-system, BlinkMacSystemFont, 'SF Pro Text', 'Inter', sans-serif`,
    fontSize: 16,
    fontSizeHeading1: 84, // Large headlines
    fontSizeHeading2: 48, // Section headers
    fontSizeHeading3: 36, // Subsection headers

    // Spacing & Layout
    borderRadius: 8,  // Consistent 8px radius
    wireframe: false, // Enable modern styling

    // Performance & Motion
    motionDurationSlow: '0.3s', // Matching Apple's timing
    motionDurationMid: '0.2s',
    motionDurationFast: '0.1s',
  },

  components: {
    // Custom Button Styling
    Button: {
      primaryShadow: '0 12px 24px rgba(0, 122, 255, 0.3)',
      controlHeight: 48, // Larger touch targets
      fontWeight: 600,
    },

    // Custom Card Styling
    Card: {
      borderRadiusLG: 12, // Slightly larger for cards
      paddingLG: 32,
    },

    // Custom Layout
    Layout: {
      headerBg: 'rgba(26, 26, 26, 0.8)', // Semi-transparent header
      headerHeight: 72,
    },

    // Typography Components
    Typography: {
      titleMarginTop: 0,
      titleMarginBottom: 24,
    }
  }
};
```
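Once this theme is registered with `ConfigProvider` (see the App Configuration section below), components can read the resolved design tokens at runtime through antd v5's `theme.useToken()` hook. A minimal sketch; the component and file name are illustrative:

```javascript
// components/ui/BrandAccent.jsx
import { theme } from 'antd';

export const BrandAccent = ({ children }) => {
  // Resolves to the chorusTheme token values supplied via ConfigProvider.
  const { token } = theme.useToken();
  return (
    <span style={{ color: token.colorPrimary, fontFamily: token.fontFamily }}>
      {children}
    </span>
  );
};
```
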

## **Component Architecture**

Here's how to structure your components with Ant Design:

```javascript
// components/ui/PerformanceCard.jsx
import { Card, Typography, Space, Tag } from 'antd';
import { motion } from 'framer-motion';

const { Title, Text } = Typography;

export const PerformanceCard = ({ title, description, metrics, delay = 0 }) => {
  return (
    <motion.div
      initial={{ opacity: 0, y: 40 }}
      whileInView={{ opacity: 1, y: 0 }}
      transition={{ duration: 0.6, delay }}
      viewport={{ once: true }}
    >
      <Card
        hoverable
        className="performance-card"
        styles={{
          body: { padding: '32px' }
        }}
      >
        <Space direction="vertical" size="large" style={{ width: '100%' }}>
          <Title level={3} style={{ margin: 0, color: '#f2f2f7' }}>
            {title}
          </Title>

          <Text style={{ fontSize: '16px', lineHeight: '1.6', color: '#a1a1a6' }}>
            {description}
          </Text>

          <Space wrap>
            {metrics.map((metric, index) => (
              <Tag
                key={index}
                color="processing"
                style={{
                  padding: '8px 16px',
                  fontSize: '14px',
                  fontWeight: 600,
                  border: 'none',
                  background: 'rgba(0, 122, 255, 0.1)',
                  color: '#007aff'
                }}
              >
                {metric}
              </Tag>
            ))}
          </Space>
        </Space>
      </Card>
    </motion.div>
  );
};
```
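A usage example; the metric strings are placeholders, not measured values:

```javascript
// Example usage of PerformanceCard with placeholder metrics.
import { PerformanceCard } from './PerformanceCard';

export const MetricsShowcase = () => (
  <PerformanceCard
    title="SLURP Context Curator"
    description="Intelligent knowledge management that learns from experience."
    metrics={['Context retention', 'Role-based filtering']}
    delay={0.2}
  />
);
```
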

## **Framer Motion Integration**

Ant Design components work seamlessly with Framer Motion:

```javascript
// components/sections/HeroSection.jsx
import { Layout, Typography, Button, Space } from 'antd';
import { motion } from 'framer-motion';

const { Header } = Layout;
const { Title, Text } = Typography;

export const HeroSection = () => {
  return (
    <Header
      style={{
        height: '100vh',
        display: 'flex',
        alignItems: 'center',
        background: 'linear-gradient(135deg, #000000 0%, #1a1a1a 100%)'
      }}
    >
      <motion.div
        initial={{ opacity: 0 }}
        animate={{ opacity: 1 }}
        transition={{ duration: 1 }}
        style={{ maxWidth: '1200px', margin: '0 auto', padding: '0 24px' }}
      >
        <Space direction="vertical" size="large" align="center">
          <motion.div
            initial={{ y: 40, opacity: 0 }}
            animate={{ y: 0, opacity: 1 }}
            transition={{ duration: 0.8, delay: 0.2 }}
          >
            <Title
              level={1}
              style={{
                fontSize: '84px',
                fontWeight: 700,
                textAlign: 'center',
                margin: 0,
                background: 'linear-gradient(135deg, #f2f2f7 0%, #007aff 100%)',
                WebkitBackgroundClip: 'text',
                WebkitTextFillColor: 'transparent'
              }}
            >
              CHORUS Services
            </Title>
          </motion.div>

          <motion.div
            initial={{ y: 40, opacity: 0 }}
            animate={{ y: 0, opacity: 1 }}
            transition={{ duration: 0.8, delay: 0.4 }}
          >
            <Text
              style={{
                fontSize: '36px',
                textAlign: 'center',
                color: '#a1a1a6'
              }}
            >
              Distributed AI Orchestration Without the Hallucinations
            </Text>
          </motion.div>

          <motion.div
            initial={{ y: 40, opacity: 0 }}
            animate={{ y: 0, opacity: 1 }}
            transition={{ duration: 0.8, delay: 0.6 }}
          >
            <Space size="large">
              <Button
                type="primary"
                size="large"
                style={{ height: '56px', padding: '0 32px', fontSize: '16px' }}
              >
                Explore the Platform
              </Button>
              <Button
                size="large"
                style={{ height: '56px', padding: '0 32px', fontSize: '16px' }}
              >
                See Technical Demos
              </Button>
            </Space>
          </motion.div>
        </Space>
      </motion.div>
    </Header>
  );
};
```

## **Parallax Implementation with Ant Design**

```javascript
// components/sections/ParallaxSection.jsx
import { Layout } from 'antd';
import { motion, useScroll, useTransform } from 'framer-motion';
import { useRef } from 'react';

const { Content } = Layout;

export const ParallaxSection = ({ children }) => {
  const ref = useRef(null);
  const { scrollYProgress } = useScroll({
    target: ref,
    offset: ["start end", "end start"]
  });

  const y1 = useTransform(scrollYProgress, [0, 1], [0, -200]);
  const y2 = useTransform(scrollYProgress, [0, 1], [0, 200]);

  return (
    <Layout ref={ref} style={{ minHeight: '100vh', position: 'relative' }}>
      {/* Background Layer */}
      <motion.div
        style={{
          position: 'absolute',
          top: 0,
          left: 0,
          right: 0,
          bottom: 0,
          background: 'radial-gradient(circle at 50% 50%, #1a1a1a 0%, #000000 100%)',
          y: y1
        }}
      />

      {/* Content Layer */}
      <Content style={{ position: 'relative', zIndex: 1, padding: '120px 24px' }}>
        <motion.div style={{ y: y2 }}>
          {children}
        </motion.div>
      </Content>
    </Layout>
  );
};
```
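The section composes naturally with the components above; a usage sketch (import paths are assumptions):

```javascript
// Example composition of ParallaxSection with PerformanceCard.
import { Row, Col } from 'antd';
import { ParallaxSection } from './ParallaxSection';
import { PerformanceCard } from '../ui/PerformanceCard';

export const ModulesShowcase = () => (
  <ParallaxSection>
    <Row gutter={[32, 32]}>
      <Col xs={24} md={12}>
        <PerformanceCard
          title="WHOOSH Orchestrator"
          description="Enterprise workflow management for AI agents."
          metrics={['Task distribution', 'Real-time monitoring']}
        />
      </Col>
    </Row>
  </ParallaxSection>
);
```
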

## **App Configuration**

```javascript
// app/layout.jsx
'use client';
import { ConfigProvider } from 'antd';
import { chorusTheme } from '../theme/chorusTheme';
import 'antd/dist/reset.css'; // Use reset instead of full CSS

export default function RootLayout({ children }) {
  return (
    <html lang="en">
      <body>
        <ConfigProvider theme={chorusTheme}>
          {children}
        </ConfigProvider>
      </body>
    </html>
  );
}
```
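With the App Router, antd's CSS-in-JS styles should also be extracted during server rendering to avoid a flash of unstyled content. Ant Design publishes `@ant-design/nextjs-registry` for this; a sketch of how it would slot into the layout above (verify the package version against your antd and Next.js versions):

```javascript
// app/layout.jsx with SSR style extraction via AntdRegistry
'use client';
import { ConfigProvider } from 'antd';
import { AntdRegistry } from '@ant-design/nextjs-registry';
import { chorusTheme } from '../theme/chorusTheme';

export default function RootLayout({ children }) {
  return (
    <html lang="en">
      <body>
        <AntdRegistry>
          <ConfigProvider theme={chorusTheme}>{children}</ConfigProvider>
        </AntdRegistry>
      </body>
    </html>
  );
}
```
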

## **Performance Optimizations**

```javascript
// next.config.js
module.exports = {
  // Enable tree shaking for Ant Design
  transpilePackages: ['antd'],

  // Optimize bundle splitting
  webpack: (config) => {
    // splitChunks can be disabled (e.g. on server builds), so guard before mutating
    if (config.optimization.splitChunks && config.optimization.splitChunks.cacheGroups) {
      config.optimization.splitChunks.cacheGroups.antd = {
        name: 'antd',
        test: /[\\/]node_modules[\\/]antd[\\/]/,
        chunks: 'all',
        priority: 10,
      };
    }

    return config;
  },
};
```

## **Key Advantages of This Approach**

1. **Enterprise-Ready**: Ant Design's components are built for complex applications[8]
2. **Consistent Design Language**: Built-in design system ensures consistency[1]
3. **Performance**: Proper tree-shaking reduces bundle size by up to 80%[6]
4. **Accessibility**: Ant Design has excellent built-in accessibility features[2]
5. **Theming Power**: CSS-in-JS enables dynamic theme switching[4]
6. **Animation Integration**: Works seamlessly with Framer Motion[7]

This approach gives you all the sophisticated theming and component capabilities of Ant Design while maintaining the Apple-inspired, technology-focused aesthetic we designed. The bundle optimization ensures performance remains excellent, and the CSS-in-JS theming provides the flexibility to create your unique visual identity.

Sources
[1] Ant Design - The world's second most popular React UI framework https://ant.design
[2] Ant Design 101 – Introduction to a Design System for Enterprises https://www.uxpin.com/studio/blog/ant-design-introduction/
[3] ant design app development-en https://www.appleute.de/en/app-entwickler-bibliothek/app-development-with-ant-design/
[4] Ant Design System Dark Mode and Other Theme Customization https://betterprogramming.pub/ant-design-system-dark-mode-and-other-theme-customization-fa4ff14359a4
[5] Reduce Ant Design Bundle Size Multiple Times - Octa Labs Insights https://blog.octalabs.com/reduce-ant-design-bundle-size-multiple-times-simple-strategies-and-steps-for-smaller-bundles-66d5b7b898d3
[6] Ant Design Bundle Size Optimization: The Tree Shaking Approach ... https://dev.to/anaselbahrawy/ant-design-bundle-size-optimization-the-tree-shaking-approach-every-react-developer-should-know-2l5a
[7] How to use framer-motion with ant design - DEV Community https://dev.to/a4addel/how-to-use-framer-motion-with-ant-design-57j5
[8] 2025's Best CSS Frameworks for React: Speed, Flexibility & UI Power https://www.linkedin.com/pulse/2025s-best-css-frameworks-react-speed-flexibility-ui-power-srikanth-r-s2pjc
[9] Tailwind CSS vs Ant Design for React Component Styling - MoldStud https://moldstud.com/articles/p-comparing-tailwind-css-and-ant-design-for-react-component-styling-which-framework-wins
[10] Changelog - Ant Design https://ant.design/changelog/
[11] Tailwind CSS vs. Ant Design: Choosing the Right UI Framework for ... https://www.linkedin.com/pulse/tailwind-css-vs-ant-design-choosing-right-ui-your-project-lalwani-ckibf
[12] Customize Theme - Ant Design https://ant.design/docs/react/customize-theme/
[13] Updates - Ant Design System for Figma https://www.antforfigma.com/updates
[14] Tailwind CSS+UI or Ant Design for an enterprise CMS built ... - Reddit https://www.reddit.com/r/webdev/comments/1af9kdx/tailwind_cssui_or_ant_design_for_an_enterprise/
[15] Introduction to Customization - Ant Design System for Figma https://www.antforfigma.com/docs/customization-intro
[16] Ant Design Select: Features, Benefits & Best Practices https://www.creolestudios.com/ant-design-select-guide/
[17] Tailwind CSS vs Ant Design https://www.csshunter.com/tailwind-css-vs-ant-design/
[18] Ant Design System for Figma - UI Kit https://www.antforfigma.com
[19] Tailwind CSS vs Ant Design Comparison for React Styling | MoldStud https://moldstud.com/articles/p-tailwind-css-vs-ant-design-a-comprehensive-comparison-for-react-component-styling
[20] CSS Frameworks 2024: Tailwind vs. Others - Tailkits https://tailkits.com/blog/popular-css-frameworks/
[21] Customize theme with ConfigProvider - Ant Design Vue https://www.antdv.com/docs/vue/customize-theme
[22] Best 19 React UI Component Libraries in 2025 - Prismic https://prismic.io/blog/react-component-libraries
[23] Top 5 CSS Frameworks in 2024: Tailwind, Material-UI, Ant Design ... https://www.codingwallah.org/blog/2024-top-css-frameworks-tailwind-material-ui-ant-design-shadcn-chakra-ui
[24] Ant Design System Overview: Versions, Basics & Resources - Motiff https://motiff.com/design-system-wiki/design-systems-overview/ant-design
[25] The 10 Best Alternatives to Ant Design in 2025 - Subframe https://www.subframe.com/tips/ant-design-alternatives
[26] antd vs Tailwind CSS - compare differences and reviews? | LibHunt https://www.libhunt.com/compare-ant-design-vs-tailwindcss
[27] ant-design/cssinjs - GitHub https://github.com/ant-design/cssinjs
[28] How to use Ant Design Icons and customize them in ReactJS app https://www.youtube.com/watch?v=faUYaR4Nb1E
[29] React Bootstrap vs Ant Design: Which One is Better in 2025? https://www.subframe.com/tips/react-bootstrap-vs-ant-design
[30] How to customize Ant.design styles - Stack Overflow https://stackoverflow.com/questions/48620712/how-to-customize-ant-design-styles
[31] Reduce Ant Design Bundle Size Multiple Times - LinkedIn https://www.linkedin.com/posts/octa-labs-official_reduce-ant-design-bundle-size-multiple-times-activity-7192040213894905858-emnS
[32] How to update style element in Ant Design React component using ... https://stackoverflow.com/questions/71974731/how-to-update-style-element-in-ant-design-react-component-using-javascript
[33] Bundle size optimization #22698 - ant-design/ant-design - GitHub https://github.com/ant-design/ant-design/issues/22698
[34] CSS in v6 - Ant Design https://ant.design/docs/blog/css-tricks/
[35] Ant Design - The world's second most popular React UI framework https://ant-design.antgroup.com
[36] Does Ant Design Library size effect website performance? https://stackoverflow.com/questions/73658638/does-ant-design-library-size-effect-website-performance
[37] ant-design/antd-style: css-in-js library with antd v5 token system https://github.com/ant-design/antd-style
[38] Icon - Ant Design https://ant.design/components/icon/
[39] Optimizing Performance in React Apps Using Ant Design - Till it's done https://tillitsdone.com/blogs/react-performance-with-ant-design/
[40] Quick Start to CSS in JS - Ant Design Style https://ant-design.github.io/antd-style/guide/css-in-js-intro/
[41] Ant Design vs Shadcn: Which One is Better in 2025? - Subframe https://www.subframe.com/tips/ant-design-vs-shadcn
[42] Ant Design Theme Customization in React JS - YouTube https://www.youtube.com/watch?v=tgD-csfLNUs
[43] Build Smooth Scrolling Parallax Effects with React & Framer Motion https://www.youtube.com/watch?v=E5NK61vO_sg
[44] Are you going to continue using css-in-js? It negatively affects the ... https://github.com/ant-design/ant-design/discussions/43668
[45] Taking Control of the Browser Dark Mode with Ant Design and ... - TY https://www.tzeyiing.com/posts/taking-control-of-the-browser-dark-mode-with-ant-design-and-tailwindcss-for-dark-mode-wizardry
[46] How to Use Framer Motion for React Animations https://blog.pixelfreestudio.com/how-to-use-framer-motion-for-react-animations/
[47] Dark Mode - Ant Design https://ant.design/docs/spec/dark/
[48] Motion - Ant Design https://ant.design/docs/spec/motion/
[49] Blueprint vs Ant Design: Which One is Better in 2025? - Subframe https://www.subframe.com/tips/blueprint-vs-ant-design
[50] Is there a way to change the colour palette for light and dark themes ... https://stackoverflow.com/questions/74653488/is-there-a-way-to-change-the-colour-palette-for-light-and-dark-themes-in-ant-des
[51] Create clipped parallax with framer motion - Stack Overflow https://stackoverflow.com/questions/76777374/create-clipped-parallax-with-framer-motion
[52] @ant-design/cssinjs CDN by jsDelivr - A CDN for npm and GitHub https://www.jsdelivr.com/package/npm/@ant-design/cssinjs
[53] Build a Parallax Section Transition with React and Framer Motion https://www.youtube.com/watch?v=nZ2LDB7Q7Rk
[54] Theming - Refine dev https://refine.dev/docs/ui-integrations/ant-design/theming/
[55] 13 Awesome React Animation Libraries To Elevate Your Design ... https://magicui.design/blog/react-animation-libraries

@@ -1,175 +0,0 @@

# CHORUS Services Network Connectivity Validation Report

- **Date:** 2025-08-02
- **System:** Docker Swarm on Linux 6.12.10
- **Report Generated By:** Systems Engineer (Network Infrastructure Validation)

## Executive Summary

✅ **DEPLOYMENT STATUS: FULLY OPERATIONAL**

The CHORUS Services website deployment has been successfully validated and is performing optimally across all tested endpoints and metrics. All infrastructure components are healthy and properly configured.

## Service Health Validation

### Docker Service Status
- **Service Name:** `chorus-website_chorus-website`
- **Replicas:** 2/2 healthy and running
- **Image:** `registry.home.deepblack.cloud/tony/chorus-website:latest`
- **Memory Allocation:** 64MiB reserved, 128MiB limit per replica
- **Deployment Status:** Update completed 3 minutes ago
- **Placement Constraint:** Running on walnut node

### Resource Utilization
- **CPU:** Efficient Next.js 14.2.31 runtime
- **Memory:** Well within allocated limits
- **Startup Time:** 44-48ms average per container
- **Health Status:** All containers reporting healthy

## Network Architecture Validation

### Docker Networks
1. **tengig** (External/Public)
   - Type: Overlay network for external traffic
   - Purpose: Traefik ingress and SSL termination
   - Status: ✅ Operational

2. **chorus-website_chorus_website_network** (Internal)
   - Type: Stack-specific overlay network
   - Purpose: Internal service communication
   - Status: ✅ Operational with load balancer

### Port Configuration
- **Internal Container Port:** 80/tcp
- **External Published Port:** 3100/tcp
- **Port Mapping:** *:3100->80/tcp (ingress mode)
- **Protocol:** TCP with proper HTTP/HTTPS handling

## Connectivity Test Results

### Local Port Access (localhost:3100)
- **Status:** ✅ OPERATIONAL
- **Response Time:** 8-15ms average
- **HTTP Status:** 200 OK
- **Content:** Full Next.js application loading correctly
- **Headers:** Proper security headers present (X-Frame-Options, X-Content-Type-Options, Referrer-Policy)

### External HTTPS Access

#### Primary Domain (https://www.chorus.services)
- **Status:** ✅ OPERATIONAL
- **Response Time:** 69-76ms average
- **HTTP Protocol:** HTTP/2
- **SSL Certificate:** Valid (result code 0)
- **DNS Resolution:** 5-6ms
- **TCP Connection:** 6-7ms
- **SSL Handshake:** 43-58ms

#### Redirect Domain (https://chorus.services)
- **Status:** ✅ OPERATIONAL
- **HTTP Status:** 307 Temporary Redirect
- **Redirect Target:** https://www.chorus.services/
- **Redirect Time:** <1ms

### External IP Access (202.171.184.242:3100)
- **Status:** ⚠️ BLOCKED (Expected - firewall protection)
- **Note:** Direct IP access blocked by security configuration (normal for production)
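These timings are of the kind reported by a `curl` write-out probe; a sketch of such a command follows (the exact invocation used for this report is not recorded):

```bash
# Timing probe; the -w fields are standard curl write-out variables.
curl -o /dev/null -s \
  -w 'dns: %{time_namelookup}s  tcp: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' \
  https://www.chorus.services/
```
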
## Performance Metrics

### Response Time Analysis
| Endpoint | Min | Max | Average | Protocol |
|----------|-----|-----|---------|----------|
| localhost:3100 | 8.6ms | 14.6ms | 10.9ms | HTTP/1.1 |
| www.chorus.services | 69.2ms | 76.2ms | 73.2ms | HTTP/2 |

### Content Delivery Performance
- **Compressed Size:** 20,464 bytes (87% compression ratio)
- **Uncompressed Size:** 162,285 bytes
- **Transfer Speed:** 361KB/s (compressed), 2.5MB/s (uncompressed)
- **Compression:** Gzip enabled and working efficiently

### Caching Configuration
- **Cache-Control:** `s-maxage=31536000, stale-while-revalidate`
- **ETag:** `"cjhbuylf93h0q"` (consistent across requests)
- **Next.js Cache Status:** HIT (optimal caching performance)

## SSL/TLS Configuration

### Certificate Details
- **Certificate Authority:** Let's Encrypt
- **Certificate Resolver:** letsencryptresolver
- **SSL Verification:** ✅ Valid (result code: 0)
- **Protocol:** TLS with HTTP/2 support
- **Security Headers:** Properly configured

### Traefik Configuration
- **Routing Rule:** `Host(\`www.chorus.services\`) || Host(\`chorus.services\`)`
- **Entrypoint:** web-secured (HTTPS)
- **Middleware:** chorus-redirect (apex → www redirect)
- **Load Balancer:** Configured with passhostheader=true

## Load Balancing Assessment

### Service Discovery
- **Replicas:** 2 containers distributed across available nodes
- **Load Balancer:** Docker Swarm ingress with VIP mode
- **Health Checks:** Container-level health monitoring
- **Distribution:** Even traffic distribution confirmed

### Container Health
All containers show consistent startup patterns:
- Next.js runtime initialization: ~44-48ms
- Network binding: 0.0.0.0:80 (all interfaces)
- Ready state: Achieved within <50ms

## Security Validation

### HTTP Security Headers
```
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Referrer-Policy: strict-origin-when-cross-origin
```

### Network Security
- External IP direct access blocked (appropriate security posture)
- HTTPS-only access enforced through Traefik
- Proper certificate chain validation

## Optimization Recommendations

### Performance Optimizations
1. **Compression Ratio Excellent:** 87% compression achieved (20KB vs 162KB)
2. **Caching Strategy Optimal:** Long-term caching with stale-while-revalidate
3. **HTTP/2 Benefits:** Protocol upgrade providing multiplexing advantages

### Infrastructure Optimizations
1. **Memory Allocation:** Current 128MiB limit appropriate for Next.js workload
2. **Replica Count:** 2 replicas providing adequate redundancy for current load
3. **Health Check Timing:** Container startup time <50ms is excellent

### Monitoring Recommendations
1. **Response Time Monitoring:** Set alerts for >100ms average response time
2. **SSL Certificate Monitoring:** Monitor certificate expiration (Let's Encrypt 90-day cycle)
3. **Cache Hit Ratio:** Monitor Next.js cache performance metrics

### Minor Issues Identified
1. **Next.js Metadata Warnings:** Viewport/themeColor metadata should be moved to the viewport export (see the sketch below)
   - Impact: Minimal (development warnings only)
   - Action: Update Next.js metadata configuration
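The fix for that warning is moving the viewport and theme-color fields into Next.js's dedicated `viewport` export; a minimal sketch (field values are placeholders, and the export must live in a server component file):

```javascript
// app/layout.jsx: viewport fields moved out of the metadata export
export const viewport = {
  width: 'device-width',
  initialScale: 1,
  themeColor: '#1a1a1a', // placeholder brand color
};
```
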
## Conclusion

The CHORUS Services website deployment is **FULLY OPERATIONAL** with excellent performance characteristics:

- ✅ All critical endpoints responding correctly
- ✅ SSL certificates valid and properly configured
- ✅ Load balancing and redundancy working
- ✅ Performance within acceptable ranges (10-75ms)
- ✅ Security headers and HTTPS enforcement active
- ✅ Compression and caching optimized

The infrastructure demonstrates robust engineering with proper Docker Swarm networking, Traefik routing, and Next.js optimization. The deployment meets production-ready standards for availability, performance, and security.

**Network Infrastructure Status: VALIDATED ✅**

PROJECT_PLAN.md · 129 lines deleted

@@ -1,129 +0,0 @@

This is a revised project plan for the development team, updated with the new brand—**CHORUS Services**—the sound-based component names (WHOOSH, BZZZ, HMMM, SLURP, COOEE), and the current technology preferences (Ant Design for UI, no Tailwind). The plan maintains all prior detail and traceability while reflecting the innovation-driven focus and up-to-date tech stack.

# CHORUS Services Development Project Plan (v2, Branded)

## 1. Executive Summary

CHORUS Services is an enterprise-ready distributed AI orchestration platform designed to eliminate context loss, reduce hallucinations, and enable scalable multi-agent collaboration. All naming, interface, and UI/UX conventions have been updated to the CHORUS Services brand and tech-focused visuals, using Ant Design 5+ for the front-end foundation.

## 2. Updated Technology Stack Overview

- **Frontend/UI**: React (Next.js 13+), Ant Design 5+ (component library and theming), Framer Motion (animations/parallax), CSS-in-JS with @ant-design/cssinjs
- **Backend/API**: FastAPI (Python) for APIs; Node.js for certain orchestrator tools
- **P2P/Mesh**: libp2p (Go or Rust) for BZZZ agent network
- **Context Management**: SLURP context curator and archive (Python/FastAPI with Postgres/SQLite)
- **Distributed Reasoning**: HMMM meta-discussion/consensus service
- **Feedback/RL**: COOEE feedback service (Python/FastAPI, integrated with RL agents)
- **Workflow Engine**: WHOOSH orchestration engine (integrates natively with BZZZ, HMMM, and SLURP)
- **Audit Log**: Hypercore event log (Node.js or Rust)
- **Networking/Security**: Tailscale (P2P overlay), Cloudflare as edge/proxy
- **Automation/DevOps**: Ansible
- **Model Hosting**: Ollama (on cloud/VPS), BZZZ container integration

## 3. Feature/Naming Reference

| Subsystem | Updated Name | Role | Old Name |
|-----------|--------------|------|----------|
| Orchestrator | WHOOSH | Main workflow/control hub | Hive |
| P2P Comm. | BZZZ | Agent mesh/coordination | Bzzz |
| Reasoning | HMMM | Collaborative pre-task logic | Antennae |
| Context | SLURP | Context curation/storage | HCFS |
| Feedback | COOEE | RL feedback/tuning platform | RL Tuner |
| Log | Hypercore Log | Tamper-evident audit trail | Hypercore |

## 4. Project Modules and Feature Codes

(For traceability: **CHORUS-*** codes)

| Code | Module | Feature Name | Description |
|------|--------|--------------|-------------|
| WHOOSH.01 | WHOOSH | AgentRegistry | Register/configure agents; display real-time health/status |
| WHOOSH.02 | WHOOSH | WorkflowEditor | Visual workflow editor (Ant Design UI, React Flow integration) |
| BZZZ.01 | BZZZ | MeshNetworking | Mesh node join, peer discovery (libp2p, Tailscale) |
| BZZZ.02 | BZZZ | P2PTaskFlow | Event and coordination message system |
| HMMM.01 | HMMM | MetaDiscussion | Structured agent reasoning and consensus before action |
| SLURP.01 | SLURP | CuratorEngine | Event classification, context promotion, RL signals |
| SLURP.02 | SLURP | StorageAPI | Context archive, Postgres/SQLite integration |
| COOEE.01 | COOEE | FeedbackCollector | REST API + message bus for feedback ingestion |
| COOEE.02 | COOEE | RLPolicyAdapter | RL-based tuning of promotion/curation rules |
| LOG.01 | Hypercore Log | EventIngestion | Append-only event logging (Node.js/Rust) |
| LOG.02 | Hypercore Log | LogExplorer | UI for browsing/auditing logs (React + Ant Design table/views) |

## 5. Tech Implementation Notes

### Ant Design UI System

- Use Ant Design's dark mode theme as base, customized for CHORUS colors (#1a1a1a, #007aff, #30d158, #f2f2f7)
- All buttons, cards, tables, and forms should use the Ant Design system for accessibility and consistency
- Use Framer Motion for scroll-based/premier animations (parallax, reveals, metric counter-ups)
- Network diagrams, logs, and workflow flows: combine Ant Design components with (if needed) custom SVG/Canvas

### Backend/APIs

- New endpoint structure should use updated nomenclature: `api/chorus/agent`, `api/chorus/whoosh/`, etc.
- API docs generated via FastAPI's native OpenAPI/Swagger

### BZZZ Nodes & Tailscale

- All peer discovery and pubsub must use BZZZ and reference BZZZ in code/docs (not Bzzz or any bee analogies)
- Tailscale for securing cross-cloud/cluster comms; internal services should **never be internet-facing**

### Context/RL

- SLURP's curation logic should be configurable via Ant Design modal/settings UI (YAML upload + real-time tuning)
- COOEE's feedback features have prominent performance dashboards (Ant Progress, Trend, and Gauge components)

### Logging

- All logs and traces to be named **CHORUS Event Log** or **Audit Log** in UI/copy
- Include direct links from module UIs to related event log entries for traceability

## 6. Sprint Board / Waterfall Plan (Phased with New Names)

### Phase 1: Infra & Scaffolding
- Initial WHOOSH orchestration deployment (UI + backend structure)
- BZZZ mesh networking POC via Tailscale in local/lab
- SLURP curation engine v0 (basic event filtering)
- LOG module: event ingestion scaffolding

### Phase 2: Core Intelligence Flows
- BZZZ agent integration and mesh-wide task routing
- HMMM pre-task consensus/reasoning mechanism (with meta-discussion UI)
- COOEE feedback API and RL signal loop (basic metric view)
- SLURP role-based context curation (UI for manual tuning/testing)

### Phase 3: UX/Dashboards
- Ant Design-powered dashboards for:
  - Real-time agent status (WHOOSH.01)
  - Context curation health and history (SLURP.01)
  - Feedback analytics (COOEE.01)
  - Log explorer with filters (LOG.02)
- Public landing page: focus on what CHORUS **achieves** technologically

### Phase 4: Integration & Security
- Harden all BZZZ/WHOOSH communications with Tailscale routing
- Cloudflare Zero Trust on all public dashboards/APIs
- OAuth2/JWT integration for RBAC across the UI

### Phase 5: Productionization
- CI/CD pipelines with automated tests and deployments
- Automated backup to Backblaze/Scaleway
- Cost/performance optimization (monitoring, log rotation, model/agent scaling)

## 7. Key Team Roles

- **Frontend Developer (Ant Design + React):** UI/UX implementation, parallax and animation integration, technical dashboard builds
- **Backend/API Developer (Python/FastAPI):** REST API, orchestration logic, BZZZ bridge, SLURP curation implementation
- **DevOps/Cloud Engineer:** Ansible, Tailscale, AWS/Hetzner integration, security hardening, storage automation
- **ML/Feedback Specialist:** COOEE RL policy, feedback analysis, SLURP rule optimization, trace/data QA
- **Documentation/QA Engineer:** API/SDK docs, audit storybook, minimization of jargon, clear modular handover

## 8. Launch Checklist

- [ ] All modules rebranded in UI, code, and docs (WHOOSH, BZZZ, HMMM, SLURP, COOEE, CHORUS Event Log)
- [ ] Ant Design v5+ and theme applied across UI, with Framer Motion for animation
- [ ] Metrics dashboards highlight context retention, hallucination reduction, and cross-agent collaboration rates
- [ ] No residual bee/honeycomb/hexagon branding or component naming
- [ ] Landing page language and imagery emphasize technological innovation and enterprise readiness
- [ ] Automated security, monitoring, and backup pipelines in place

README.md · 380 lines deleted

@@ -1,380 +0,0 @@

# 🎵 CHORUS Services - Distributed AI Orchestration Platform

**CHORUS Services** eliminates context loss, reduces hallucinations, and enables scalable multi-agent collaboration through intelligent context management and distributed reasoning.

## 🚀 Quick Start

```bash
# Initialize submodules (first time only)
./chorus.sh init

# Login to Docker registry
./chorus.sh login

# Build and push images to registry
./chorus.sh build

# Start all services
./chorus.sh start

# Check service health
./chorus.sh health
```

**Access Points:**
- 🌐 **Marketing Website**: https://www.chorus.services (production)
- 🎛️ **Dashboard**: https://dashboard.chorus.services (production)
- 📡 **API**: https://api.chorus.services (production)
- 📊 **Grafana Monitoring**: http://localhost:3002 (admin/chorusadmin)
- 🔍 **Prometheus Metrics**: http://localhost:9092

**Local Development:**
- Dashboard: http://localhost:3001
- API Docs: http://localhost:8087/docs

## 🏗️ Architecture Overview

CHORUS Services integrates five core components into a unified platform:

```
┌─────────────────────────────────────────────────────────────────┐
│                    CHORUS Services Platform                     │
├─────────────────────────────────────────────────────────────────┤
│  🎛️ WHOOSH Orchestrator      │  📊 Monitoring & Analytics       │
│  - Workflow Management       │  - Prometheus Metrics            │
│  - Agent Coordination        │  - Grafana Dashboards            │
│  - Task Distribution         │  - Real-time Health Monitoring   │
├─────────────────────────────────────────────────────────────────┤
│  🐝 BZZZ P2P Network         │  🧠 SLURP Context Management     │
│  - Agent Mesh Networking     │  - Hierarchical Context Storage  │
│  - Peer Discovery            │  - Semantic Search & Indexing    │
│  - Distributed Coordination  │  - Multi-language SDK Support    │
├─────────────────────────────────────────────────────────────────┤
│  🎯 COOEE Feedback System (RL Context SLURP Integration)        │
│  - Performance-based Learning  │  - Context Relevance Tuning    │
│  - Agent Feedback Collection   │  - Role-based Access Control   │
└─────────────────────────────────────────────────────────────────┘
```

## 🧩 Core Components

### 🎛️ WHOOSH - Orchestration Engine
- **Enterprise workflow management** for AI agents
- **Visual workflow editor** with React Flow
- **Real-time performance monitoring** and metrics
- **Multi-agent task distribution** and coordination

### 🐝 BZZZ - P2P Agent Coordination
- **Mesh networking** with libp2p for resilient communication
- **Automatic peer discovery** via mDNS
- **Distributed task coordination** without single points of failure
- **Go-based** high-performance networking layer

### 🧠 SLURP - Context Curator Service
- **Context curation** from Hypercore logs based on agent roles and triggers
- **Role-based context filtering** for permissions, deprecation, feature changes
- **SQL-based context delivery** with intelligent relevance scoring
- **Integration with HCFS** for transparent filesystem-based context access

### 🎯 COOEE - Feedback & Learning (RL Context SLURP)
- **Reinforcement learning** for context relevance tuning
- **Agent feedback collection** with upvote/downvote systems
- **Role-based context filtering** and access control
- **Continuous improvement** through real-world performance data
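As an illustration of the feedback flow, an agent might report context relevance with a single HTTP call. The endpoint path and payload shape below are hypothetical, not the published COOEE API; only the port matches the service table later in this README:

```javascript
// Hypothetical COOEE feedback submission; verify against the real API.
const feedback = {
  agentId: 'agent-42',
  contextId: 'ctx-123',
  vote: 'up',             // upvote/downvote on context relevance
  taskOutcome: 'success',
};

fetch('http://localhost:8089/api/feedback', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(feedback),
}).then((res) => console.log('COOEE response status:', res.status));
```
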
### 📊 Monitoring & Analytics
- **Prometheus metrics collection** across all services
- **Grafana dashboards** for visualization and alerting
- **Health checks** and performance monitoring
- **Audit trails** with complete traceability

## 🛠️ Management Commands

The `./chorus.sh` script provides unified management:

```bash
# Service Management
./chorus.sh start         # Start all services (registry images)
./chorus.sh stop          # Stop all services
./chorus.sh restart       # Restart all services
./chorus.sh status        # Show service status
./chorus.sh dev           # Start in development mode (local builds)

# Docker Registry Operations
./chorus.sh login         # Login to Docker registry
./chorus.sh build         # Build and push all images to registry
./chorus.sh pull          # Pull latest images from registry

# Individual Component Builds
./build-and-push.sh website          # Build and push website only
./build-and-push.sh whoosh-backend   # Build and push WHOOSH backend only
./build-and-push.sh bzzz             # Build and push BZZZ coordinator only

# Production Deployment
./chorus.sh deploy        # Deploy to Docker Swarm (production)
./chorus.sh undeploy      # Remove from Docker Swarm

# Development & Maintenance
./chorus.sh update            # Update submodules to latest
./chorus.sh logs [service]    # View logs
./chorus.sh health            # Check service health
./chorus.sh clean             # Clean up resources

# First-time Setup
./chorus.sh init          # Initialize git submodules
```

## 🌐 Service Endpoints

| Service | Port | Purpose | Health Check |
|---------|------|---------|--------------|
| **WHOOSH Dashboard** | 3001 | Web UI for orchestration | http://localhost:3001 |
| **WHOOSH API** | 8087 | REST API + WebSocket | http://localhost:8087/health |
| **BZZZ Coordinator** | 8080 | P2P coordination API | http://localhost:8080/health |
| **SLURP API** | 8088 | Context management API | http://localhost:8088/health |
| **COOEE RL Tuner** | 8089 | Feedback & learning API | http://localhost:8089/health |
| **Grafana** | 3002 | Monitoring dashboards | http://localhost:3002 |
| **Prometheus** | 9092 | Metrics collection | http://localhost:9092 |
| **PostgreSQL** | 5433 | Database | - |
| **Redis** | 6380 | Cache & message queue | - |

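For scripted checks outside `./chorus.sh health`, the HTTP health endpoints above can be polled directly. A minimal TypeScript sketch (the endpoint list mirrors the table; hosts and error handling are illustrative, not part of the platform):

```typescript
// check-health.ts — poll the CHORUS health endpoints listed above.
// Runnable with Node 18+ (global fetch); adjust hosts for your deployment.
const endpoints: Record<string, string> = {
  whooshApi: 'http://localhost:8087/health',
  bzzz: 'http://localhost:8080/health',
  slurp: 'http://localhost:8088/health',
  cooee: 'http://localhost:8089/health',
};

async function checkAll(): Promise<void> {
  for (const [name, url] of Object.entries(endpoints)) {
    try {
      const res = await fetch(url);
      const status = res.ok ? 'healthy' : `unhealthy (${res.status})`;
      console.log(`${name}: ${status}`);
    } catch {
      console.log(`${name}: unreachable`);
    }
  }
}

checkAll();
```
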
## 📁 Project Structure

```
chorus.services/
├── 📋 README.md             # This file
├── 🎛️ chorus.sh             # Management script
├── 🐳 docker-compose.yml    # Service orchestration
├── 🗄️ init-db.sql           # Database initialization
├── 📊 monitoring/           # Monitoring configuration
│   ├── prometheus.yml
│   └── grafana/
├── 📚 docs/                 # Project documentation
│   ├── PROJECT_PLAN.md
│   ├── DESIGN.md
│   └── Copywriting.md
└── 🧩 modules/              # Git submodules
    ├── whoosh/              # Orchestration platform
    ├── bzzz/                # P2P coordination
    ├── slurp/               # Context management (HCFS)
    └── website/             # Marketing website (Next.js)
```

## ⚙️ Configuration

### Environment Variables
Key configuration options in `docker-compose.yml`:

- **Database**: `DATABASE_URL`, `POSTGRES_*`
- **Redis**: `REDIS_URL`
- **CORS**: `CORS_ORIGINS`
- **Logging**: `LOG_LEVEL`
- **Environment**: `ENVIRONMENT` (development/production)

### Adding New Agents
Edit agent configurations in `modules/whoosh/config/whoosh.yaml`:

```yaml
whoosh:
  agents:
    my_agent:
      name: "My Agent"
      endpoint: "http://192.168.1.100:11434"
      model: "llama3.1"
      specialization: "coding"
      capabilities: ["python", "javascript"]
```

## 🔧 Development Setup

### Prerequisites
- Docker and Docker Compose
- Git
- 8GB+ RAM recommended
- Access to Ollama agents on the network

### First-Time Setup
```bash
# Clone with submodules
git clone --recursive <repository-url>
cd chorus.services

# Or initialize submodules if already cloned
./chorus.sh init

# Login to Docker registry
./chorus.sh login

# Build and push all images to registry
./chorus.sh build

# Start services
./chorus.sh start
```

### Development Workflow
```bash
# Development mode (local builds, live reloading)
./chorus.sh dev

# Update submodules to latest
./chorus.sh update

# Rebuild and push after changes
./chorus.sh build

# Pull latest images from registry
./chorus.sh pull

# View logs during development
./chorus.sh logs

# Check service health
./chorus.sh health

# Restart after changes
./chorus.sh restart
```

### Production Deployment
```bash
# Deploy to Docker Swarm (production)
./chorus.sh deploy

# Access at https://*.home.deepblack.cloud endpoints

# Remove from swarm
./chorus.sh undeploy
```

## 🚀 Git Submodules Guide

CHORUS Services uses git submodules to integrate independent components:

### Basic Submodule Commands
```bash
# Initialize submodules (first time)
git submodule init
git submodule update

# Update to latest commits
git submodule update --remote

# Check submodule status
git submodule status

# Enter a submodule to work on it
cd modules/whoosh
git checkout main
# Make changes, commit, push

# Return to main project and commit submodule updates
cd ../..
git add modules/whoosh
git commit -m "Update whoosh submodule"
```

### Working with Submodules
- **Each submodule** is an independent git repository
- **Changes within submodules** must be committed in the submodule first
- **Parent project** tracks specific commits of each submodule
- **Use `./chorus.sh update`** to pull latest changes from all submodules

## 📊 Monitoring & Metrics

### Key Metrics Tracked
- **Agent Performance**: Response time, throughput, availability
- **Context Management**: Search performance, storage efficiency
- **P2P Network**: Peer connectivity, message latency
- **System Health**: CPU, memory, GPU utilization
- **Workflow Execution**: Success rate, execution time

### Grafana Dashboards
- **CHORUS Overview**: Platform-wide health and metrics
- **Agent Performance**: Individual agent monitoring
- **Context Analytics**: SLURP usage and performance
- **Network Health**: BZZZ P2P network monitoring

## 🛡️ Security Features

- **Authentication**: JWT tokens and API key support
- **Role-based Access**: Context filtering by agent roles
- **Audit Trails**: Complete logging of all operations
- **Network Security**: Internal container networking
- **Data Privacy**: Encrypted context storage

## 🤝 Contributing

### Component Development
Each component can be developed independently:

```bash
# Work on WHOOSH orchestrator
cd modules/whoosh
# Follow component-specific development guide

# Work on BZZZ P2P system
cd modules/bzzz
# Follow Go development practices

# Work on SLURP context system
cd modules/slurp
# Follow Python development practices
```

### Integration Testing
```bash
# Test full platform integration
./chorus.sh start
./chorus.sh health

# Run component-specific tests
cd modules/[component]
# Follow component test procedures
```

## 📈 Performance Metrics

**Production-Ready Performance:**
- **API Response Times**: <5ms cached, <50ms uncached
- **Context Search**: <100ms semantic search across 1000+ contexts
- **P2P Network**: <10ms peer communication latency
- **Workflow Execution**: Support for complex multi-agent workflows
- **Concurrent Agents**: Scales to 10+ simultaneous agents

## 🎯 Use Cases

### Enterprise AI Development
- **Multi-agent software development** with specialized AI agents
- **Context-aware code generation** with organizational knowledge
- **Distributed task execution** across development infrastructure

### Research & Collaboration
- **AI agent coordination research** with real-world deployment
- **Context management studies** with hierarchical storage
- **Distributed systems research** with P2P networking

### Production AI Systems
- **Enterprise AI orchestration** with monitoring and compliance
- **Context-aware AI applications** with persistent memory
- **Scalable multi-agent deployments** with automatic coordination

## 📞 Support & Documentation

- **🛠️ Management**: `./chorus.sh` for all operations
- **📋 Component Docs**: See individual `modules/*/README.md`
- **🔧 API Documentation**: http://localhost:8087/docs (when running)
- **📊 Monitoring**: http://localhost:3002 (Grafana dashboards)

---

## 🎉 Welcome to CHORUS Services!

**CHORUS Services represents the future of distributed AI orchestration**, providing enterprise-ready tools for context management, agent coordination, and intelligent workflow execution.

🎵 *"Individual components make music, but CHORUS Services creates a symphony."*

**Ready to orchestrate your AI agents?**
```bash
./chorus.sh start
```

@@ -1,619 +0,0 @@

# CHORUS Services Website - Comprehensive UX Design Strategy

## Executive Summary

This document provides a complete UX design strategy for the CHORUS Services marketing website, focusing on creating an enterprise-grade experience that showcases distributed AI orchestration capabilities while remaining accessible to diverse technical audiences. The strategy emphasizes Apple-inspired aesthetics, dark theme design, and sophisticated animations to demonstrate platform capabilities.

## 1. User Journey Mapping

### 1.1 Primary User Personas

#### Persona 1: Technical Decision Maker (CTO/VP Engineering)
**Profile**:
- 10+ years experience, enterprise environment
- Evaluates technical architecture and scalability
- Needs: ROI justification, technical depth, security assurance

**Journey Map**:
```
Entry Point → Technical Overview → Architecture Deep-dive → Performance Metrics → Security/Compliance → Demo Request → Investment Justification
```

**Key Touchpoints**:
1. **Landing (Homepage)** - Immediate credibility through metrics
2. **Platform Overview** - Technical architecture understanding
3. **Performance Data** - Quantitative validation
4. **Security Section** - Compliance and audit trail features
5. **Technical Demo** - Hands-on experience
6. **Business Case** - ROI and implementation timeline

#### Persona 2: AI Research Lead/Principal Engineer
**Profile**:
- PhD/MS in AI/ML, 5+ years industry experience
- Focuses on technical innovation and research applications
- Needs: Technical specifications, API documentation, research validation

**Journey Map**:
```
Entry Point → Component Deep-dive → Technical Specifications → API Documentation → Research Papers → Community/Support → Trial Access
```

**Key Touchpoints**:
1. **Landing** - Technical sophistication signals
2. **Component Pages** - WHOOSH, BZZZ, SLURP, COOEE details
3. **Technical Specs** - Performance benchmarks and comparisons
4. **Documentation** - API references and integration guides
5. **Research Section** - White papers and case studies
6. **Developer Portal** - SDK access and community

#### Persona 3: Business Stakeholder/Executive
**Profile**:
- C-level or VP, business-focused with technical awareness
- Evaluates business impact and competitive advantage
- Needs: Business outcomes, competitive positioning, implementation support

**Journey Map**:
```
Entry Point → Business Value Proposition → Use Cases/Scenarios → Success Stories → Pricing/Support → Enterprise Consultation
```

**Key Touchpoints**:
1. **Landing** - Clear value proposition
2. **Business Benefits** - ROI and efficiency gains
3. **Use Cases** - Real-world applications
4. **Customer Stories** - Social proof and validation
5. **Enterprise Features** - Support and service levels
6. **Contact Sales** - Consultation and custom deployment

### 1.2 Cross-Persona Journey Optimization

**Shared Critical Moments**:
- **First 10 seconds**: Establish credibility and relevance
- **Platform Understanding**: Clear mental model of CHORUS capabilities
- **Trust Building**: Technical depth + business validation
- **Action Decision**: Clear next steps for engagement

## 2. Information Architecture & Page Hierarchy

### 2.1 Site Structure

```
CHORUS Services Website
├── Homepage (/)
│   ├── Hero Section - Platform introduction
│   ├── Platform Overview - 5 core components
│   ├── Performance Metrics - Key statistics
│   ├── Value Proposition - Business benefits
│   └── CTA Section - Primary actions
├── Platform (/platform)
│   ├── Architecture Overview
│   ├── Component Interaction Diagram
│   ├── Performance Benchmarks
│   └── Technical Specifications
├── Components (/components)
│   ├── WHOOSH Orchestrator (/components/whoosh)
│   ├── BZZZ P2P Network (/components/bzzz)
│   ├── SLURP Context Manager (/components/slurp)
│   ├── COOEE Feedback System (/components/cooee)
│   └── Monitoring Dashboard (/components/monitoring)
├── Solutions (/solutions)
│   ├── Enterprise AI Deployment
│   ├── Multi-Agent Coordination
│   ├── Context Management
│   └── Continuous Learning
├── Use Cases (/use-cases)
│   ├── Scenario Demonstrations
│   ├── Industry Applications
│   ├── Technical Workflows
│   └── ROI Calculators
├── Documentation (/docs)
│   ├── Getting Started
│   ├── API Reference
│   ├── SDK Documentation
│   ├── Integration Guides
│   └── Best Practices
├── Resources (/resources)
│   ├── White Papers
│   ├── Case Studies
│   ├── Technical Blog
│   └── Research Publications
├── About (/about)
│   ├── Company Story
│   ├── Team & Expertise
│   ├── Mission & Values
│   └── Contact Information
└── Enterprise (/enterprise)
    ├── Custom Deployment
    ├── Professional Services
    ├── Support & SLA
    └── Pricing & Packages
```

### 2.2 Navigation Strategy

#### Primary Navigation
- **Homepage**: Platform introduction and overview
- **Platform**: Technical architecture and capabilities
- **Components**: Individual module deep-dives
- **Solutions**: Use case focused content
- **Resources**: Educational and technical content
- **Enterprise**: Business-focused engagement

#### Secondary Navigation
- **Documentation**: Always accessible via persistent header link
- **Demo Request**: Prominent CTA across all pages
- **Contact**: Multiple touchpoints (header, footer, floating)
- **Investor Relations**: Discrete but accessible

#### Mobile Navigation
- Hamburger menu with full-screen overlay
- Primary actions remain visible (Demo, Contact)
- Simplified hierarchy with key sections
- Search functionality for documentation

## 3. Content Strategy

### 3.1 Homepage Content Framework

#### Hero Section (Above Fold)
**Primary Message**: "Distributed AI Orchestration Without the Hallucinations"
**Supporting Copy**: "Enterprise-ready platform that eliminates context loss, reduces hallucinations, and enables true multi-agent coordination through intelligent context management and distributed reasoning."

**Content Hierarchy**:
1. **Attention Hook** (3 seconds): "CHORUS Services" with animated subtitle
2. **Value Proposition** (7 seconds): Clear problem/solution statement
3. **Credibility Signals** (10 seconds): Performance metrics or customer logos
4. **Action Options** (15 seconds): Multiple engagement paths

#### Platform Overview Section
**Content Structure**:
- **Visual Architecture Diagram**: Interactive component relationships
- **5 Core Components**: Brief descriptions with animation triggers
- **Key Benefits**: Context preservation, hallucination reduction, coordination
- **Technical Validation**: Performance metrics and benchmarks

#### Metrics & Validation Section
**Content Elements**:
- **Performance Statistics**: 92% context retention, 78% hallucination reduction
- **Efficiency Gains**: 34% faster completion, 71% less intervention
- **Technical Metrics**: Response times, accuracy rates, uptime statistics
- **Customer Validation**: Usage statistics and growth metrics

### 3.2 Component Page Content Strategy

#### WHOOSH Orchestrator (/components/whoosh)
**Content Framework**:
- **Capability Overview**: Enterprise workflow management for AI agents
- **Technical Specifications**: Task distribution, dependency management, monitoring
- **Use Case Examples**: Complex project coordination, resource allocation
- **Performance Data**: Throughput, latency, scalability metrics
- **Integration Guide**: API endpoints, SDK examples, deployment options

#### BZZZ P2P Network (/components/bzzz)
**Content Framework**:
- **Architecture Explanation**: Peer-to-peer coordination without single points of failure
- **Resilience Features**: Automatic failover, distributed consensus, network healing
- **Communication Protocols**: Message passing, state synchronization, conflict resolution
- **Security Model**: Encryption, authentication, audit trails
- **Deployment Scenarios**: Multi-region, hybrid cloud, edge computing

#### SLURP Context Manager (/components/slurp)
**Content Framework**:
- **Context Intelligence**: Automatic relevance detection, organizational memory
- **Learning Mechanisms**: Feedback integration, importance weighting, decay models
- **Storage Architecture**: Distributed storage, query optimization, data consistency
- **Privacy & Security**: Data encryption, access controls, retention policies
- **Integration Examples**: CRM systems, documentation, communication platforms

#### COOEE Feedback System (/components/cooee)
**Content Framework**:
- **Continuous Learning**: Real-world performance feedback loops
- **Feedback Mechanisms**: Human input, system metrics, outcome tracking
- **Adaptation Algorithms**: Model updates, parameter tuning, behavior modification
- **Quality Assurance**: Validation frameworks, testing protocols, error detection
- **Reporting Dashboard**: Performance trends, improvement metrics, alert systems

### 3.3 Technical Documentation Strategy

#### Getting Started Guide
**Progressive Disclosure Structure**:
1. **Quick Start** (5 minutes): Basic setup and first API call (sketched below)
2. **Core Concepts** (15 minutes): Architecture understanding
3. **First Integration** (30 minutes): Simple use case implementation
4. **Advanced Features** (60 minutes): Full platform capabilities

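To ground the Quick Start step, a first API call might look like the following sketch. The endpoint path, port, and response shape are illustrative assumptions for this document, not a published contract:

```typescript
// quickstart.ts — illustrative first call against the SLURP context API.
// The /api/context path and the ContextHit shape are assumptions.
interface ContextHit {
  id: string;
  score: number;   // relevance score assigned by SLURP
  content: string;
}

async function searchContext(query: string): Promise<ContextHit[]> {
  const res = await fetch(
    'http://localhost:8088/api/context?q=' + encodeURIComponent(query),
  );
  if (!res.ok) throw new Error(`SLURP returned ${res.status}`);
  return res.json();
}

searchContext('deployment checklist').then((hits) =>
  hits.forEach((h) => console.log(h.score.toFixed(2), h.id)),
);
```
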
#### API Documentation
**Organization Principles**:
- **Resource-based grouping**: Organized by component (WHOOSH, BZZZ, etc.)
- **Method-based navigation**: CRUD operations clearly categorized
- **Interactive examples**: Live API testing capabilities
- **Error handling**: Comprehensive error codes and resolution guides

## 4. UI/UX Wireframes for Key Pages

### 4.1 Homepage Wireframe Specifications

#### Desktop Layout (1440px width)
```
Header (72px height)
├── Logo (left aligned)
├── Navigation Menu (center)
└── CTA Buttons (right aligned)

Hero Section (100vh height)
├── Background: Animated gradient with subtle particles
├── Main Title: "CHORUS Services" (84px, gradient text)
├── Subtitle: Value proposition (36px, secondary color)
├── Action Buttons: Primary CTA + Secondary option
└── Scroll Indicator: Animated down arrow

Platform Overview (auto height)
├── Section Title: "Context-Aware AI Coordination"
├── Interactive Diagram: 5 component visualization
├── Component Cards: Hover effects with metrics
└── Technical Validation: Performance statistics

Metrics Section (60vh height)
├── Background: Parallax effect
├── Animated Counters: Key performance numbers
├── Comparison Charts: Before/after visualizations
└── Customer Logos: Social proof element

CTA Section (40vh height)
├── Centered messaging: Next steps clarity
├── Multiple pathways: Demo, Documentation, Contact
└── Footer transition: Smooth visual flow
```

#### Mobile Layout (375px width)
```
Header (64px height)
├── Logo (left)
└── Hamburger Menu (right)

Hero Section (100vh height)
├── Stacked content: Title, subtitle, buttons
├── Reduced text sizes: 48px title, 18px subtitle
├── Single column: Simplified hierarchy
└── Touch-optimized CTAs: 44px minimum height

Platform Overview (auto height)
├── Vertical card stack: Mobile-optimized layout
├── Simplified diagram: Touch-friendly interactions
└── Swipeable components: Horizontal carousel

Metrics Section (auto height)
├── Stacked counters: Vertical layout
├── Simplified animations: Performance optimized
└── Reduced parallax: Motion-sensitive users
```

### 4.2 Component Page Wireframe (WHOOSH Example)

#### Page Structure
```
Breadcrumb Navigation
└── Home > Components > WHOOSH Orchestrator

Component Hero (50vh height)
├── Component Icon: Large, animated
├── Component Name: "WHOOSH Orchestrator"
├── Tagline: "Enterprise workflow management for AI agents"
├── Quick Stats: Performance metrics
└── Action Links: Try Demo, View Docs

Technical Overview (auto height)
├── Architecture Diagram: Interactive component map
├── Feature Grid: 3-column layout of capabilities
├── Code Examples: Syntax-highlighted snippets
└── Integration Points: API connection examples

Performance Section (60vh height)
├── Metrics Dashboard: Real-time statistics
├── Benchmark Comparisons: Competitive analysis
├── Scalability Charts: Performance under load
└── Case Study Preview: Customer success story

Documentation Navigation (auto height)
├── Quick Links: API, SDK, Examples
├── Tutorial Path: Guided learning journey
├── Support Resources: Community, tickets
└── Related Components: Cross-references
```

### 4.3 Technical Specifications Page Wireframe

#### Layout Structure
```
Technical Header
├── Page Title: "Technical Specifications"
├── Last Updated: Version and date information
├── Download Options: PDF, print-friendly
└── Share Tools: Link copying, bookmarking

Specification Sections
├── System Requirements
│   ├── Hardware specifications
│   ├── Software dependencies
│   └── Network requirements
├── Performance Benchmarks
│   ├── Throughput measurements
│   ├── Latency statistics
│   └── Scalability limits
├── API Specifications
│   ├── Endpoint documentation
│   ├── Authentication methods
│   └── Rate limiting policies
├── Security & Compliance
│   ├── Encryption standards
│   ├── Audit capabilities
│   └── Compliance certifications
└── Deployment Options
    ├── Cloud configurations
    ├── On-premises setup
    └── Hybrid architectures

Interactive Elements
├── Collapsible sections: Detailed specifications
├── Code examples: Copy-to-clipboard functionality
├── Comparison tables: Feature matrix
└── Search functionality: Quick specification lookup
```

## 5. Accessibility and Usability Guidelines

### 5.1 WCAG 2.1 AA Compliance Requirements

#### Visual Accessibility
- **Color Contrast**: Minimum 4.5:1 ratio for normal text, 3:1 for large text
- **Text Scaling**: Supports 200% zoom without horizontal scrolling
- **Focus Indicators**: Visible focus rings for keyboard navigation
- **Color Independence**: Information not conveyed by color alone

#### Motor Accessibility
- **Keyboard Navigation**: Full site functionality without a mouse
- **Touch Targets**: Minimum 44px for mobile interactions
- **Hover Independence**: All hover information accessible via focus
- **Motion Control**: Ability to disable parallax and animations (see the sketch below)

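The motion-control requirement maps directly onto Framer Motion's `useReducedMotion` hook. A minimal sketch (the wrapper component is illustrative, not an existing project file):

```typescript
'use client';

import type { ReactNode } from 'react';
import { motion, useReducedMotion } from 'framer-motion';

// Illustrative wrapper: skip the entrance animation entirely when the
// visitor has "reduce motion" enabled at the OS level.
export const MotionSafeSection = ({ children }: { children: ReactNode }) => {
  const prefersReducedMotion = useReducedMotion();

  return (
    <motion.section
      // `initial={false}` renders directly in the final state — no motion.
      initial={prefersReducedMotion ? false : { opacity: 0, y: 40 }}
      whileInView={{ opacity: 1, y: 0 }}
      viewport={{ once: true }}
    >
      {children}
    </motion.section>
  );
};
```
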
#### Cognitive Accessibility
- **Clear Language**: Technical concepts explained in accessible terms
- **Consistent Navigation**: Predictable interface patterns
- **Error Prevention**: Form validation with clear guidance
- **Progress Indicators**: Clear feedback for multi-step processes

### 5.2 Usability Testing Framework

#### Testing Methodology
1. **User Types**: Representative samples from each persona
2. **Task Scenarios**: Realistic goal-oriented interactions
3. **Success Metrics**: Task completion, time-to-completion, error rates
4. **Accessibility Testing**: Screen reader compatibility, keyboard navigation

#### Key Testing Scenarios
- **Discovery Journey**: Finding relevant technical information
- **Comparison Tasks**: Evaluating CHORUS vs. alternatives
- **Integration Planning**: Understanding implementation requirements
- **Contact/Demo Process**: Smooth conversion funnel experience

#### Iterative Improvement Process
- **Weekly Analytics Review**: User behavior pattern analysis
- **Monthly Usability Testing**: Rotating focus on different personas
- **Quarterly Accessibility Audit**: Comprehensive compliance review
- **Continuous A/B Testing**: Conversion optimization experiments

## 6. Mobile-First Responsive Design Strategy

### 6.1 Breakpoint Strategy

#### Mobile-First Approach
```css
/* Base styles: Mobile (320px - 767px) */
.container {
  padding: 16px;
  max-width: 100%;
}

/* Tablet (768px - 1023px) */
@media (min-width: 768px) {
  .container {
    padding: 24px;
    max-width: 750px;
  }
}

/* Desktop (1024px - 1439px) */
@media (min-width: 1024px) {
  .container {
    padding: 32px;
    max-width: 1200px;
  }
}

/* Large Desktop (1440px+) */
@media (min-width: 1440px) {
  .container {
    padding: 40px;
    max-width: 1400px;
  }
}
```

### 6.2 Component Responsiveness

#### Navigation Adaptation
- **Mobile**: Hamburger menu with full-screen overlay
- **Tablet**: Horizontal navigation with dropdown menus
- **Desktop**: Full horizontal navigation with mega-menu options

#### Content Layout Adaptation
- **Mobile**: Single column, stacked content
- **Tablet**: Two-column layout for appropriate content
- **Desktop**: Three-column grid with sidebar navigation

#### Interactive Element Scaling
- **Touch Targets**: 44px minimum on mobile, scalable on desktop
- **Typography**: Fluid scaling using CSS clamp()
- **Images**: Responsive with appropriate aspect ratios maintained

### 6.3 Performance Optimization

#### Mobile Performance Priorities
1. **Critical Path Optimization**: Above-fold content prioritized
2. **Image Optimization**: WebP format with fallbacks, lazy loading
3. **JavaScript Bundling**: Code splitting for mobile-specific features
4. **Network Awareness**: Reduced animations on slow connections

## 7. Call-to-Action Placement and Conversion Optimization

### 7.1 CTA Hierarchy and Placement

#### Primary CTAs (High Conversion Intent)
- **"Request Demo"**: Most prominent placement, enterprise-focused
- **"View Documentation"**: Technical audience primary action
- **"Contact Sales"**: Business decision maker pathway

#### Secondary CTAs (Engagement Building)
- **"Explore Platform"**: Discovery-oriented navigation
- **"Download Whitepaper"**: Lead generation for nurturing
- **"Join Community"**: Developer engagement and support

#### Tertiary CTAs (Information Seeking)
- **"Learn More"**: Deeper content exploration
- **"See Examples"**: Technical implementation guidance
- **"Compare Solutions"**: Competitive analysis tools

### 7.2 Strategic CTA Placement

#### Homepage CTA Strategy
```
Hero Section: Primary CTA (Request Demo) + Secondary (Explore Platform)
Platform Overview: Component-specific CTAs (Learn More, Try Demo)
Metrics Section: Validation-driven CTA (See Case Studies)
Final CTA Section: Choice architecture (Demo, Docs, Contact)
```

#### Component Page CTA Strategy
```
Component Hero: Try Component Demo + View Documentation
Technical Section: API Reference + Integration Guide
Performance Section: Request Benchmark Report
Bottom Section: Related Components + Contact Expert
```

### 7.3 Conversion Funnel Optimization

#### Multi-Step Engagement Strategy
1. **Awareness**: Technical content consumption, whitepaper downloads
2. **Interest**: Demo requests, documentation access, API exploration
3. **Consideration**: Sales consultations, custom deployment discussions
4. **Decision**: Pilot program enrollment, contract negotiations

#### Conversion Rate Optimization Tactics
- **Social Proof**: Customer logos, case studies, testimonials
- **Risk Reduction**: Free trials, money-back guarantees, flexible contracts
- **Urgency Creation**: Limited-time offers, early access programs
- **Personalization**: Dynamic content based on user behavior and persona

## 8. Implementation Guidelines

### 8.1 Development Handoff Specifications

#### Design Token System
```typescript
// Design tokens for consistent implementation
export const designTokens = {
  colors: {
    primary: {
      blue: '#007aff',
      green: '#30d158',
      amber: '#ff9f0a',
      red: '#ff453a'
    },
    neutral: {
      black: '#000000',
      charcoal: '#1a1a1a',
      gray: '#2d2d30',
      lightGray: '#a1a1a6',
      white: '#f2f2f7'
    }
  },
  typography: {
    fonts: {
      primary: '-apple-system, BlinkMacSystemFont, "SF Pro Text", "Inter", sans-serif',
      mono: '"SF Mono", "Monaco", "Inconsolata", monospace'
    },
    sizes: {
      hero: 'clamp(48px, 8vw, 84px)',
      h1: 'clamp(32px, 5vw, 48px)',
      h2: 'clamp(24px, 4vw, 36px)',
      body: '16px',
      small: '14px'
    }
  },
  spacing: {
    xs: '8px',
    sm: '16px',
    md: '24px',
    lg: '32px',
    xl: '48px',
    xxl: '64px'
  },
  animations: {
    fast: '0.1s',
    normal: '0.2s',
    slow: '0.3s',
    easings: {
      smooth: 'cubic-bezier(0.4, 0, 0.2, 1)',
      bounce: 'cubic-bezier(0.68, -0.55, 0.265, 1.55)'
    }
  }
};
```
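
One way to consume these tokens is through shared style objects so components never hard-code values. A minimal sketch (the module path is an assumption; the tokens themselves are from the block above):

```typescript
import type { CSSProperties } from 'react';
import { designTokens } from '@/lib/theme/designTokens'; // path assumed

// Illustrative: read from tokens instead of literals, so a palette or
// timing change propagates without touching individual components.
export const metricCardStyle: CSSProperties = {
  background: designTokens.colors.neutral.charcoal,
  color: designTokens.colors.neutral.white,
  padding: designTokens.spacing.lg,
  borderRadius: 12,
  transition: `transform ${designTokens.animations.normal} ${designTokens.animations.easings.smooth}`,
};
```
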
#### Component Specifications
- **Spacing**: Consistent 8px grid system
- **Typography**: Responsive scaling using CSS clamp()
- **Colors**: CSS custom properties for theme consistency
- **Animations**: Framer Motion variants for consistent behavior
- **Interactions**: Hover states, focus management, loading states

### 8.2 Quality Assurance Checklist

#### Visual Design Compliance
- [ ] Color contrast meets WCAG AA standards
- [ ] Typography scales appropriately across devices
- [ ] Interactive elements have clear hover/focus states
- [ ] Brand consistency maintained throughout

#### Functional Requirements
- [ ] All CTAs lead to appropriate destinations
- [ ] Forms include proper validation and error handling
- [ ] Navigation works consistently across all pages
- [ ] Search functionality returns relevant results

#### Performance Standards
- [ ] Core Web Vitals meet target thresholds
- [ ] Bundle size optimized for fast loading
- [ ] Images properly optimized and lazy-loaded
- [ ] Accessibility features function correctly

### 8.3 Analytics and Monitoring Setup

#### Key Performance Indicators
- **Technical Metrics**: Core Web Vitals, bundle size, load times
- **User Engagement**: Time on site, pages per session, scroll depth
- **Conversion Metrics**: Demo requests, documentation access, contact form submissions
- **Content Performance**: Most viewed pages, exit rates, search queries

#### A/B Testing Framework
- **CTA Optimization**: Button text, placement, color variations
- **Content Testing**: Headlines, value propositions, technical depth
- **Layout Experiments**: Navigation structures, information hierarchy
- **Personalization**: Content adaptation based on user behavior

## Conclusion

This comprehensive UX design strategy provides a roadmap for creating an enterprise-grade marketing website that effectively showcases CHORUS Services' technical capabilities while maintaining excellent user experience across all target audiences. The strategy emphasizes accessibility, performance, and conversion optimization while staying true to the Apple-inspired aesthetic and technical sophistication of the platform.

The phased implementation approach allows for iterative improvement based on user feedback and performance data, ensuring the website evolves to meet changing user needs and business objectives.

@@ -1,968 +0,0 @@

# CHORUS Services Website Architecture Strategy

## Executive Summary

This document outlines a comprehensive website architecture strategy for CHORUS Services, a distributed AI orchestration platform. The strategy leverages Next.js 13+ App Router with Ant Design 5+ and Framer Motion to create an enterprise-grade marketing website that showcases the platform's technical capabilities while remaining accessible to both technical and business audiences.

## 1. Architecture Overview

### Core Technology Stack
- **Framework**: Next.js 13+ with App Router for optimal performance and SEO
- **UI Library**: Ant Design 5+ with custom dark theme and CSS-in-JS theming
- **Animation**: Framer Motion for parallax effects and sophisticated animations
- **Styling**: CSS-in-JS with @ant-design/cssinjs and antd-style for advanced theming
- **Deployment**: Docker containerization with Traefik integration on existing infrastructure

### Design Philosophy
- **Apple-inspired aesthetics**: Clean, sophisticated, technology-focused design
- **Dark theme primary**: Technology-forward appearance with electric blue (#007aff) and emerald green (#30d158)
- **Performance-first**: Enterprise-grade loading speeds and accessibility
- **Responsive-native**: Mobile-first design with desktop enhancement

## 2. Folder Structure & Component Hierarchy

```
chorus-website/
├── src/
│   ├── app/                          # Next.js 13+ App Router
│   │   ├── (marketing)/              # Route group for marketing pages
│   │   │   ├── page.tsx              # Home page (/)
│   │   │   ├── ecosystem/            # Platform overview (/ecosystem)
│   │   │   │   └── page.tsx
│   │   │   ├── scenarios/            # Use cases and demos (/scenarios)
│   │   │   │   └── page.tsx
│   │   │   ├── modules/              # Component breakdown (/modules)
│   │   │   │   └── page.tsx
│   │   │   ├── how-it-works/         # Process explanation (/how-it-works)
│   │   │   │   └── page.tsx
│   │   │   └── about/                # Team and company (/about)
│   │   │       └── page.tsx
│   │   ├── investors/                # Investor relations (protected)
│   │   │   └── page.tsx
│   │   ├── api/                      # API routes for contact forms, etc.
│   │   │   └── contact/
│   │   │       └── route.ts
│   │   ├── globals.css               # Global styles and CSS variables
│   │   ├── layout.tsx                # Root layout with Ant Design ConfigProvider
│   │   └── loading.tsx               # Global loading component
│   ├── components/                   # Reusable components
│   │   ├── ui/                       # Core UI components
│   │   │   ├── PerformanceCard.tsx   # Metrics display cards
│   │   │   ├── ModuleCard.tsx        # CHORUS component showcases
│   │   │   ├── AnimatedCounter.tsx   # Metric counters with animation
│   │   │   ├── ParallaxSection.tsx   # Scroll-based parallax container
│   │   │   ├── GradientText.tsx      # Styled gradient typography
│   │   │   └── LoadingSpinner.tsx    # Consistent loading states
│   │   ├── sections/                 # Page-specific sections
│   │   │   ├── HeroSection.tsx       # Homepage hero with animation
│   │   │   ├── FeaturesSection.tsx   # Platform capabilities
│   │   │   ├── MetricsSection.tsx    # Performance statistics
│   │   │   ├── ModulesSection.tsx    # Component breakdown
│   │   │   ├── ScenariosSection.tsx  # Use case demonstrations
│   │   │   ├── TestimonialsSection.tsx # Customer validation
│   │   │   └── CTASection.tsx        # Call-to-action sections
│   │   ├── navigation/               # Navigation components
│   │   │   ├── Header.tsx            # Main navigation with sticky behavior
│   │   │   ├── Footer.tsx            # Footer with links and company info
│   │   │   ├── MobileMenu.tsx        # Mobile-responsive navigation
│   │   │   └── NavigationDots.tsx    # Page section navigation
│   │   └── forms/                    # Contact and interaction forms
│   │       ├── ContactForm.tsx       # General contact form
│   │       ├── InvestorForm.tsx      # Investor qualification form
│   │       └── DemoRequestForm.tsx   # Technical demo requests
│   ├── lib/                          # Utilities and configurations
│   │   ├── theme/                    # Ant Design theme customization
│   │   │   ├── chorusTheme.ts        # Main theme configuration
│   │   │   ├── darkTheme.ts          # Dark mode specifications
│   │   │   └── animations.ts         # Framer Motion variants
│   │   ├── utils/                    # Helper functions
│   │   │   ├── animations.ts         # Animation utilities
│   │   │   ├── metrics.ts            # Performance data formatting
│   │   │   └── validation.ts         # Form validation schemas
│   │   └── constants/                # Application constants
│   │       ├── colors.ts             # Brand color system
│   │       ├── typography.ts         # Font and text specifications
│   │       └── content.ts            # Static content and copy
│   ├── styles/                       # Global and component styles
│   │   ├── globals.css               # Reset and global styles
│   │   ├── components.css            # Component-specific styles
│   │   └── animations.css            # CSS animations and transitions
│   └── assets/                       # Static assets
│       ├── images/                   # Optimized images and graphics
│       └── icons/                    # SVG icons and logos
└── public/                           # Public static files
    ├── favicon.ico
    ├── robots.txt
    ├── sitemap.xml
    └── images/                       # Public images for SEO
```
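
The `app/api/contact/route.ts` entry above would be a standard App Router route handler. A minimal sketch (field names and the downstream notification step are assumptions for this document):

```typescript
// app/api/contact/route.ts — minimal sketch of the contact endpoint.
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  const { name, email, message } = await request.json(); // field names assumed

  if (!email || !message) {
    return NextResponse.json(
      { error: 'email and message are required' },
      { status: 400 },
    );
  }

  // Forward to a CRM or email provider here; omitted in this sketch.
  console.log(`contact from ${name ?? 'anonymous'} <${email}>: ${message}`);

  return NextResponse.json({ ok: true });
}
```
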
## 3. Component Architecture Strategy

### 3.1 Design System Foundation

#### Color System
```typescript
// lib/constants/colors.ts
export const colors = {
  primary: {
    blue: '#007aff',      // Electric blue - primary actions
    green: '#30d158',     // Emerald green - success states
    amber: '#ff9f0a',     // Amber orange - warnings
    red: '#ff453a',       // System red - errors
  },
  neutral: {
    black: '#000000',     // Pure black - backgrounds
    charcoal: '#1a1a1a',  // Deep charcoal - containers
    gray: '#2d2d30',      // Cool gray - elevated surfaces
    lightGray: '#a1a1a6', // Light gray - secondary text
    white: '#f2f2f7',     // Off-white - primary text
  },
  gradients: {
    hero: 'linear-gradient(135deg, #000000 0%, #1a1a1a 100%)',
    text: 'linear-gradient(135deg, #f2f2f7 0%, #007aff 100%)',
    card: 'linear-gradient(135deg, #1a1a1a 0%, #2d2d30 100%)',
  }
};
```

#### Typography System
```typescript
// lib/constants/typography.ts
export const typography = {
  fonts: {
    primary: `-apple-system, BlinkMacSystemFont, 'SF Pro Text', 'Inter', sans-serif`,
    mono: `'SF Mono', 'Monaco', 'Inconsolata', monospace`,
  },
  sizes: {
    hero: '84px',   // Large headlines
    h1: '48px',     // Section headers
    h2: '36px',     // Subsection headers
    h3: '24px',     // Component titles
    body: '16px',   // Default body text
    small: '14px',  // Secondary information
  },
  weights: {
    light: 300,
    regular: 400,
    medium: 500,
    semibold: 600,
    bold: 700,
  }
};
```
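
The `GradientText` component listed in the project tree can be a thin wrapper over the `gradients.text` value defined above. A minimal sketch:

```typescript
// components/ui/GradientText.tsx — minimal sketch using the color constants.
import type { ReactNode } from 'react';
import { colors } from '@/lib/constants/colors';

export const GradientText = ({ children }: { children: ReactNode }) => (
  <span
    style={{
      background: colors.gradients.text,
      // Clip the gradient to the glyphs and hide the underlying fill.
      WebkitBackgroundClip: 'text',
      backgroundClip: 'text',
      color: 'transparent',
    }}
  >
    {children}
  </span>
);
```
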
### 3.2 Key Component Specifications

#### HeroSection Component
```typescript
// components/sections/HeroSection.tsx
interface HeroSectionProps {
  title: string;
  subtitle: string;
  ctaButtons: Array<{
    text: string;
    type: 'primary' | 'secondary';
    href: string;
  }>;
  backgroundAnimation?: boolean;
}

// Features:
// - Parallax background with subtle particle animation
// - Staggered text animations using Framer Motion
// - Responsive typography scaling
// - Accessibility-compliant contrast ratios
// - Integration with Ant Design Button components
```

#### ModuleCard Component
```typescript
// components/ui/ModuleCard.tsx
import type { ReactNode } from 'react';

interface ModuleCardProps {
  title: string;
  description: string;
  icon: ReactNode;
  metrics: Array<{
    label: string;
    value: string;
    trend?: 'up' | 'down' | 'stable';
  }>;
  delay?: number;
  link?: string;
}

// Features:
// - Hover animations with smooth transitions
// - Metric counters with animation on scroll
// - Consistent spacing using Ant Design tokens
// - Dark mode optimized styling
// - Performance-optimized rendering
```
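
A minimal implementation consistent with the interface above might look as follows. This is a sketch, not the production component: it omits `link` and `trend` for brevity, and borrows entrance/hover motion from the variants in section 4.3:

```typescript
// components/ui/ModuleCard.tsx — illustrative implementation of the interface.
'use client';

import type { ReactNode } from 'react';
import { Card, Space, Statistic } from 'antd';
import { motion } from 'framer-motion';

interface Metric { label: string; value: string; }

interface ModuleCardProps {
  title: string;
  description: string;
  icon: ReactNode;
  metrics: Metric[];
  delay?: number;
}

export const ModuleCard = ({ title, description, icon, metrics, delay = 0 }: ModuleCardProps) => (
  <motion.div
    initial={{ opacity: 0, y: 40 }}
    whileInView={{ opacity: 1, y: 0 }}  // animate metrics into view on scroll
    viewport={{ once: true }}
    transition={{ duration: 0.6, delay }}
    whileHover={{ y: -8 }}               // subtle lift on hover
  >
    <Card title={<Space>{icon}{title}</Space>}>
      <p>{description}</p>
      {metrics.map((m) => (
        <Statistic key={m.label} title={m.label} value={m.value} />
      ))}
    </Card>
  </motion.div>
);
```
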
#### ParallaxSection Component
```typescript
// components/ui/ParallaxSection.tsx
import type { ReactNode } from 'react';

interface ParallaxSectionProps {
  children: ReactNode;
  speed?: number;
  offset?: [string, string];
  className?: string;
}

// Features:
// - Smooth scroll parallax using Framer Motion
// - Configurable speed and offset parameters
// - Intersection Observer for performance
// - Reduced motion support for accessibility
// - Compatible with Ant Design Layout components
```

## 4. Technology Integration Approach

### 4.1 Next.js 13+ Configuration

#### App Router Setup
```typescript
// app/layout.tsx
import { ConfigProvider } from 'antd';
import { chorusTheme } from '@/lib/theme/chorusTheme';
import type { Metadata } from 'next';

export const metadata: Metadata = {
  title: 'CHORUS Services - Distributed AI Orchestration',
  description: 'Enterprise-ready distributed AI orchestration platform that eliminates context loss, reduces hallucinations, and enables true multi-agent coordination.',
  keywords: ['AI orchestration', 'distributed systems', 'context management', 'enterprise AI'],
  openGraph: {
    title: 'CHORUS Services',
    description: 'Distributed AI Orchestration Without the Hallucinations',
    url: 'https://www.chorus.services',
    siteName: 'CHORUS Services',
    images: [
      {
        url: '/images/og-image.jpg',
        width: 1200,
        height: 630,
        alt: 'CHORUS Services Platform'
      }
    ]
  }
};

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <ConfigProvider theme={chorusTheme}>
          {children}
        </ConfigProvider>
      </body>
    </html>
  );
}
```

||||
#### Performance Optimizations
|
||||
```javascript
|
||||
// next.config.js
|
||||
/** @type {import('next').NextConfig} */
|
||||
const nextConfig = {
|
||||
experimental: {
|
||||
appDir: true,
|
||||
},
|
||||
transpilePackages: ['antd'],
|
||||
webpack: (config) => {
|
||||
// Ant Design tree shaking optimization
|
||||
config.optimization.splitChunks.cacheGroups.antd = {
|
||||
name: 'antd',
|
||||
test: /[\\/]node_modules[\\/]antd[\\/]/,
|
||||
chunks: 'all',
|
||||
priority: 10,
|
||||
};
|
||||
|
||||
// Framer Motion code splitting
|
||||
config.optimization.splitChunks.cacheGroups.framerMotion = {
|
||||
name: 'framer-motion',
|
||||
test: /[\\/]node_modules[\\/]framer-motion[\\/]/,
|
||||
chunks: 'all',
|
||||
priority: 10,
|
||||
};
|
||||
|
||||
return config;
|
||||
},
|
||||
images: {
|
||||
formats: ['image/webp', 'image/avif'],
|
||||
deviceSizes: [640, 750, 828, 1080, 1200, 1920, 2048, 3840],
|
||||
imageSizes: [16, 32, 48, 64, 96, 128, 256, 384],
|
||||
},
|
||||
compress: true,
|
||||
poweredByHeader: false,
|
||||
};
|
||||
|
||||
module.exports = nextConfig;
|
||||
```
|
||||
|
||||
### 4.2 Ant Design 5+ Integration

#### Custom Theme Configuration
```typescript
// lib/theme/chorusTheme.ts
import { theme } from 'antd';
import type { ThemeConfig } from 'antd';

export const chorusTheme: ThemeConfig = {
  algorithm: theme.darkAlgorithm,
  token: {
    // Color System
    colorPrimary: '#007aff',    // Electric blue
    colorSuccess: '#30d158',    // Emerald green
    colorWarning: '#ff9f0a',    // Amber orange
    colorError: '#ff453a',      // System red
    colorInfo: '#007aff',       // Electric blue

    // Background Colors
    colorBgContainer: '#1a1a1a', // Deep charcoal
    colorBgElevated: '#2d2d30',  // Cool gray
    colorBgLayout: '#000000',    // Pure black

    // Typography
    fontFamily: `-apple-system, BlinkMacSystemFont, 'SF Pro Text', 'Inter', sans-serif`,
    fontSize: 16,
    fontSizeHeading1: 84,       // Large headlines
    fontSizeHeading2: 48,       // Section headers
    fontSizeHeading3: 36,       // Subsection headers

    // Spacing & Layout
    borderRadius: 8,            // Consistent 8px radius
    wireframe: false,           // Enable modern styling

    // Motion & Animation
    motionDurationSlow: '0.3s', // Apple-style timing
    motionDurationMid: '0.2s',
    motionDurationFast: '0.1s',
  },

  components: {
    Button: {
      primaryShadow: '0 12px 24px rgba(0, 122, 255, 0.3)',
      controlHeight: 48,        // Larger touch targets
      fontWeight: 600,
      borderRadius: 8,
    },

    Card: {
      borderRadiusLG: 12,       // Slightly larger for cards
      paddingLG: 32,
      boxShadowTertiary: '0 8px 32px rgba(0, 0, 0, 0.4)',
    },

    Layout: {
      headerBg: 'rgba(26, 26, 26, 0.8)', // Semi-transparent header
      headerHeight: 72,
      bodyBg: '#000000',
    },

    Typography: {
      titleMarginTop: 0,
      titleMarginBottom: 24,
      colorText: '#f2f2f7',
      colorTextSecondary: '#a1a1a6',
    }
  }
};
```

### 4.3 Framer Motion Integration

#### Animation Variants Library
```typescript
// lib/theme/animations.ts
export const fadeInUp = {
  initial: { opacity: 0, y: 40 },
  animate: { opacity: 1, y: 0 },
  transition: { duration: 0.6 }
};

export const staggerChildren = {
  animate: {
    transition: {
      staggerChildren: 0.1
    }
  }
};

export const parallaxVariants = {
  initial: { y: 0 },
  animate: (custom: number) => ({
    y: custom,
    transition: {
      type: "spring",
      stiffness: 100,
      damping: 30
    }
  })
};

export const counterAnimation = {
  initial: { scale: 0.8, opacity: 0 },
  animate: { scale: 1, opacity: 1 },
  transition: {
    type: "spring",
    stiffness: 200,
    damping: 25
  }
};

export const cardHover = {
  hover: {
    y: -8,
    boxShadow: "0 20px 40px rgba(0, 122, 255, 0.2)",
    transition: { duration: 0.3 }
  }
};
```
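
A sketch of how these variants compose in a section component (the component and data are illustrative):

```typescript
'use client';

import { motion } from 'framer-motion';
import { fadeInUp, staggerChildren } from '@/lib/theme/animations';

const stats = [
  { label: 'Context retention', value: '92%' },
  { label: 'Hallucination reduction', value: '78%' },
];

export const StatsRow = () => (
  // The parent sets the active variant labels; staggerChildren then
  // delays each child's "animate" state by 0.1s.
  <motion.ul initial="initial" animate="animate" variants={staggerChildren}>
    {stats.map((s) => (
      // fadeInUp's `initial`/`animate` keys double as variant labels here;
      // its top-level `transition` applies when the object is spread as
      // props instead (e.g. <motion.h2 {...fadeInUp}>).
      <motion.li key={s.label} variants={fadeInUp}>
        <strong>{s.value}</strong> {s.label}
      </motion.li>
    ))}
  </motion.ul>
);
```
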
#### Parallax Implementation
```typescript
// components/ui/ParallaxSection.tsx
import React, { useRef } from 'react';
import { motion, useScroll, useTransform } from 'framer-motion';
import { Layout } from 'antd';

const { Content } = Layout;

interface ParallaxSectionProps {
  children: React.ReactNode;
  speed?: number;
  offset?: [string, string];
  className?: string;
}

export const ParallaxSection: React.FC<ParallaxSectionProps> = ({
  children,
  speed = 0.5,
  offset = ["start end", "end start"],
  className
}) => {
  const ref = useRef(null);
  const { scrollYProgress } = useScroll({
    target: ref,
    offset
  });

  // Map scroll progress to a vertical translation scaled by `speed`.
  const y = useTransform(scrollYProgress, [0, 1], [0, -200 * speed]);

  return (
    <Content ref={ref} className={className}>
      <motion.div style={{ y }}>
        {children}
      </motion.div>
    </Content>
  );
};
```

## 5. Performance Optimization Strategy

### 5.1 Bundle Optimization

#### Tree Shaking Configuration
```typescript
// lib/antd/index.ts - Centralized component imports
export { default as Button } from 'antd/es/button';
export { default as Card } from 'antd/es/card';
export { default as Layout } from 'antd/es/layout';
export { default as Typography } from 'antd/es/typography';
export { default as Space } from 'antd/es/space';
export { default as Row } from 'antd/es/row';
export { default as Col } from 'antd/es/col';
export { default as Form } from 'antd/es/form';
export { default as Input } from 'antd/es/input';
export { default as Progress } from 'antd/es/progress';
export { default as Statistic } from 'antd/es/statistic';

// Usage in components
import { Button, Card, Typography } from '@/lib/antd';
```

#### Code Splitting Strategy
```typescript
// Dynamic imports for non-critical components
import dynamic from 'next/dynamic';
import { LoadingSpinner } from '@/components/ui/LoadingSpinner'; // path from the project tree

const InvestorForm = dynamic(() => import('@/components/forms/InvestorForm'), {
  loading: () => <LoadingSpinner />,
  ssr: false
});

const InteractiveDemo = dynamic(() => import('@/components/sections/InteractiveDemo'), {
  loading: () => <div>Loading demo...</div>
});
```

### 5.2 Image Optimization

#### Next.js Image Component Usage
```typescript
// components/ui/OptimizedImage.tsx
import React, { useState } from 'react';
import Image from 'next/image';

interface OptimizedImageProps {
  src: string;
  alt: string;
  width: number;
  height: number;
  priority?: boolean;
  className?: string;
}

export const OptimizedImage: React.FC<OptimizedImageProps> = ({
  src,
  alt,
  width,
  height,
  priority = false,
  className
}) => {
  const [isLoaded, setIsLoaded] = useState(false);

  return (
    <div className={`image-container ${className}`}>
      <Image
        src={src}
        alt={alt}
        width={width}
        height={height}
        priority={priority}
        placeholder="blur"
        blurDataURL="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCAABAAEDASIAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAv/xAAhEAACAQMDBQAAAAAAAAAAAAABAgMABAUGIWGRkqGx0f/EABUBAQEAAAAAAAAAAAAAAAAAAAMF/8QAGhEAAgIDAAAAAAAAAAAAAAAAAAECEgMRkf/aAAwDAQACEQMRAD8AltJagyeH0AthI5xdrLcNM91BF5pX2HaH9bcfaSennjdEBAABAAcAMgsGdCcIK+f"
        onLoad={() => setIsLoaded(true)}
        sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 33vw"
        style={{
          transition: 'opacity 0.3s',
          opacity: isLoaded ? 1 : 0
        }}
      />
    </div>
  );
};
```

### 5.3 Loading Performance

#### Metrics and Monitoring
```typescript
// lib/utils/performance.ts
export const trackWebVitals = (metric: any) => {
  switch (metric.name) {
    case 'FCP':
      // First Contentful Paint
      console.log('FCP:', metric.value);
      break;
    case 'LCP':
      // Largest Contentful Paint
      console.log('LCP:', metric.value);
      break;
    case 'CLS':
      // Cumulative Layout Shift
      console.log('CLS:', metric.value);
      break;
    case 'FID':
      // First Input Delay
      console.log('FID:', metric.value);
      break;
    case 'TTFB':
      // Time to First Byte
      console.log('TTFB:', metric.value);
      break;
    default:
      break;
  }
};

// Usage in _app.tsx or layout.tsx
export function reportWebVitals(metric: any) {
  trackWebVitals(metric);
}
```
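
`reportWebVitals` is the Pages Router convention; for the App Router targeted here, the equivalent wiring uses the `useReportWebVitals` hook (a sketch, assuming Next.js 13.4+ where the hook is available):

```typescript
// components/WebVitals.tsx — App Router wiring for the tracker above.
'use client';

import { useReportWebVitals } from 'next/web-vitals';
import { trackWebVitals } from '@/lib/utils/performance';

export function WebVitals() {
  // Forwards each metric (FCP, LCP, CLS, ...) to the switch in performance.ts.
  useReportWebVitals(trackWebVitals);
  return null; // renders nothing; mount once in the root layout
}
```
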
## 6. SEO and Accessibility Strategy

### 6.1 SEO Optimization

#### Metadata Configuration
```typescript
// app/page.tsx
import type { Metadata } from 'next';

export const metadata: Metadata = {
  title: 'CHORUS Services - Distributed AI Orchestration Without the Hallucinations',
  description: 'Enterprise-ready distributed AI orchestration platform that eliminates context loss, reduces hallucinations, and enables true multi-agent coordination through intelligent context management.',
  keywords: [
    'AI orchestration',
    'distributed AI systems',
    'context management',
    'multi-agent coordination',
    'enterprise AI platform',
    'AI hallucination prevention',
    'collaborative AI reasoning',
    'persistent AI memory'
  ],
  authors: [{ name: 'Deep Black Cloud Development' }],
  creator: 'Deep Black Cloud Development',
  publisher: 'CHORUS Services',
  formatDetection: {
    email: false,
    address: false,
    telephone: false,
  },
  robots: {
    index: true,
    follow: true,
    googleBot: {
      index: true,
      follow: true,
      'max-video-preview': -1,
      'max-image-preview': 'large',
      'max-snippet': -1,
    },
  },
  openGraph: {
    title: 'CHORUS Services - Distributed AI Orchestration',
    description: 'Distributed AI Orchestration Without the Hallucinations',
    url: 'https://www.chorus.services',
    siteName: 'CHORUS Services',
    type: 'website',
    images: [
      {
        url: '/images/og-chorus-platform.jpg',
        width: 1200,
        height: 630,
        alt: 'CHORUS Services Platform Architecture',
      }
    ],
    locale: 'en_US',
  },
  twitter: {
    card: 'summary_large_image',
    title: 'CHORUS Services - Distributed AI Orchestration',
    description: 'Enterprise-ready AI orchestration platform that eliminates context loss and enables true multi-agent coordination.',
    images: ['/images/twitter-chorus-card.jpg'],
    creator: '@chorusservices',
  },
  verification: {
    google: 'google-site-verification-code',
  },
};
```

#### Structured Data Implementation
|
||||
```typescript
|
||||
// components/seo/StructuredData.tsx
|
||||
export const OrganizationStructuredData = () => {
|
||||
const structuredData = {
|
||||
"@context": "https://schema.org",
|
||||
"@type": "Organization",
|
||||
"name": "CHORUS Services",
|
||||
"description": "Enterprise-ready distributed AI orchestration platform",
|
||||
"url": "https://www.chorus.services",
|
||||
"logo": "https://www.chorus.services/images/chorus-logo.png",
|
||||
"sameAs": [
|
||||
"https://github.com/chorus-services",
|
||||
"https://linkedin.com/company/chorus-services"
|
||||
],
|
||||
"contactPoint": {
|
||||
"@type": "ContactPoint",
|
||||
"telephone": "+1-XXX-XXX-XXXX",
|
||||
"contactType": "customer service",
|
||||
"availableLanguage": "English"
|
||||
}
|
||||
};
|
||||
|
||||
return (
|
||||
<script
|
||||
type="application/ld+json"
|
||||
dangerouslySetInnerHTML={{
|
||||
__html: JSON.stringify(structuredData)
|
||||
}}
|
||||
/>
|
||||
);
|
||||
};
|
||||
```
|
||||
|
||||
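A component like this would typically be rendered once in the root layout so the JSON-LD ships with every page. A minimal sketch, assuming the component path above and a standard App Router layout (the import alias is illustrative):

```typescript
// app/layout.tsx (sketch)
import type { ReactNode } from 'react';
import { OrganizationStructuredData } from '../components/seo/StructuredData';

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {/* Emits the schema.org Organization JSON-LD on every route. */}
        <OrganizationStructuredData />
        {children}
      </body>
    </html>
  );
}
```
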
### 6.2 Accessibility Implementation

#### ARIA Labels and Semantic HTML
```typescript
// components/sections/HeroSection.tsx
'use client';

// Imports assumed from the stack described above (Next.js App Router + Ant Design).
import React from 'react';
import { useRouter } from 'next/navigation';
import { Button } from 'antd';

export const HeroSection: React.FC = () => {
  const router = useRouter();

  return (
    <section
      role="banner"
      aria-label="CHORUS Services introduction"
      className="hero-section"
    >
      <header>
        <h1
          className="hero-title"
          aria-describedby="hero-subtitle"
        >
          CHORUS Services
        </h1>
        <p
          id="hero-subtitle"
          className="hero-subtitle"
        >
          Distributed AI Orchestration Without the Hallucinations
        </p>
      </header>

      <nav aria-label="Primary actions">
        <Button
          type="primary"
          size="large"
          aria-describedby="platform-exploration-desc"
          onClick={() => router.push('/ecosystem')}
        >
          Explore the Platform
        </Button>
        <span
          id="platform-exploration-desc"
          className="sr-only"
        >
          Navigate to detailed platform overview and capabilities
        </span>
      </nav>
    </section>
  );
};
```

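The `sr-only` class used above hides the description visually while keeping it available to assistive technology. The stylesheet itself is not shown here; a conventional implementation of that pattern looks like this:

```css
/* Visually hidden but readable by screen readers — the standard pattern
   assumed by the sr-only class referenced above. */
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```
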
#### Keyboard Navigation Support
```typescript
// components/navigation/Header.tsx
'use client';

import React, { useState, useRef, useEffect } from 'react';

export const Header: React.FC = () => {
  const [isMobileMenuOpen, setIsMobileMenuOpen] = useState(false);
  const mobileMenuRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    // Close the mobile menu when Escape is pressed.
    const handleKeyDown = (event: KeyboardEvent) => {
      if (event.key === 'Escape' && isMobileMenuOpen) {
        setIsMobileMenuOpen(false);
      }
    };

    document.addEventListener('keydown', handleKeyDown);
    return () => document.removeEventListener('keydown', handleKeyDown);
  }, [isMobileMenuOpen]);

  return (
    <header role="banner" className="main-header">
      <nav aria-label="Main navigation">
        {/* Navigation implementation with focus management */}
      </nav>
    </header>
  );
};
```

## 7. Docker Integration Plan

### 7.1 Dockerfile Configuration

#### Multi-stage Production Build
```dockerfile
# Dockerfile
FROM node:18-alpine AS base

# Install dependencies only when needed
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi

# Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Build application
ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app

ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED 1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public

# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next

# Automatically leverage output traces to reduce image size
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000
ENV HOSTNAME "0.0.0.0"

CMD ["node", "server.js"]
```

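The runner stage above copies `.next/standalone`, which Next.js only emits when standalone output is enabled. A minimal sketch of the next.config.js setting this build assumes:

```javascript
// next.config.js — standalone output mode required by the Dockerfile above
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'standalone',
};

module.exports = nextConfig;
```
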
### 7.2 Docker Compose Integration

#### Development Configuration
```yaml
# docker-compose.dev.yml
version: '3.8'

services:
  chorus-website-dev:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
      - /app/.next
    environment:
      - NODE_ENV=development
      - NEXT_TELEMETRY_DISABLED=1
      - WATCHPACK_POLLING=true
    command: yarn dev
    networks:
      - chorus_network

networks:
  chorus_network:
    external: true
```

### 7.3 Production Deployment Integration

#### Traefik Labels for Docker Swarm
The website service is already configured in the existing `docker-compose.swarm.yml`:

```yaml
# From docker-compose.swarm.yml (lines 70-97)
chorus-website:
  image: registry.home.deepblack.cloud/tony/chorus-website:latest
  deploy:
    replicas: 2
    placement:
      constraints:
        - node.role == worker
    resources:
      limits:
        memory: 128M
      reservations:
        memory: 64M
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=tengig"
      - "traefik.http.routers.chorus-website.rule=Host(`www.chorus.services`) || Host(`chorus.services`)"
      - "traefik.http.routers.chorus-website.entrypoints=web-secured"
      - "traefik.http.routers.chorus-website.tls.certresolver=letsencryptresolver"
      - "traefik.http.services.chorus-website.loadbalancer.server.port=80"
      - "traefik.http.services.chorus-website.loadbalancer.passhostheader=true"
      # Redirect naked domain to www
      - "traefik.http.middlewares.chorus-redirect.redirectregex.regex=^https://chorus.services/(.*)"
      - "traefik.http.middlewares.chorus-redirect.redirectregex.replacement=https://www.chorus.services/$${1}"
      - "traefik.http.routers.chorus-website.middlewares=chorus-redirect"
  networks:
    - tengig
```

## 8. Implementation Roadmap

### Phase 1: Foundation (Week 1-2)
- Set up Next.js 13+ project with App Router
- Configure Ant Design 5+ with custom CHORUS theme
- Implement basic folder structure and component hierarchy
- Set up Docker development environment
- Create core UI components (buttons, cards, typography)

### Phase 2: Core Pages (Week 3-4)
- Implement homepage with hero section and animations
- Build ecosystem overview page with module showcases
- Create scenarios page with interactive demonstrations
- Develop modules detail page with technical specifications
- Set up basic navigation and footer components

### Phase 3: Advanced Features (Week 5-6)
- Integrate Framer Motion for parallax and advanced animations
- Implement performance metrics displays with counters
- Add contact forms and investor relations section
- Optimize bundle size and implement code splitting
- Set up comprehensive SEO and structured data

### Phase 4: Production Ready (Week 7-8)
- Complete Docker production configuration
- Implement comprehensive accessibility features
- Set up analytics and performance monitoring
- Complete testing across devices and browsers
- Deploy to production environment with SSL/TLS

### Phase 5: Optimization (Week 9-10)
- Performance optimization and Core Web Vitals improvement
- A/B testing for conversion optimization
- Content management system integration (if needed)
- Advanced monitoring and error tracking
- Documentation and handover

## 9. Success Metrics

### Technical Metrics
- **Core Web Vitals**: LCP < 2.5s, FID < 100ms, CLS < 0.1
- **Lighthouse Score**: 95+ for Performance, Accessibility, Best Practices, SEO
- **Bundle Size**: < 500KB initial bundle, < 1MB total
- **Load Time**: < 3s on 3G, < 1s on broadband

### Business Metrics
- **Conversion Rate**: Track demo requests and investor inquiries
- **Engagement**: Time on site, pages per session, scroll depth
- **SEO Performance**: Organic traffic growth, keyword rankings
- **Accessibility Score**: WCAG 2.1 AA compliance

## 10. Maintenance and Evolution Strategy

### Content Management
- Implement headless CMS for non-technical team members to update content
- Create component library documentation for developers
- Set up automated testing for critical user journeys
- Establish design system governance for consistent updates

### Performance Monitoring
- Continuous monitoring of Core Web Vitals
- Regular bundle size analysis and optimization
- A/B testing infrastructure for conversion optimization
- User feedback collection and implementation system

This architecture strategy provides a solid foundation for an enterprise-grade website that showcases CHORUS Services' technical capabilities while maintaining high standards for performance, accessibility, and user experience.

@@ -1,245 +0,0 @@

# CHORUS Services Website - Comprehensive Functionality Audit
## Post-Redesign Status Report

### Executive Summary

Following the sophisticated design redesign, a comprehensive functionality audit was conducted to identify and resolve critical website functionality issues. This report details the problems found and the solutions implemented to ensure a professional, fully functional user experience.

---

## Critical Issues Identified & Resolved

### 1. **Missing Website Assets** ✅ RESOLVED
#### Problems Found:
- Missing `apple-touch-icon.png` (causing 404 errors)
- Missing `favicon-16x16.png` and `favicon-32x32.png`
- Missing `og-image.png` for social sharing
- Missing `android-chrome-*` icons for mobile
- Outdated theme colors in manifest.json

#### Solutions Implemented:
- **Updated manifest.json**: Simplified to reference only the existing `favicon.ico` (see the sketch below)
- **Fixed theme color**: Updated from bright blue (#007aff) to sophisticated steel blue (#4A90E2)
- **Cleaned up metadata**: Removed references to non-existent image files
- **Updated Open Graph**: Using favicon.ico as fallback for social sharing

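A minimal sketch of what the simplified manifest could look like; the `theme_color` and favicon reference come from the notes above, while the remaining fields (`background_color`, `display`, sizes) are assumptions for illustration:

```json
{
  "name": "CHORUS Services",
  "short_name": "CHORUS",
  "icons": [
    { "src": "/favicon.ico", "sizes": "48x48", "type": "image/x-icon" }
  ],
  "theme_color": "#4A90E2",
  "background_color": "#000000",
  "display": "standalone"
}
```
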
### 2. **Broken Navigation System** ✅ RESOLVED
#### Problems Found:
- Navigation used `<button>` elements instead of proper Next.js `<Link>` components
- Router navigation was commented out and non-functional
- Navigation referenced 7 pages, but only 2 actually existed:
  - ❌ `/services` (missing)
  - ❌ `/components` (missing)
  - ✅ `/technical-specs` (exists)
  - ❌ `/pricing` (missing)
  - ❌ `/docs` (missing)
  - ❌ `/about` (missing)
  - ✅ `/` (home - exists)

#### Solutions Implemented:
- **Implemented Next.js Link**: Replaced buttons with proper `<Link>` components (see the sketch after this list)
- **Added usePathname**: Dynamic active state based on current route
- **Streamlined navigation**: Reduced menu to only working pages:
  - Home (/)
  - Technical Specs (/technical-specs)
- **Fixed mobile navigation**: Proper link handling with menu close functionality
- **Updated hover colors**: Changed to sophisticated slate-400 (#94a3b8)

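A minimal sketch of the `Link` + `usePathname` pattern described above. The component and class names are illustrative, not the shipped component; the two routes are the working pages listed in this report:

```typescript
// components/navigation/NavLinks.tsx (illustrative sketch)
'use client';

import Link from 'next/link';
import { usePathname } from 'next/navigation';

const routes = [
  { href: '/', label: 'Home' },
  { href: '/technical-specs', label: 'Technical Specs' },
];

export const NavLinks = () => {
  const pathname = usePathname();
  return (
    <nav aria-label="Main navigation">
      {routes.map(({ href, label }) => (
        <Link
          key={href}
          href={href}
          // Active state derived from the current route, as described above.
          className={pathname === href ? 'nav-link nav-link--active' : 'nav-link'}
        >
          {label}
        </Link>
      ))}
    </nav>
  );
};
```
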
### 3. **Design Inconsistencies** ✅ RESOLVED
#### Problems Found:
- Bright colors still referenced in some components
- Navigation hover states using old color scheme
- Theme inconsistencies between components

#### Solutions Implemented:
- **Unified color palette**: All navigation elements use consistent slate colors
- **Sophisticated hover states**: Subtle hover effects with slate-400
- **Consistent active states**: Clear visual feedback for current page

---

## Current Website Status

### ✅ **Fully Functional Pages**
1. **Homepage (`/`)**
   - EnhancedHero with authentic technical capabilities
   - WHOOSHShowcase with real architecture highlights
   - BZZZShowcase with P2P networking details
   - SLURPShowcase with context curation features
   - COOEEShowcase with feedback system components
   - Footer with proper information

2. **Technical Specs (`/technical-specs`)**
   - Dedicated technical documentation page
   - Accessible via working navigation

### ✅ **Infrastructure Components**
- **Header Navigation**: Fully functional with Next.js Links
- **Mobile Menu**: Working drawer with proper navigation
- **Footer**: Complete layout structure
- **Theme System**: Sophisticated color palette implemented
- **Responsive Design**: Mobile-friendly across all components

### ✅ **Asset Management**
- **Favicon**: Working favicon.ico
- **Web Manifest**: Clean, minimal manifest.json
- **Meta Tags**: Proper SEO and social sharing tags
- **Theme Colors**: Sophisticated color scheme throughout

---

## SEO & Performance Status

### ✅ **SEO Optimization**
- **Meta Tags**: Complete title, description, keywords
- **Open Graph**: Proper social sharing metadata
- **Structured Data**: JSON-LD organization schema
- **Robots.txt**: Proper search engine indexing
- **Sitemap**: Ready for search engine discovery

### ✅ **Performance Optimization**
- **Asset Loading**: No broken asset requests
- **Font Loading**: Preconnect to Google Fonts
- **Mobile Optimization**: Proper viewport and touch targets
- **Accessibility**: WCAG 2.1 AA compliance maintained

### ✅ **Technical Standards**
- **Next.js 13+**: App Router implementation
- **TypeScript**: Type-safe component architecture
- **Responsive**: Mobile-first design approach
- **Modern CSS**: CSS-in-JS with Ant Design 5+

---

## User Experience Improvements

### ✅ **Navigation Experience**
- **Clear Menu Structure**: Only functional pages shown
- **Visual Feedback**: Proper active and hover states
- **Mobile-Friendly**: Working drawer navigation
- **Keyboard Accessible**: Proper focus management

### ✅ **Content Quality**
- **Authentic Information**: No fake statistics or misleading metrics
- **Technical Accuracy**: Real architecture components highlighted
- **Professional Messaging**: Enterprise-appropriate content
- **Value Proposition**: Clear technical capabilities presented

### ✅ **Visual Sophistication**
- **Muted Color Palette**: Professional, enterprise-ready aesthetics
- **Consistent Iconography**: Unified slate-gray icons throughout
- **Subtle Animations**: Refined motion design
- **Typography Hierarchy**: Clear information architecture

---

## Browser Compatibility

### ✅ **Modern Browser Support**
- **Chrome/Chromium**: Full functionality
- **Firefox**: Complete compatibility
- **Safari**: iOS/macOS support with proper touch icons
- **Edge**: Microsoft browser compatibility
- **Mobile Browsers**: Responsive design across devices

### ✅ **Progressive Enhancement**
- **Core Functionality**: Works without JavaScript
- **Enhanced Experience**: Rich interactions with JS enabled
- **Graceful Degradation**: Fallbacks for older browsers
- **Performance**: Optimized loading and rendering

---

## Security & Best Practices

### ✅ **Security Headers**
- **Content Security Policy**: Implemented via Next.js
- **X-Frame-Options**: Clickjacking protection
- **Referrer Policy**: Privacy-preserving referrers
- **DNS Prefetch**: Controlled external resource loading

### ✅ **Development Standards**
- **Type Safety**: Full TypeScript implementation
- **Component Architecture**: Modular, reusable components
- **Code Quality**: Consistent formatting and structure
- **Error Handling**: Graceful error states

---

## Monitoring & Analytics Readiness

### ✅ **Tracking Infrastructure**
- **Google Analytics**: Ready for implementation
- **Search Console**: Verification tags in place
- **Performance Monitoring**: Core Web Vitals tracking ready
- **Error Tracking**: Structured error reporting capability

---

## Recommendations for Future Development

### 1. **Additional Pages** (Optional)
If business needs require additional pages, create:
- `/about` - Company information and team
- `/contact` - Contact information and forms
- `/docs` - Documentation portal
- `/pricing` - Service pricing information

### 2. **Enhanced Features** (Phase 2)
- **Search Functionality**: Site-wide search capability
- **Documentation Integration**: Technical docs with search
- **Contact Forms**: Lead generation capabilities
- **Blog/News**: Company updates and technical articles

### 3. **Advanced Integrations** (Phase 3)
- **API Documentation**: Interactive API explorer
- **User Dashboard**: Customer portal functionality
- **Support System**: Help desk integration
- **Analytics Dashboard**: Real-time metrics display

---

## Quality Assurance Checklist

### ✅ **Functionality Testing**
- [x] All navigation links work correctly
- [x] Mobile menu opens and closes properly
- [x] Page routing functions as expected
- [x] Responsive design works across breakpoints
- [x] No 404 errors for referenced assets

### ✅ **Design Validation**
- [x] Consistent sophisticated color palette
- [x] Professional visual hierarchy
- [x] Subtle, appropriate animations
- [x] Unified iconography system
- [x] Accessible contrast ratios

### ✅ **Content Verification**
- [x] No fake statistics or misleading metrics
- [x] Accurate technical capability descriptions
- [x] Professional, enterprise-appropriate messaging
- [x] Clear value propositions
- [x] Authentic architecture highlights

### ✅ **Technical Validation**
- [x] Fast loading times
- [x] Proper SEO metadata
- [x] Mobile-friendly design
- [x] Accessibility compliance
- [x] Error-free console output

---

## Conclusion

The CHORUS Services website has been transformed from a dysfunctional site with broken navigation and missing assets into a fully functional, sophisticated, and professional web presence. All critical functionality issues have been resolved while maintaining the new sophisticated design aesthetic.

**Current Status**: ✅ Production Ready
**Navigation**: ✅ Fully Functional
**Assets**: ✅ All References Working
**Design**: ✅ Sophisticated & Professional
**Performance**: ✅ Optimized & Fast
**SEO**: ✅ Search Engine Ready

The website now provides a credible, functional, and sophisticated representation of the CHORUS Services platform that accurately reflects its technical capabilities without misleading metrics or broken functionality.

@@ -1,52 +0,0 @@

# Website Integration - Ready for Submodule Addition

## Status: Prepared ✅

The CHORUS Services platform is fully configured for the www.chorus.services website integration. All configuration is ready for when the website project is created.

## Configuration Complete

### Docker Swarm Configuration
- `docker-compose.swarm.yml` includes `chorus-website` service
- Traefik labels configured for `www.chorus.services` and `chorus.services`
- Domain redirect: `chorus.services` → `www.chorus.services`
- SSL/TLS certificates via Let's Encrypt
- Registry image: `registry.home.deepblack.cloud/tony/chorus-website:latest`

### Build Scripts
- `build-and-push.sh` includes website build support
- Individual build command: `./build-and-push.sh website`
- Integrated with unified build: `./chorus.sh build`

### Management Integration
- `./chorus.sh deploy` includes website in production deployment
- Production endpoints configured and documented

## Next Steps (When Website Project is Ready)

1. **Add Git Submodule:**
   ```bash
   git submodule add <website-repo-url> modules/website
   ```

2. **Build and Deploy:**
   ```bash
   ./chorus.sh build   # Includes website
   ./chorus.sh deploy  # Deploys to production
   ```

3. **Access Points:**
   - **Marketing**: https://www.chorus.services
   - **Dashboard**: https://dashboard.chorus.services
   - **API**: https://api.chorus.services

## Domain Configuration ✅

External domains configured with DNS pointing to 202.171.184.242:
- `chorus.services` (redirects to www)
- `www.chorus.services` (marketing website)
- `dashboard.chorus.services` (WHOOSH dashboard)
- `api.chorus.services` (API endpoints)
- `*.chorus.services` (wildcard for future services)

All Traefik labels and routing ready for production deployment.

@@ -1,167 +0,0 @@

/* Color System CHORUS.services */

/*
- **Usage**: Primary backgrounds, high-contrast text, logo applications
- **Psychology**: Authority, sophistication, premium quality
- **Applications**: Website backgrounds, app interfaces, business cards
*/
.carbon-black {
  color: #000000ff;
}

/*
- **Usage**: Hero backgrounds, dark accents, secondary elements, natural touches
- **Psychology**: Richness, mystery, luxury, power, and depth. Calm darkness: unlike harsh black, it feels less aggressive and more contemplative.
- **Applications**: Secondary text, accent elements, print materials
*/
.dark-mulberry {
  color: #0b0213ff;
}

/*
- **Usage**: Light backgrounds, high-contrast text, accessibility contrast
- **Psychology**: Clarity, simplicity, natural intelligence
- **Applications**: Print materials, light themes, text on dark backgrounds
*/
.natural-paper {
  color: #f5f5dcff;
}

/*
- **Usage**: Warm accents, secondary elements, natural touches
- **Psychology**: Reliability, craftsmanship, approachable intelligence
- **Applications**: Secondary text, accent elements, print materials
*/
.walnut-brown {
  color: #403730ff;
}

/*
- **Usage**: UI elements, borders, technical precision
- **Psychology**: Modern sophistication, precision, technology
- **Applications**: Interface elements, technical diagrams, secondary text
*/
.brushed-nickel {
  color: #c1bfb1ff;
}

/* System Colors */

/*
- **Usage**: Primary actions, interactive elements, system feedback
- **Psychology**: Trust, reliability, technological precision
- **Applications**: Buttons, links, primary CTAs, logo accents
*/
.orchestration-blue {
  color: #5a6c80ff;
}

/*
- **Usage**: Success states, positive feedback, growth indicators
- **Applications**: Success messages, positive data visualization
*/
.harmony-green {
  color: #515d54ff;
}

/*
- **Usage**: Warning states, attention indicators, energy elements
- **Applications**: Warnings, attention callouts, active processes
*/
.resonance-amber {
  color: #8e7b5eff;
}

/*
- **Usage**: Error states, critical alerts, problem indicators
- **Applications**: Error messages, critical warnings, urgent notifications
*/
.alert-coral {
  color: #593735ff;
}

/* Ultra-Minimalist UI Color System */

/* Background System for Subtle Depth */
.pure-white {
  background-color: #FFFFFF;
}

.warm-white {
  background-color: #FEFEFE;
}

.paper-tint {
  background-color: #F7F7E2;
}

/* Text Hierarchy System - Minimalist Approach */
.primary-text {
  color: #000000;
}

.secondary-text {
  color: #1A1A1A;
}

.tertiary-text {
  color: #333333;
}

.subtle-text {
  color: #666666;
}

.ghost-text {
  color: #999999;
}

/* Border System for Invisible Organization */
.border-invisible {
  border: 1px solid #FAFAFA;
}

.border-subtle {
  border: 1px solid #F0F0F0;
}

.border-defined {
  border: 1px solid #E5E5E5;
}

.border-emphasis {
  border: 1px solid #CCCCCC;
}

/* Interactive Element Variations */
.dark-mulberry-subtle {
  color: rgba(11, 2, 19, 0.8); /* 80% opacity for hover states */
}

.dark-mulberry-ghost {
  color: rgba(11, 2, 19, 0.4); /* 40% opacity for disabled states */
}

.walnut-brown-subtle {
  color: rgba(64, 55, 48, 0.8); /* 80% opacity for secondary interactions */
}

.walnut-brown-ghost {
  color: rgba(64, 55, 48, 0.4); /* 40% opacity for disabled secondary */
}

/*
### Dark Mode Implementation
- **Background Hierarchy**: Pure Black → Carbon Gray → Cool Gray → Border Gray
- **Text Hierarchy**: Natural Paper → Light Gray → Medium Gray → Orchestration Blue
- **Contrast**: All combinations tested for WCAG 2.1 AA compliance

### Light Mode Implementation
- **Background Hierarchy**: Pure White → Natural Paper → Light Gray → Border Light
- **Text Hierarchy**: Carbon Black → Medium Gray → Light Gray → Orchestration Blue
- **Contrast**: Optimized for readability on warm, natural backgrounds
*/

@@ -1,105 +0,0 @@

# CHORUS Services Color System

## Brand Color Philosophy

The CHORUS Services color palette reflects sophisticated orchestration, enterprise reliability, and technological innovation. Drawing inspiration from natural materials (carbon black, walnut brown, brushed aluminum) and warmer accents, the system creates a premium, approachable aesthetic that works in both digital dark themes and print materials.

## Primary Color Palette

### Core Brand Colors

#### Carbon Black
- **Primary**: #000000
- **Usage**: Primary backgrounds, high-contrast text, logo applications
- **Psychology**: Authority, sophistication, premium quality
- **Print**: Rich black (C:91, M:79, Y:62, K:97)

#### Mulberry
- **Primary**: #0b0213
- **Usage**: Primary-adjacent backgrounds, hero sections, logo applications, promotions
- **Psychology**: A color of mystery, sophistication, and subtle power. Unlike black, it retains a spiritual and emotional resonance from purple. It's excellent for luxury, artistic, and technology branding, especially when balanced with lighter contrasts. Overuse can feel isolating, but in the right proportions it commands authority and intrigue.
- **Print**: Used sparingly, except on glossy brochure covers, prospectus documents, business cards, or pamphlets.

#### Walnut Brown
- **Primary**: #1E1815 (Deep Walnut)
- **Usage**: Accent elements, warm touches, secondary branding
- **Psychology**: Reliability, craftsmanship, natural intelligence
- **Print**: C:30, M:70, Y:100, K:20

#### Brushed Nickel Grey
- **Primary**: #888681 (Brushed Nickel Grey)
- **Usage**: UI elements, borders, technical diagrams
- **Psychology**: Precision, technology, modern sophistication
- **Print**: C:15, M:10, Y:12, K:0

#### Muted Slate Blue
- **Primary**: #5E6367
- **Usage**: Secondary backgrounds, forms, panels, toolbars, dashboard elements
- **Psychology**: Neutral; lets content pop whether it is darker or lighter
- **Print**: TBA

#### Natural Fiber
- **Primary**: #F5F5DC (Warm Cream)
- **Usage**: Light backgrounds, print materials, accessibility contrast
- **Psychology**: Clarity, simplicity, natural intelligence
- **Print**: C:6, M:4, Y:15, K:0

## Secondary Palette (UI Accents)

### Orchestration Blue (Primary System Color)
- **Electric Blue**: #626b74ff - Primary actions, links, system elements
- **Deep Blue**: #7c838fff - Hover states, pressed elements
- **Light Blue**: #91a1b2ff - Secondary actions, info states

### Harmony Green (Success/Growth)
- **Emerald**: #78a082ff - Success states, positive feedback
- **Forest**: #597259ff - Secondary success, stable states
- **Sage**: #474c41ff - Subtle positive indicators

### Resonance Amber (Warning/Energy)
- **Warm Amber**: #b9a586ff - Warnings, attention states
- **Golden**: #7b7656ff - Premium features, highlights
- **Copper**: #5b544dff - Secondary attention elements

### Alert Coral (Error/Critical)
- **System Red**: #895956ff - Errors, critical states
- **Warm Red**: #563838ff - Secondary errors, warnings
- **Rose**: #522a2aff - Soft error states

## Dark Mode Implementation

### Background Hierarchy
1. **Pure Black**: #000000 - App backgrounds, highest contrast
2. **Carbon Gray**: #1A1A1A - Card backgrounds, elevated surfaces
3. **Cool Gray**: #2D2D30 - Secondary surfaces, input fields
4. **Border Gray**: #48484A - Dividers, subtle borders

### Text Colors (Dark Mode)
1. **Primary Text**: #F2F2F7 - Headlines, primary content
2. **Secondary Text**: #A1A1A6 - Descriptions, secondary content
3. **Tertiary Text**: #6D6D73 - Captions, disabled text
4. **Accent Text**: #496078ff - Links, interactive elements

## Light Mode Implementation

### Background Hierarchy
1. **Pure White**: #FFFFFF - Clean backgrounds
2. **Natural Paper**: #F5F5DC - Warm backgrounds, print materials
3. **Light Gray**: #F2F2F2 - Secondary surfaces
4. **Border Light**: #E5E5E5 - Dividers, subtle borders

### Text Colors (Light Mode)
1. **Primary Text**: #1A1A1A - Headlines, primary content
2. **Secondary Text**: #6D6D73 - Descriptions, secondary content
3. **Tertiary Text**: #A1A1A6 - Captions, disabled text
4. **Accent Text**: #463f4fff - Links, interactive elements

## Accessibility Standards

### WCAG 2.1 AA Compliance
- **Normal Text**: Minimum 4.5:1 contrast ratio
- **Large Text**: Minimum 3:1 contrast ratio
- **Interactive Elements**: Minimum 3:1 contrast ratio for non-text UI components and their states

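These ratios can be verified programmatically with the WCAG 2.1 relative-luminance formula applied to any pair of palette colors. A minimal TypeScript sketch (the formula is from the WCAG specification; the helper names are illustrative and not part of the palette files):

```typescript
// Contrast checking per WCAG 2.1 (illustrative helper).
const channel = (c: number): number => {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
};

const luminance = (hex: string): number => {
  const n = parseInt(hex.replace('#', '').slice(0, 6), 16);
  const [r, g, b] = [(n >> 16) & 255, (n >> 8) & 255, n & 255];
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
};

export const contrastRatio = (a: string, b: string): number => {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
};

// e.g. Natural Paper text on Carbon Black background:
// contrastRatio('#F5F5DC', '#000000') ≈ 19:1 — comfortably above the 4.5:1 AA threshold.
```
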
This color system provides a sophisticated, accessible foundation for the CHORUS Services brand that works across all applications while maintaining the premium, technology-focused aesthetic required for enterprise clients.

@@ -1,54 +0,0 @@

# Docker Compose Override for Development
# This file provides local build configurations for development
# Use: docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

version: '3.8'

services:
  # Development overrides - builds locally instead of using registry
  whoosh-backend:
    build:
      context: ./modules/whoosh/backend
      dockerfile: Dockerfile
    volumes:
      - ./modules/whoosh/backend:/app
      - ./modules/whoosh/config:/app/config
    environment:
      - ENVIRONMENT=development
      - LOG_LEVEL=debug

  whoosh-frontend:
    build:
      context: ./modules/whoosh/frontend
      dockerfile: Dockerfile
    volumes:
      - ./modules/whoosh/frontend:/app
      - /app/node_modules

  bzzz-coordinator:
    build:
      context: ./modules/bzzz
      dockerfile: Dockerfile
    volumes:
      - ./modules/bzzz/config:/app/config
      - ./modules/bzzz/data:/app/data
    environment:
      - BZZZ_NODE_ENV=development
      - BZZZ_LOG_LEVEL=debug

  slurp-api:
    build:
      context: ./modules/slurp/hcfs-python
      dockerfile: Dockerfile
    volumes:
      - ./modules/slurp/data:/app/data
      - ./modules/slurp/config:/app/config
    environment:
      - HCFS_LOG_LEVEL=debug

  slurp-rl-tuner:
    build:
      context: ./modules/slurp
      dockerfile: Dockerfile.rl-tuner
    environment:
      - LOG_LEVEL=debug

@@ -1,298 +0,0 @@

# Docker Compose for Docker Swarm Deployment
# Optimized for production deployment on deepblack.cloud infrastructure

version: '3.8'

services:
  # WHOOSH - Orchestration Platform
  whoosh-backend:
    image: registry.home.deepblack.cloud/tony/chorus-whoosh-backend:latest
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-api.rule=Host(`api.chorus.services`)"
        - "traefik.http.routers.chorus-api.entrypoints=web-secured"
        - "traefik.http.routers.chorus-api.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-api.loadbalancer.server.port=8000"
        - "traefik.http.services.chorus-api.loadbalancer.passhostheader=true"
    environment:
      - DATABASE_URL=postgresql://chorus:choruspass@postgres:5432/chorus_whoosh
      - REDIS_URL=redis://redis:6379
      - CORS_ORIGINS=https://dashboard.chorus.services,https://www.chorus.services
      - ENVIRONMENT=production
      - LOG_LEVEL=info
    networks:
      - tengig
      - chorus_network
    depends_on:
      - postgres
      - redis

  whoosh-frontend:
    image: registry.home.deepblack.cloud/tony/chorus-whoosh-frontend:latest
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-dashboard.rule=Host(`dashboard.chorus.services`)"
        - "traefik.http.routers.chorus-dashboard.entrypoints=web-secured"
        - "traefik.http.routers.chorus-dashboard.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-dashboard.loadbalancer.server.port=3000"
        - "traefik.http.services.chorus-dashboard.loadbalancer.passhostheader=true"
    environment:
      - REACT_APP_API_URL=https://api.chorus.services
      - REACT_APP_WS_URL=wss://api.chorus.services
    networks:
      - tengig
      - chorus_network
    depends_on:
      - whoosh-backend

  # Marketing Website
  chorus-website:
    image: registry.home.deepblack.cloud/tony/chorus-website:latest
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-website.rule=Host(`www.chorus.services`) || Host(`chorus.services`)"
        - "traefik.http.routers.chorus-website.entrypoints=web-secured"
        - "traefik.http.routers.chorus-website.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-website.loadbalancer.server.port=80"
        - "traefik.http.services.chorus-website.loadbalancer.passhostheader=true"
        # Redirect naked domain to www
        - "traefik.http.middlewares.chorus-redirect.redirectregex.regex=^https://chorus.services/(.*)"
        - "traefik.http.middlewares.chorus-redirect.redirectregex.replacement=https://www.chorus.services/$${1}"
        - "traefik.http.routers.chorus-website.middlewares=chorus-redirect"
    networks:
      - tengig

  # BZZZ - P2P Agent Coordination
  bzzz-coordinator:
    image: registry.home.deepblack.cloud/tony/chorus-bzzz-coordinator:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager # P2P networking works better on manager
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-bzzz.rule=Host(`chorus-bzzz.home.deepblack.cloud`)"
        - "traefik.http.routers.chorus-bzzz.entrypoints=web-secured"
        - "traefik.http.routers.chorus-bzzz.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-bzzz.loadbalancer.server.port=8080"
    ports:
      - target: 4001
        published: 4001
        protocol: tcp
        mode: host # Required for P2P networking
    environment:
      - BZZZ_NODE_ENV=production
      - BZZZ_LOG_LEVEL=info
    networks:
      - tengig
      - chorus_network
    volumes:
      - bzzz_data:/app/data

  # SLURP - Context Curator Service
  slurp-curator:
    image: registry.home.deepblack.cloud/tony/chorus-slurp-curator:latest
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-slurp.rule=Host(`slurp.chorus.services`)"
        - "traefik.http.routers.chorus-slurp.entrypoints=web-secured"
        - "traefik.http.routers.chorus-slurp.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-slurp.loadbalancer.server.port=8000"
    environment:
      - SLURP_DATABASE_URL=postgresql://chorus:choruspass@postgres:5432/chorus_slurp
      - SLURP_LOG_LEVEL=info
      - SLURP_AUTH_ENABLED=true
      - HYPERCORE_LOG_URL=http://hypercore-log:8000
      - BZZZ_COORDINATOR_URL=http://bzzz-coordinator:8080
    networks:
      - tengig
      - chorus_network
    volumes:
      - slurp_data:/app/data
    depends_on:
      - postgres
      - bzzz-coordinator

  # COOEE - RL Context Tuner
  slurp-rl-tuner:
    image: registry.home.deepblack.cloud/tony/chorus-slurp-rl-tuner:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-cooee.rule=Host(`chorus-cooee.home.deepblack.cloud`)"
        - "traefik.http.routers.chorus-cooee.entrypoints=web-secured"
        - "traefik.http.routers.chorus-cooee.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-cooee.loadbalancer.server.port=8000"
    environment:
      - RL_TUNER_DATABASE_URL=postgresql://chorus:choruspass@postgres:5432/chorus_rl_tuner
      - SLURP_CURATOR_URL=http://slurp-curator:8000
      - BZZZ_API_URL=http://bzzz-coordinator:8080
    networks:
      - tengig
      - chorus_network
    depends_on:
      - postgres
      - slurp-curator
      - bzzz-coordinator

  # Shared Infrastructure
  postgres:
    image: postgres:15
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager # Keep database on manager for stability
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G
    environment:
      - POSTGRES_DB=chorus
      - POSTGRES_USER=chorus
      - POSTGRES_PASSWORD=choruspass
    networks:
      - chorus_network
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init-db.sql:/docker-entrypoint-initdb.d/init-db.sql

  redis:
    image: redis:7-alpine
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 128M
    networks:
      - chorus_network
    volumes:
      - redis_data:/data

  # Monitoring Stack
  prometheus:
    image: prom/prometheus:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-prometheus.rule=Host(`chorus-prometheus.home.deepblack.cloud`)"
        - "traefik.http.routers.chorus-prometheus.entrypoints=web-secured"
        - "traefik.http.routers.chorus-prometheus.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-prometheus.loadbalancer.server.port=9090"
    networks:
      - tengig
      - chorus_network
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus

  grafana:
    image: grafana/grafana:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-grafana.rule=Host(`chorus-grafana.home.deepblack.cloud`)"
        - "traefik.http.routers.chorus-grafana.entrypoints=web-secured"
        - "traefik.http.routers.chorus-grafana.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-grafana.loadbalancer.server.port=3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=chorusadmin
    networks:
      - tengig
      - chorus_network
    volumes:
      - grafana_data:/var/lib/grafana
      - ./monitoring/grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./monitoring/grafana/datasources:/etc/grafana/provisioning/datasources

volumes:
  postgres_data:
  redis_data:
  prometheus_data:
  grafana_data:
  bzzz_data:
  slurp_data:

networks:
  tengig:
    external: true
  chorus_network:
    driver: overlay
    attachable: true

@@ -1,46 +0,0 @@

# Docker Compose for Website-Only Deployment
# Minimal deployment for CHORUS Services marketing website

version: '3.8'

services:
  # Marketing Website
  chorus-website:
    image: registry.home.deepblack.cloud/tony/chorus-website:latest
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.hostname == walnut
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=tengig"
        - "traefik.http.routers.chorus-website.rule=Host(`www.chorus.services`) || Host(`chorus.services`)"
        - "traefik.http.routers.chorus-website.entrypoints=web-secured"
        - "traefik.http.routers.chorus-website.tls.certresolver=letsencryptresolver"
        - "traefik.http.services.chorus-website.loadbalancer.server.port=80"
        - "traefik.http.services.chorus-website.loadbalancer.passhostheader=true"
        # Redirect naked domain to www
        - "traefik.http.middlewares.chorus-redirect.redirectregex.regex=^https://chorus.services/(.*)"
        - "traefik.http.middlewares.chorus-redirect.redirectregex.replacement=https://www.chorus.services/$${1}"
        - "traefik.http.routers.chorus-website.middlewares=chorus-redirect"
    ports:
      - target: 80
        published: 3100
        protocol: tcp
        mode: ingress
    networks:
      - tengig
      - chorus_website_network

networks:
  tengig:
    external: true
  chorus_website_network:
    driver: overlay
    attachable: true

@@ -1,159 +0,0 @@

version: '3.8'

services:
  # WHOOSH - Orchestration Platform
  whoosh-backend:
    image: registry.home.deepblack.cloud/tony/chorus-whoosh-backend:latest
    container_name: chorus_whoosh_backend
    ports:
      - "8087:8000"
    environment:
      - DATABASE_URL=postgresql://chorus:choruspass@postgres:5432/chorus_whoosh
      - REDIS_URL=redis://redis:6379
      - CORS_ORIGINS=http://localhost:3001,https://whoosh.home.deepblack.cloud
      - ENVIRONMENT=development
    depends_on:
      - postgres
      - redis
    networks:
      - chorus_network
    volumes:
      - ./modules/whoosh/backend:/app
      - ./modules/whoosh/config:/app/config

  whoosh-frontend:
    image: registry.home.deepblack.cloud/tony/chorus-whoosh-frontend:latest
    container_name: chorus_whoosh_frontend
    ports:
      - "3001:3000"
    environment:
      - REACT_APP_API_URL=http://localhost:8087
      - REACT_APP_WS_URL=ws://localhost:8087
    depends_on:
      - whoosh-backend
    networks:
      - chorus_network
    volumes:
      - ./modules/whoosh/frontend:/app
      - /app/node_modules

  # BZZZ - P2P Agent Coordination
  bzzz-coordinator:
    image: registry.home.deepblack.cloud/tony/chorus-bzzz-coordinator:latest
    container_name: chorus_bzzz_coordinator
    ports:
      - "4001:4001" # libp2p port
      - "8080:8080" # HTTP API port
    environment:
      - BZZZ_NODE_ENV=development
      - BZZZ_LOG_LEVEL=info
    networks:
      - chorus_network
      - host # Needed for P2P discovery
    volumes:
      - ./modules/bzzz/config:/app/config
      - ./modules/bzzz/data:/app/data

  # SLURP - Context Curator Service
  slurp-curator:
    image: registry.home.deepblack.cloud/tony/chorus-slurp-curator:latest
    container_name: chorus_slurp_curator
    ports:
      - "8088:8000"
    environment:
      - SLURP_DATABASE_URL=postgresql://chorus:choruspass@postgres:5432/chorus_slurp
      - SLURP_LOG_LEVEL=info
      - SLURP_AUTH_ENABLED=true
      - HYPERCORE_LOG_URL=http://hypercore-log:8000
      - BZZZ_COORDINATOR_URL=http://bzzz-coordinator:8080
    networks:
      - chorus_network
    volumes:
      - slurp_data:/app/data
      - ./modules/slurp/config:/app/config
    depends_on:
      - postgres
      - bzzz-coordinator

  # COOEE - RL Feedback System (part of SLURP)
  slurp-rl-tuner:
    image: registry.home.deepblack.cloud/tony/chorus-slurp-rl-tuner:latest
    container_name: chorus_slurp_rl_tuner
    ports:
      - "8089:8000"
    environment:
      - RL_TUNER_DATABASE_URL=postgresql://chorus:choruspass@postgres:5432/chorus_rl_tuner
      - SLURP_CURATOR_URL=http://slurp-curator:8000
      - BZZZ_API_URL=http://bzzz-coordinator:8080
    depends_on:
      - postgres
      - slurp-curator
      - bzzz-coordinator
    networks:
      - chorus_network

  # Shared Infrastructure
  postgres:
    image: postgres:15
    container_name: chorus_postgres
    environment:
      - POSTGRES_DB=chorus
      - POSTGRES_USER=chorus
      - POSTGRES_PASSWORD=choruspass
    ports:
      - "5433:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init-db.sql:/docker-entrypoint-initdb.d/init-db.sql
    networks:
      - chorus_network

  redis:
    image: redis:7-alpine
    container_name: chorus_redis
    ports:
      - "6380:6379"
    volumes:
      - redis_data:/data
    networks:
      - chorus_network

  # Monitoring Stack
  prometheus:
    image: prom/prometheus:latest
    container_name: chorus_prometheus
    ports:
      - "9092:9090"
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    networks:
      - chorus_network

  grafana:
    image: grafana/grafana:latest
    container_name: chorus_grafana
    ports:
      - "3002:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=chorusadmin
    volumes:
      - grafana_data:/var/lib/grafana
      - ./monitoring/grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./monitoring/grafana/datasources:/etc/grafana/provisioning/datasources
    networks:
      - chorus_network

volumes:
  postgres_data:
  redis_data:
  prometheus_data:
  grafana_data:
  slurp_data:

networks:
  chorus_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16

@@ -1,53 +0,0 @@

domain: chorus.services
records:
  - name: qe8ddd03c96afc707._domainkey.chorus.services
    type: TXT
    ttl: 3600
    content: '"v=DKIM1; k=rsa; h=sha256; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArPS8sm8Y3VGybA1x2y+YBb0DTwiyzMNEy5wB2oxM5BBywohhp9LJGfqCOsjQQR/mqBZc1cyUM10rYZgCZqzbIQpvcnsUsd20KWyxLWdgbMGIirmcwlJAtYr6Rajj1bI0nSQHb6319ZgDuV4jfQNEYaSATooBCponFv6jVzetj0d4c9NN/b0IsfKH4bYvnldtUF2EyZWpfT8srD2wbEqbDKNsu3Rbcdg+dTM5TIRRC+FeOU16SdGZGb8epjsT6yytHeBaZrsDikeKy6TdTAkZf8WGonffWz2/V6Uw2zL3xKtOfkzInyZvgMx3qylz4a3ceNb2BfVmlvSEPjZLU3cB+wIDAQAB"'
  - name: autodiscover.chorus.services
    type: CNAME
    ttl: 3600
    content: mail.chorus.services.
  - name: _autodiscover._tcp.chorus.services
    type: SRV
    ttl: 3600
    content: 10 10 443 mail.chorus.services.
  - name: webmail.chorus.services
    type: CNAME
    ttl: 3600
    content: mail.chorus.services.
  - name: mail.chorus.services
    type: CNAME
    ttl: 3600
    content: mx3594.syd1.mymailhosting.com.
  - name: api.chorus.services
    type: A
    ttl: 900
    content: 202.171.184.242
  - name: _dmarc.chorus.services
    type: TXT
    ttl: 3600
    content: '"v=DMARC1;p=none;adkim=s;aspf=s;"'
  - name: '*.chorus.services'
    type: A
    ttl: 900
    content: 202.171.184.242
  - name: chorus.services
    type: TXT
    ttl: 3600
    content: '"v=spf1 a mx include:spf.mymailhosting.com -all"'
  - name: chorus.services
    type: MX
    ttl: 3600
    content: 10 mx3594.syd1.mymailhosting.com.
  - name: chorus.services
    type: NS
    ttl: 3600
    content:
      - ns1.netregistry.net.
      - ns2.netregistry.net.
      - ns3.netregistry.net.
  - name: chorus.services
    type: A
    ttl: 3600
    content: 202.171.184.242

@@ -1,184 +0,0 @@
# CHORUS Services - Homepage Content

## Hero Section

### Primary Headline
**AI Development Teams That Think, Learn, and Optimize Themselves**

### Secondary Headline
The next evolution in AI orchestration: self-optimizing agents that dynamically build optimal teams, learn from every interaction, and deliver auditable results with complete traceability.

### Value Proposition
CHORUS Services transforms how AI development works. Our breakthrough orchestration platform creates autonomous development teams that continuously improve their own performance, automatically form optimal team compositions, and maintain complete audit trails of every decision.

---

## Key Innovations Section

### Self-Optimizing Intelligence
**AI agents that get better with every task**

Our reinforcement learning system enables agents to continuously optimize their own performance through real-time feedback loops. Each completed task makes the entire system more effective.

- **Sub-5ms task routing** with intelligent load balancing
- **48GB distributed GPU infrastructure** for massive parallel processing
- **Enterprise-grade monitoring** with real-time optimization

### Dynamic Team Formation
**Perfect teams, automatically assembled**

Gone are the days of manually coordinating AI tools. CHORUS agents autonomously analyze task requirements and automatically form optimal team compositions from our 8 specialized agent roles.

- **Composable context management** - Knowledge components mix and match across projects
- **Fine-tuned specialized models** optimized for specific development workflows
- **Real-time team rebalancing** based on workload and capabilities

### Complete Auditability
**Every decision traceable, every solution replayable**

Enterprise development demands transparency. CHORUS provides complete traceability of every decision with the ability to replay and understand exactly how solutions were developed.

- **Immutable decision logs** with cryptographic integrity
- **Full solution replay capability** for debugging and compliance
- **End-to-end workflow transparency** for regulatory requirements

---

## Target Audience Benefits

### For Enterprise Development Teams
**10x your development velocity without losing control**

- Autonomous task distribution across optimal AI team compositions
- Complete audit trails for compliance and quality assurance
- Integration with existing enterprise development workflows
- Real-time performance monitoring and optimization

### For Tech Startups
**Compete with larger teams through AI force multiplication**

- Small team leverage through intelligent task orchestration
- Automatic knowledge capture and reuse across projects
- Cost-effective scaling without proportional headcount increases
- Rapid iteration with continuous system improvement

### For Research Organizations
**Auditable, repeatable AI-assisted research processes**

- Complete reproducibility of AI-assisted research workflows
- Transparent decision-making processes for peer review
- Collaborative reasoning between multiple specialized AI agents
- Long-term knowledge accumulation and institutional memory

### For AI Companies
**Cutting-edge orchestration for your own AI development**

- Advanced context management for complex AI development projects
- Multi-model coordination for hybrid AI solutions
- Performance optimization through continuous learning
- Scalable infrastructure for distributed AI development

---

## Technical Differentiators

### Beyond Basic AI Tools
CHORUS Services isn't another AI assistant or code completion tool. We've built the infrastructure that makes AI agents actually work together as high-performing development teams.

**Traditional AI Tools:**
- Single-agent interactions
- No persistent team memory
- Manual coordination required
- Limited task complexity

**CHORUS Services:**
- Self-organizing multi-agent teams
- Persistent organizational knowledge
- Autonomous task coordination
- Enterprise-scale complexity handling

### The CHORUS Ecosystem
**Integrated components working in perfect harmony**

- **WHOOSH**: Intelligent workflow orchestration with role-based agent assignment
- **BZZZ**: Peer-to-peer coordination without single points of failure
- **SLURP**: Context management that learns what information matters
- **COOEE**: Continuous feedback loops for system optimization
- **HMMM**: Collaborative reasoning before critical decisions

---

## Proven Results

### Measurable Performance Improvements
**Real metrics from production deployments**

- **92% reduction** in context loss events across development sessions
- **78% reduction** in hallucinated or incorrect AI outputs
- **40% fewer iterations** required for project completion
- **60% reduction** in duplicated work across team members
- **34% faster** overall project delivery times

### Enterprise-Ready Architecture
**Built for scale, security, and reliability**

- Multi-tenant SaaS deployment with enterprise security
- Hybrid cloud/on-premises deployment options
- Role-based access control and complete audit logging
- Integration with existing CI/CD and project management tools

---

## Business Outcomes Focus

### Reduce Development Risk
- Complete transparency in AI decision-making processes
- Audit trails for compliance and quality assurance
- Reduced hallucinations through collaborative verification
- Consistent results through continuous system optimization

### Accelerate Innovation
- Faster iteration cycles through intelligent task orchestration
- Knowledge reuse across projects and teams
- Automatic optimization of development workflows
- Scalable capacity without proportional cost increases

### Maintain Control
- Full visibility into AI agent decision-making
- Configurable guardrails and approval workflows
- Human oversight integration at critical decision points
- Complete solution replay for debugging and improvement

---

## Call to Action

### Primary CTA
**Experience Self-Optimizing AI Development**
*Schedule a live demonstration of autonomous team formation and optimization*

### Secondary CTAs
- **View Technical Architecture** - Deep dive into our orchestration platform
- **Download Case Study** - See how CHORUS reduced development time by 40% for enterprise clients
- **Request Private Demo** - See your specific development challenges solved in real-time

---

## Trust Indicators

### Production-Proven Technology
"CHORUS Services isn't experimental - it's deployed and delivering measurable results in production environments today."

### Enterprise Security Standards
- SOC 2 Type II compliant infrastructure
- Enterprise-grade data encryption and access controls
- Complete audit logging and compliance reporting
- Hybrid deployment options for sensitive workloads

### Technical Leadership
Built by the team that solved AI's fundamental context and coordination problems. Our research-to-production pipeline ensures breakthrough innovations reach enterprise customers quickly and reliably.

---

*Ready to transform your development velocity with self-optimizing AI teams?*
**Schedule your demonstration today.**
21
hosting.md
21
hosting.md
@@ -1,21 +0,0 @@
Domain Name
chorus.services

Admin Username
admin@chorus.services

SPF Record
v=spf1 a mx include:spf.mymailhosting.com -all

DMARC Record
v=DMARC1;p=none;adkim=s;aspf=s;

DKIM Selector
qe8DDD03C96AFC707

DKIM Key
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArPS8sm8Y3VGybA1x2y+YBb0DTwiyzMNEy5wB2oxM5BBywohhp9LJGfqCOsjQQR/mqBZc1cyUM10rYZgCZqzbIQpvcnsUsd20KWyxLWdgbMGIirmcwlJAtYr6Rajj1bI0nSQHb6319ZgDuV4jfQNEYaSATooBCponFv6jVzetj0d4c9NN/b0IsfKH4bYvnldtUF2EyZWpfT8srD2wbEqbDKNsu3Rbcdg+dTM5TIRRC+FeOU16SdGZGb8epjsT6yytHeBaZrsDikeKy6TdTAkZf8WGonffWz2/V6Uw2zL3xKtOfkzInyZvgMx3qylz4a3ceNb2BfVmlvSEPjZLU3cB+wIDAQAB

Mail Server Name
mx3594.syd1.mymailhosting.com
14
init-db.sql
14
init-db.sql
@@ -5,9 +5,23 @@ CREATE DATABASE chorus_whoosh;
CREATE DATABASE chorus_slurp;
CREATE DATABASE chorus_rl_tuner;
CREATE DATABASE chorus_monitoring;
CREATE DATABASE chorus_website;

-- Grant permissions
GRANT ALL PRIVILEGES ON DATABASE chorus_whoosh TO chorus;
GRANT ALL PRIVILEGES ON DATABASE chorus_slurp TO chorus;
GRANT ALL PRIVILEGES ON DATABASE chorus_rl_tuner TO chorus;
GRANT ALL PRIVILEGES ON DATABASE chorus_monitoring TO chorus;
GRANT ALL PRIVILEGES ON DATABASE chorus_website TO chorus;

-- Connect to chorus_website database to set up migration tracking
\c chorus_website;

-- Create migration tracking table
CREATE TABLE IF NOT EXISTS schema_migrations (
    version VARCHAR(255) PRIMARY KEY,
    applied_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Grant permissions on migration table
GRANT SELECT, INSERT ON schema_migrations TO chorus;
Submodule modules/bzzz deleted from f2dd0e8d6d
Submodule modules/posthuman deleted from 2e39cd8664
@@ -1,37 +0,0 @@
# SHHH Secrets Sentinel Docker Image
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    libpq-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first for better caching
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create data directories
RUN mkdir -p /data /config /logs

# Set permissions
RUN chmod +x main.py

# Expose API port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Default command (can be overridden)
CMD ["python", "main.py", "--mode", "monitor", "--structured-logs"]
@@ -1,319 +0,0 @@
## Plan: Hybrid Secret Detection with Sanitized Log Replication

### 1. Objective

To implement a robust, two-stage secret detection pipeline that:
1. Reads from a primary hypercore log in real-time.
2. Uses a fast, regex-based scanner for initial detection.
3. Leverages a local LLM (via Ollama) for deeper, context-aware analysis of potential secrets to reduce false positives.
4. Writes a fully sanitized version of the log to a new, parallel "sister" hypercore stream.
5. Quarantines and alerts on confirmed high-severity secrets, ensuring the original log remains untouched for audit purposes while the sanitized log is safe for wider consumption.

### 2. High-Level Architecture & Data Flow

The process will follow this data flow:

```
                               ┌──────────────────────────┐
[Primary Hypercore Log] ─────► │     HypercoreReader      │
                               └────────────┬─────────────┘
                                            │ (Raw Log Entry)
                                            ▼
                               ┌──────────────────────────┐
                               │     MessageProcessor     │
                               │      (Orchestrator)      │
                               └────────────┬─────────────┘
                                            │
                    ┌───────────────────────▼───────────────────────┐
                    │           Stage 1: Fast Regex Scan            │
                    │               (SecretDetector)                │
                    └───────────────────────┬───────────────────────┘
                                            │
        ┌───────────────────────────────────┼─────────────────────────────┐
        │ (No Match)                        │ (Potential Match)           │ (High-Confidence Match)
        ▼                                   ▼                             ▼
┌──────────────────────────┐  ┌──────────────────────────┐  ┌──────────────────────────┐
│     SanitizedWriter      │  │  Stage 2: LLM Analysis   │  │        (Skip LLM)        │
│ (Writes original entry)  │  │      (LLMAnalyzer)       │  │  Quarantine Immediately  │
└──────────────────────────┘  └────────────┬─────────────┘  └────────────┬─────────────┘
            ▲                              │ (LLM Confirms)              │
            │                              ▼                             ▼
            │                 ┌──────────────────────────┐  ┌──────────────────────────┐
            │                 │    QuarantineManager     │  │      Alerting System     │
            │                 │   (DB Storage, Alerts)   │  │        (Webhooks)        │
            │                 └────────────┬─────────────┘  └──────────────────────────┘
            │                              │
            │                              ▼
            │                 ┌──────────────────────────┐
            └─────────────────┤     SanitizedWriter      │
                              │ (Writes REDACTED entry)  │
                              └────────────┬─────────────┘
                                           │
                                           ▼
                              [Sanitized Hypercore Log]
```

### 3. Component Implementation Plan

This plan modifies existing components and adds new ones.

#### 3.1. New Component: `core/llm_analyzer.py`

This new file will contain all logic for interacting with the Ollama instance. This isolates the dependency and makes it easy to test or swap out the LLM backend.

```python
# core/llm_analyzer.py
import requests
import json

class LLMAnalyzer:
    """Analyzes text for secrets using a local LLM via Ollama."""

    def __init__(self, endpoint: str, model: str, system_prompt: str):
        self.endpoint = endpoint
        self.model = model
        self.system_prompt = system_prompt

    def analyze(self, text: str) -> dict:
        """
        Sends text to the Ollama API for analysis and returns a structured JSON response.

        Returns:
            A dictionary like:
            {
                "secret_found": bool,
                "secret_type": str,
                "confidence_score": float,
                "severity": str
            }
            Returns a default "not found" response on error.
        """
        prompt = f"Log entry: \"{text}\"\n\nAnalyze this for secrets and respond with only the required JSON."
        payload = {
            "model": self.model,
            "system": self.system_prompt,
            "prompt": prompt,
            "format": "json",
            "stream": False
        }
        try:
            response = requests.post(self.endpoint, json=payload, timeout=15)
            response.raise_for_status()
            # The response from Ollama is a JSON string, which needs to be parsed.
            analysis = json.loads(response.json().get("response", "{}"))
            return analysis
        except (requests.exceptions.RequestException, json.JSONDecodeError) as e:
            print(f"[ERROR] LLMAnalyzer failed: {e}")
            # Fallback: If LLM fails, assume no secret was found to avoid blocking the pipeline.
            return {"secret_found": False}
```
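
As a minimal smoke-test sketch (the endpoint and model are the same ones assumed in `main.py` below; the inline system prompt is a stand-in for the real prompt file):

```python
# Hypothetical usage of LLMAnalyzer against a local Ollama instance.
analyzer = LLMAnalyzer(
    endpoint="http://localhost:11434/api/generate",  # assumed local Ollama endpoint
    model="llama3",
    system_prompt="Respond with JSON: secret_found, secret_type, confidence_score, severity",
)
result = analyzer.analyze('export AWS_KEY="AKIAIOSFODNN7EXAMPLE"')
print(result.get("secret_found"), result.get("secret_type"))
```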

#### 3.2. New Component: `core/sanitized_writer.py`

This component is responsible for writing to the new, sanitized hypercore log. This abstraction allows us to easily change the output destination in the future.

```python
# core/sanitized_writer.py
class SanitizedWriter:
    """Writes log entries to the sanitized sister hypercore log."""

    def __init__(self, sanitized_log_path: str):
        self.log_path = sanitized_log_path
        # Placeholder for hypercore writing logic. For now, we'll append to a file.
        self.log_file = open(self.log_path, "a")

    def write(self, log_entry: str):
        """Writes a single log entry to the sanitized stream."""
        self.log_file.write(log_entry + "\n")
        self.log_file.flush()

    def close(self):
        self.log_file.close()
```

#### 3.3. Modify: `core/detector.py`

We will enhance the `SecretDetector` to not only find matches but also redact them.

```python
# core/detector.py
import re

class SecretDetector:
    def __init__(self, patterns_file: str = "patterns.yaml"):
        # ... (load_patterns remains the same) ...
        self.patterns = self.load_patterns(patterns_file)

    def scan(self, text: str) -> list[dict]:
        """Scans text and returns a list of found secrets with metadata."""
        matches = []
        for pattern_name, pattern in self.patterns.items():
            if pattern.get("active", True):
                regex_match = re.search(pattern["regex"], text)
                if regex_match:
                    matches.append({
                        "secret_type": pattern_name,
                        "value": regex_match.group(0),
                        "confidence": pattern.get("confidence", 0.8),  # Default confidence
                        "severity": pattern.get("severity", "MEDIUM")
                    })
        return matches

    def redact(self, text: str, secret_value: str) -> str:
        """Redacts a specific secret value within a string."""
        # Only keep the first/last four characters when the secret is long
        # enough that doing so does not reveal most of it.
        if len(secret_value) > 8:
            redacted_str = secret_value[:4] + "****" + secret_value[-4:]
        else:
            redacted_str = "****"
        return text.replace(secret_value, f"[REDACTED:{redacted_str}]")
```

#### 3.4. Modify: `pipeline/processor.py`

This is the orchestrator and will see the most significant changes to implement the hybrid logic.

```python
# pipeline/processor.py
from core.hypercore_reader import HypercoreReader
from core.detector import SecretDetector
from core.llm_analyzer import LLMAnalyzer
from core.quarantine import QuarantineManager
from core.sanitized_writer import SanitizedWriter

class MessageProcessor:
    def __init__(self, reader: HypercoreReader, detector: SecretDetector, llm_analyzer: LLMAnalyzer, quarantine: QuarantineManager, writer: SanitizedWriter, llm_threshold: float):
        self.reader = reader
        self.detector = detector
        self.llm_analyzer = llm_analyzer
        self.quarantine = quarantine
        self.writer = writer
        self.llm_threshold = llm_threshold  # e.g., 0.90

    async def process_stream(self):
        """Main processing loop for the hybrid detection model."""
        async for entry in self.reader.stream_entries():
            # Stage 1: Fast Regex Scan
            regex_matches = self.detector.scan(entry.content)

            if not regex_matches:
                # No secrets found, write original entry to sanitized log
                self.writer.write(entry.content)
                continue

            # A potential secret was found. Default to sanitized, but may be quarantined.
            sanitized_content = entry.content
            should_quarantine = False
            confirmed_secret = None

            for match in regex_matches:
                # High-confidence regex matches trigger immediate quarantine, skipping LLM.
                if match['confidence'] >= self.llm_threshold:
                    should_quarantine = True
                    confirmed_secret = match
                    break  # One high-confidence match is enough

                # Stage 2: Low-confidence matches go to LLM for verification.
                llm_result = self.llm_analyzer.analyze(entry.content)
                if llm_result.get("secret_found"):
                    should_quarantine = True
                    # Prefer LLM's classification but use regex value for redaction
                    confirmed_secret = {
                        "secret_type": llm_result.get("secret_type", match['secret_type']),
                        "value": match['value'],
                        "severity": llm_result.get("severity", match['severity'])
                    }
                    break

            if should_quarantine and confirmed_secret:
                # A secret is confirmed. Redact, quarantine, and alert.
                sanitized_content = self.detector.redact(entry.content, confirmed_secret['value'])
                self.quarantine.quarantine_message(
                    message=entry,
                    secret_type=confirmed_secret['secret_type'],
                    severity=confirmed_secret['severity'],
                    redacted_content=sanitized_content
                )
                # Potentially trigger alerts here as well
                print(f"[ALERT] Confirmed secret {confirmed_secret['secret_type']} found and quarantined.")

            # Write the (potentially redacted) content to the sanitized log
            self.writer.write(sanitized_content)
```

#### 3.5. Modify: `main.py`

The main entry point will be updated to instantiate and wire together the new and modified components.

```python
# main.py
import asyncio

from core.hypercore_reader import HypercoreReader
from core.detector import SecretDetector
from core.llm_analyzer import LLMAnalyzer
from core.quarantine import QuarantineManager
from core.sanitized_writer import SanitizedWriter
from pipeline.processor import MessageProcessor

def main():
    # 1. Configuration
    # Load from a new config.yaml or environment variables
    PRIMARY_LOG_PATH = "/path/to/primary/hypercore.log"
    SANITIZED_LOG_PATH = "/path/to/sanitized/hypercore.log"
    PATTERNS_PATH = "patterns.yaml"
    DB_CONNECTION = "..."
    OLLAMA_ENDPOINT = "http://localhost:11434/api/generate"
    OLLAMA_MODEL = "llama3"
    LLM_CONFIDENCE_THRESHOLD = 0.90  # Regex confidence >= this skips LLM

    with open("SHHH_SECRETS_SENTINEL_AGENT_PROMPT.md", "r") as f:
        OLLAMA_SYSTEM_PROMPT = f.read()

    # 2. Instantiation
    reader = HypercoreReader(PRIMARY_LOG_PATH)
    detector = SecretDetector(PATTERNS_PATH)
    llm_analyzer = LLMAnalyzer(OLLAMA_ENDPOINT, OLLAMA_MODEL, OLLAMA_SYSTEM_PROMPT)
    quarantine = QuarantineManager(DB_CONNECTION)
    writer = SanitizedWriter(SANITIZED_LOG_PATH)

    processor = MessageProcessor(
        reader=reader,
        detector=detector,
        llm_analyzer=llm_analyzer,
        quarantine=quarantine,
        writer=writer,
        llm_threshold=LLM_CONFIDENCE_THRESHOLD
    )

    # 3. Execution
    print("Starting SHHH Hypercore Monitor...")
    try:
        asyncio.run(processor.process_stream())
    except KeyboardInterrupt:
        print("Shutting down...")
    finally:
        writer.close()

if __name__ == "__main__":
    main()
```

### 4. Phased Rollout

1. **Phase 1: Component Implementation (1-2 days)**
   * Create `core/llm_analyzer.py` and `core/sanitized_writer.py`.
   * Write unit tests for both new components. Mock the `requests` calls for the analyzer.
   * Update `core/detector.py` with the `redact` method and update its unit tests.

2. **Phase 2: Orchestration Logic (2-3 days)**
   * Implement the new logic in `pipeline/processor.py`.
   * Write integration tests for the processor that simulate the full flow: no match, low-confidence match (with mocked LLM response), and high-confidence match.
   * Update `main.py` to wire everything together.

3. **Phase 3: Configuration & Testing (1 day)**
   * Add a `config.yaml` to manage all paths, thresholds, and endpoints.
   * Perform an end-to-end test run with a sample log file and a running Ollama instance.
   * Verify that the primary log is untouched, the sanitized log is created correctly (with and without redactions), and the quarantine database is populated as expected.

### 5. Success Criteria

* **Zero Leaks:** The sanitized log stream contains no secrets.
* **High Accuracy:** False positive rate is demonstrably lower than a regex-only solution, verified during testing.
* **Performance:** The pipeline maintains acceptable latency (<200ms per log entry on average, accounting for occasional LLM analysis).
* **Auditability:** The primary log remains a perfect, unaltered source of truth. All detection and quarantine events are logged in the PostgreSQL database.
@@ -1,561 +0,0 @@
🔥 Excellent — let's push this all the way into a **production-grade spec**.

---

## 📂 **1️⃣ Feedback Ingestion Spec**

This defines how curators/humans give feedback to the Sentinel so it can **update its detection rules (patterns.yaml)** safely.

---

### 🔄 **Feedback Flow**

1. **Curator/Reviewer sees alert** → marks it as:

   * `false_positive` (regex over-triggered)
   * `missed_secret` (regex failed to detect)
   * `uncertain` (needs better regex refinement)

2. **Feedback API** ingests the report:

   ```json
   {
     "alert_id": "log_345",
     "secret_type": "AWS_ACCESS_KEY",
     "feedback_type": "false_positive",
     "evidence": "Key was dummy data: TESTKEY123",
     "suggested_regex_fix": null
   }
   ```

3. **Meta-Learner** updates rules:

   * `false_positive` → adds **exceptions** (e.g., allowlist prefixes like `TESTKEY`).
   * `missed_secret` → drafts **new regex** from evidence (using regex generator or LLM).
   * Writes changes to **patterns.yaml** under `pending_review`.

4. **Security admin approves** before the new regex is marked `active: true`.

---

### 🧠 **Feedback Schema in YAML**

```yaml
pending_updates:
  - regex_name: AWS_ACCESS_KEY
    action: modify
    new_regex: "AKIA[0-9A-Z]{16}(?!TESTKEY)"
    confidence: 0.82
    status: "pending human review"
    submitted_by: curator_2
    timestamp: 2025-08-02T12:40:00Z
```

✅ This keeps **audit trails** & allows **safe hot updates**.
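
A sketch of step 3, assuming PyYAML and a `patterns.yaml` sitting next to the service — nothing goes `active` here, it only stages the change for human review:

```python
# Hypothetical Meta-Learner helper: stage a rule change under pending_updates.
from datetime import datetime, timezone
import yaml

def stage_pending_update(path, regex_name, new_regex, confidence, submitted_by):
    with open(path) as f:
        doc = yaml.safe_load(f) or {}
    doc.setdefault("pending_updates", []).append({
        "regex_name": regex_name,
        "action": "modify",
        "new_regex": new_regex,
        "confidence": confidence,
        "status": "pending human review",
        "submitted_by": submitted_by,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    with open(path, "w") as f:
        yaml.safe_dump(doc, f, sort_keys=False)  # admin approval flips it to active

stage_pending_update("patterns.yaml", "AWS_ACCESS_KEY",
                     "AKIA[0-9A-Z]{16}(?!TESTKEY)", 0.82, "curator_2")
```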

---

## ⚙️ **2️⃣ Real AWS/GitHub Webhook Payload Templates**

These are **example POST payloads** your Sentinel would send when it detects a leaked secret.

---

### 🔐 **AWS Access Key Revocation**

**Endpoint:**
`POST https://security.example.com/hooks/aws-revoke`

**Payload:**

```json
{
  "event": "secret_leak_detected",
  "secret_type": "AWS_ACCESS_KEY",
  "redacted_key": "AKIA****XYZ",
  "log_reference": "hyperlog:58321",
  "recommended_action": "Revoke IAM access key immediately",
  "severity": "HIGH",
  "timestamp": "2025-08-02T12:45:00Z"
}
```

➡ Your security automation would call the AWS CLI or IAM API:

```bash
aws iam update-access-key --access-key-id <redacted> --status Inactive
aws iam delete-access-key --access-key-id <redacted>
```

---

### 🐙 **GitHub Token Revocation**

**Endpoint:**
`POST https://security.example.com/hooks/github-revoke`

**Payload:**

```json
{
  "event": "secret_leak_detected",
  "secret_type": "GITHUB_TOKEN",
  "redacted_key": "ghp_****abcd",
  "repository": "repo-name",
  "log_reference": "hyperlog:58322",
  "severity": "HIGH",
  "recommended_action": "Invalidate GitHub token via API",
  "timestamp": "2025-08-02T12:46:00Z"
}
```

➡ This would tie into GitHub's [token-scanning API](https://docs.github.com/en/developers/overview/secret-scanning) or use PAT revocation.

---

### 💬 **Slack Token Revocation**

**Endpoint:**
`POST https://security.example.com/hooks/slack-revoke`

**Payload:**

```json
{
  "event": "secret_leak_detected",
  "secret_type": "SLACK_TOKEN",
  "redacted_key": "xoxb****hjk",
  "workspace": "company-slack",
  "log_reference": "hyperlog:58323",
  "severity": "HIGH",
  "recommended_action": "Revoke Slack bot/user token",
  "timestamp": "2025-08-02T12:47:00Z"
}
```

➡ The Slack Admin API can be used to **revoke** or **rotate** the token.
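
As a minimal dispatcher sketch (the hook URLs are the placeholder endpoints shown above, and the payload shape is the one in these templates):

```python
# Hypothetical alert dispatcher for the revocation hooks above.
import requests

HOOKS = {
    "AWS_ACCESS_KEY": "https://security.example.com/hooks/aws-revoke",
    "GITHUB_TOKEN": "https://security.example.com/hooks/github-revoke",
    "SLACK_TOKEN": "https://security.example.com/hooks/slack-revoke",
}

def send_revocation_alert(payload: dict) -> bool:
    """POST a secret_leak_detected payload to the matching revocation hook."""
    url = HOOKS.get(payload["secret_type"])
    if url is None:
        return False  # no automated revocation hook for this secret type
    resp = requests.post(url, json=payload, timeout=10)
    return resp.ok
```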

---

## 📡 **3️⃣ Redis or PostgreSQL Quarantine Store**

Switching from memory to **persistent storage** means quarantined logs survive restarts.

---

### ✅ **Redis Option (Fast, Volatile)**

```python
import json
from datetime import datetime

import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

def quarantine_log(log_line, reason):
    entry = {"timestamp": datetime.utcnow().isoformat() + "Z", "reason": reason, "log_line": log_line}
    r.lpush("quarantine", json.dumps(entry))
    print(f"[QUARANTINE] Stored in Redis: {reason}")
```

* 🏎 **Pros:** Fast, easy to scale.
* ⚠️ **Cons:** Volatile unless persisted (RDB/AOF).
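
If you stay on Redis, persistence can be switched on at runtime as a sketch — in production you would set this in `redis.conf` instead:

```python
# Enable append-only-file persistence so the quarantine list survives restarts.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")  # trade ~1s of durability for throughput
```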

---

### ✅ **PostgreSQL Option (Auditable, Durable)**

**Schema:**

```sql
CREATE TABLE quarantine (
    id SERIAL PRIMARY KEY,
    timestamp TIMESTAMPTZ NOT NULL,
    reason TEXT NOT NULL,
    log_line TEXT NOT NULL,
    reviewed BOOLEAN DEFAULT FALSE
);
```

**Python Insert:**

```python
from datetime import datetime

import psycopg2

conn = psycopg2.connect("dbname=sentinel user=postgres password=secret")
cursor = conn.cursor()

def quarantine_log(log_line, reason):
    entry_time = datetime.utcnow().isoformat() + "Z"
    cursor.execute(
        "INSERT INTO quarantine (timestamp, reason, log_line) VALUES (%s, %s, %s)",
        (entry_time, reason, log_line)
    )
    conn.commit()
    print(f"[QUARANTINE] Stored in PostgreSQL: {reason}")
```

✅ **Postgres is better for long-term auditing** — you can run reports like the two below:

* "How many AWS keys leaked this month?"
* "Which agents generated the most HIGH-severity quarantines?"
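
The first one as a sketch against the schema above (assuming the `reason` column carries the detected secret type, as in the insert helper):

```python
# Hypothetical monthly report: count AWS key leaks quarantined this month.
cursor.execute("""
    SELECT COUNT(*)
    FROM quarantine
    WHERE reason LIKE %s
      AND timestamp >= date_trunc('month', now())
""", ("%AWS_ACCESS_KEY%",))
print("AWS keys leaked this month:", cursor.fetchone()[0])
```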

---

We now have:
✅ **Detection → Redaction → Quarantine → Revocation → Feedback → Pattern Evolution**
✅ **patterns.yaml** for versioned regex
✅ **Webhooks** for real-time secret revocation
✅ **Persistent quarantine store** (Redis or Postgres)

---

## 📜 **1️⃣ Migration Script: Redis → PostgreSQL**

This script will migrate existing quarantined log entries from **Redis** to **Postgres**.

```python
import json

import psycopg2
import redis

# Redis config
r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Postgres config
conn = psycopg2.connect("dbname=sentinel user=postgres password=secret")
cursor = conn.cursor()

def migrate_quarantine():
    count = 0
    while True:
        entry_json = r.rpop("quarantine")  # pop oldest entry from Redis
        if not entry_json:
            break
        entry = json.loads(entry_json)
        cursor.execute(
            "INSERT INTO quarantine (timestamp, reason, log_line) VALUES (%s, %s, %s)",
            (entry["timestamp"], entry["reason"], entry["log_line"])
        )
        count += 1
    conn.commit()
    print(f"[MIGRATION] Moved {count} quarantined entries from Redis → PostgreSQL")

if __name__ == "__main__":
    migrate_quarantine()
```

✅ **Run once** after Postgres is set up — it empties the Redis queue into the durable DB.

---

## 🖥 **2️⃣ Admin Dashboard Spec**

**Purpose:** A web UI to manage the Sentinel's security pipeline.

---

### 🎯 **Core Features**

✅ **Quarantine Browser**

* Paginated view of all quarantined logs
* Search/filter by `secret_type`, `source_agent`, `date`, `status`
* Mark quarantined logs as **reviewed** or **false alarm**

✅ **Regex Rules Manager**

* Lists all regexes from `patterns.yaml`
* Add / update / deactivate rules via UI
* Shows `pending_updates` flagged by the Meta-Learner for human approval

✅ **Revocation Status Board**

* See which secrets triggered revocations
* Status of revocation hooks (success/fail)

✅ **Metrics Dashboard**

* Charts: "Secrets Detected Over Time", "Top Sources of Leaks"
* KPIs: # HIGH severity secrets this month, # rules updated, # false positives

---

### 🏗 **Tech Stack Suggestion**

* **Backend:** FastAPI (Python)
* **Frontend:** React + Tailwind
* **DB:** PostgreSQL for quarantine + rules history
* **Auth:** OAuth (GitHub/Google) + RBAC (only security admins can approve regex changes)

---

### 🔌 **Endpoints**

```
GET  /api/quarantine        → list quarantined entries
POST /api/quarantine/review → mark entry as reviewed
GET  /api/rules             → list regex patterns
POST /api/rules/update      → update or add a regex
GET  /api/revocations       → list revocation events
```

---

### 🖥 **Mock Dashboard Layout**

* **Left Nav:** Quarantine | Rules | Revocations | Metrics
* **Main Panel:**

  * Data tables with sorting/filtering
  * Inline editors for regex rules
  * Approve/Reject buttons for pending regex updates

✅ Basically a **security control room** for Sentinel.

---

## 🤖 **3️⃣ Meta-Curator AI Prompt**

This agent reviews Sentinel's work and **tunes it automatically**.

---

### **Meta-Curator: System Prompt**

> **Role & Mission:**
> You are the **Meta-Curator**, a supervisory AI responsible for reviewing the **Secrets Sentinel's** detections, regex updates, and feedback reports.
>
> **Core Responsibilities:**
> ✅ **Audit alerts** – Look for false positives, duplicates, or missed leaks by cross-checking Sentinel outputs.
> ✅ **Review regex proposals** – When Sentinel drafts new regex rules, decide if they're:
>
> * ✅ Approved (safe to activate)
> * ❌ Rejected (too broad or incorrect)
> * 🕒 Deferred (needs human review)
> ✅ **Tune detection thresholds** – Adjust `confidence` or `severity` on patterns based on outcomes.
> ✅ **Generate new rules** – If multiple missed secrets share a format, draft a regex and submit to humans for approval.
> ✅ **Report upstream** – Summarize changes to security admins weekly.

---

### **Behavior Guidelines**

* **Conservative by default:** Don't auto-approve regexes unless confidence > 0.95.
* **Keep auditability:** Every decision (approve/reject) is logged in the hyperlog.
* **Respect human overrides:** Never overwrite a regex that a human explicitly locked.

---

### **Example Meta-Curator Output**

```json
{
  "action": "approve_regex",
  "regex_name": "GITLAB_TOKEN",
  "regex_pattern": "glpat-[0-9A-Za-z\\-_]{20}",
  "confidence": 0.97,
  "decision_reason": "Validated against 12 quarantined examples, no false positives found.",
  "timestamp": "2025-08-02T13:45:00Z"
}
```

✅ This meta-agent is the **brains of the rules layer** — it keeps Sentinel evolving, but under control.

---

## 🚀 **Now You Have:**

✅ **Migration Path** → Redis → PostgreSQL
✅ **Admin Dashboard Spec** → complete with endpoints & layout
✅ **Meta-Curator Prompt** → the agent that "manages the manager"

Alright — here's the next batch to lock this into a **real, buildable system**.

---

## 📂 **1️⃣ `patterns_history` Table Schema**

This tracks **every regex change** ever made — who/what made it, why, and when.

```sql
CREATE TABLE patterns_history (
    id SERIAL PRIMARY KEY,
    regex_name TEXT NOT NULL,
    old_regex TEXT,
    new_regex TEXT,
    action TEXT CHECK (action IN ('add', 'update', 'remove')),
    confidence NUMERIC(3,2),
    status TEXT CHECK (status IN ('approved', 'pending', 'rejected')),
    submitted_by TEXT NOT NULL,
    approved_by TEXT,
    decision_reason TEXT,
    timestamp TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);
```

### ✅ What this gives you:

* **Full audit trail** (critical for security compliance).
* You can run queries like the two sketched below:

  * *"Show all regex changes made by Meta-Curator vs. humans."*
  * *"List all rules rejected in the last 90 days."*
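
For instance, reusing the `cursor` from the snippets above:

```python
# Hypothetical audit queries over patterns_history.
cursor.execute("""
    SELECT submitted_by, COUNT(*)
    FROM patterns_history
    GROUP BY submitted_by
""")
print(dict(cursor.fetchall()))  # e.g. {'meta_curator': 14, 'human': 3}

cursor.execute("""
    SELECT regex_name, decision_reason, timestamp
    FROM patterns_history
    WHERE status = 'rejected'
      AND timestamp >= now() - interval '90 days'
""")
for row in cursor.fetchall():
    print(row)
```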

---

## 🖼 **2️⃣ Admin Dashboard Wireframes**

**Goal:** show your devs exactly what to build — no ambiguity.

---

### **🔒 Dashboard Home**

```
------------------------------------------------------
| [Sentinel Logo]   Secrets Sentinel Dashboard       |
------------------------------------------------------
| Quarantine | Rules | Revocations | Metrics | Admin |
------------------------------------------------------
|  Welcome back, Security Admin!                     |
|                                                    |
|  ▢ 32 Quarantined logs waiting review              |
|  ▢ 4 Pending regex updates                         |
|  ▢ 2 Failed revocation hooks                       |
------------------------------------------------------
```

---

### **🗄 Quarantine View**

```
------------------------------------------------------
| Quarantine Logs                                    |
------------------------------------------------------
| Search: [______________]  [Filter ▼]               |
------------------------------------------------------
| Log ID   | Secret Type    | Severity | Status   |
------------------------------------------------------
| log_4287 | AWS_ACCESS_KEY | HIGH     | PENDING  |
| log_4288 | JWT            | MEDIUM   | REVIEWED |
| log_4289 | SSH_KEY        | HIGH     | PENDING  |
------------------------------------------------------
[ View Details ]  [ Mark as Reviewed ]  [ Delete ]
```

**Clicking "View Details" →** shows the full log snippet (with the secret redacted).

---

### **📜 Regex Manager**

```
------------------------------------------------------
| Regex Rules                                        |
------------------------------------------------------
| Name           | Regex Pattern                  | Active |
------------------------------------------------------
| AWS_ACCESS_KEY | AKIA[0-9A-Z]{16}               |   ✔    |
| JWT            | eyJ[A-Za-z0-9_-]+?\.[…]        |   ✔    |
| SLACK_TOKEN    | xox[baprs]-[0-9A-Za-z-]{10,48} |   ✔    |
------------------------------------------------------
[ Add New Regex ]  [ View History ]
```

Clicking **View History** → pulls from `patterns_history`.

---

### **📊 Metrics View**

* **Line Chart:** "Secrets Detected Over Time"
* **Bar Chart:** "Secrets by Type" (AWS, GitHub, JWT, etc.)
* **KPIs:**

  * 🔴 High Severity Leaks: 12 this week
  * 🟢 Regex Accuracy: 94%

---

## ⚙️ **3️⃣ FastAPI Skeleton**

Here's the **starter code** for your dev team to run with.

```python
from typing import List

import psycopg2
import yaml
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Secrets Sentinel Dashboard API")

# --- Database Setup ---
conn = psycopg2.connect("dbname=sentinel user=postgres password=secret")
cursor = conn.cursor()

# --- Models ---
class QuarantineEntry(BaseModel):
    id: int
    timestamp: str
    reason: str
    log_line: str
    reviewed: bool

class RegexRule(BaseModel):
    regex_name: str
    regex_pattern: str
    severity: str
    confidence: float
    active: bool

# --- Endpoints ---
@app.get("/quarantine", response_model=List[QuarantineEntry])
def get_quarantine():
    cursor.execute("SELECT id, timestamp, reason, log_line, reviewed FROM quarantine")
    rows = cursor.fetchall()
    return [QuarantineEntry(id=r[0], timestamp=str(r[1]), reason=r[2], log_line=r[3], reviewed=r[4]) for r in rows]

@app.post("/quarantine/review/{entry_id}")
def review_quarantine(entry_id: int):
    cursor.execute("UPDATE quarantine SET reviewed=true WHERE id=%s", (entry_id,))
    conn.commit()
    return {"status": "ok", "message": f"Quarantine entry {entry_id} marked reviewed"}

@app.get("/rules", response_model=List[RegexRule])
def get_rules():
    # Load from patterns.yaml (it is YAML, so parse it with PyYAML, not json)
    with open("patterns.yaml", "r") as f:
        patterns = yaml.safe_load(f) or {}
    rules = []
    for name, rule in patterns.get("patterns", {}).items():
        rules.append(RegexRule(
            regex_name=name,
            regex_pattern=rule["regex"],
            severity=rule["severity"],
            confidence=rule["confidence"],
            active=rule["active"]
        ))
    return rules

@app.post("/rules/update")
def update_rule(rule: RegexRule):
    # Append to patterns_history table
    cursor.execute("""
        INSERT INTO patterns_history (regex_name, old_regex, new_regex, action, confidence, status, submitted_by)
        VALUES (%s, %s, %s, 'update', %s, 'pending', 'admin')
    """, (rule.regex_name, None, rule.regex_pattern, rule.confidence))
    conn.commit()
    return {"status": "ok", "message": f"Regex {rule.regex_name} queued for update"}
```
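
To poke at it locally — a sketch that assumes the file is saved as `main.py` and the server was started with `uvicorn main:app --port 8000`:

```python
# Hypothetical smoke test against a locally running dashboard API.
import requests

print(requests.get("http://localhost:8000/quarantine", timeout=5).json())
print(requests.get("http://localhost:8000/rules", timeout=5).json())
```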

✅ **Why this skeleton works:**

* REST endpoints for **Quarantine**, **Rules**, **History**.
* Uses **Postgres for persistence**.
* Reads from `patterns.yaml` for active rules.

---

## 🚀 **Now You Have:**

✅ A **Postgres schema** for regex change history.
✅ **Wireframes** for the admin dashboard.
✅ A **FastAPI skeleton** your team can expand into a full API/UI stack.
@@ -1,512 +0,0 @@
# 🔒 SHHH Hypercore Log Monitor - Implementation Plan

## Executive Summary

This plan outlines the creation of a Python application that monitors our hypercore log to ensure no secrets are leaked in BZZZ messages, based on the SHHH module's secrets detection framework.

## Project Overview

### Objective
Create a real-time monitoring system that:
- Monitors hypercore log entries for secret patterns
- Detects potential secrets in BZZZ P2P messages before they propagate
- Quarantines suspicious entries and triggers automatic remediation
- Provides audit trails and a security dashboard for compliance

### Architecture Integration
- **Hypercore Log**: Source of truth for all CHORUS Services events
- **BZZZ Network**: P2P messaging layer that could inadvertently transmit secrets
- **SHHH Module**: Existing secrets detection framework and patterns
- **Monitoring App**: New Python application bridging these systems

## Technical Requirements

### 1. Hypercore Log Integration
```python
# Real-time log monitoring
# - Stream hypercore entries as they're written
# - Parse BZZZ message payloads for secret patterns
# - Filter for message types that could contain secrets
# - Handle log rotation and recovery scenarios
```

### 2. Secret Detection Engine
Based on SHHH's `patterns.yaml` framework:
```yaml
patterns:
  AWS_ACCESS_KEY:
    regex: "AKIA[0-9A-Z]{16}"
    severity: "HIGH"
    confidence: 0.95
    active: true
  GITHUB_TOKEN:
    regex: "ghp_[0-9A-Za-z]{36}"
    severity: "HIGH"
    confidence: 0.92
    active: true
  PRIVATE_KEY:
    regex: "-----BEGIN [A-Z ]*PRIVATE KEY-----"
    severity: "CRITICAL"
    confidence: 0.98
    active: true
```
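
A small sketch of how the monitor might load and compile these patterns at startup (assuming PyYAML is available):

```python
# Hypothetical pattern loader for the patterns.yaml schema above.
import re
import yaml

def load_active_patterns(path="patterns.yaml"):
    with open(path) as f:
        doc = yaml.safe_load(f)
    compiled = {}
    for name, p in doc.get("patterns", {}).items():
        if p.get("active"):
            compiled[name] = {
                "regex": re.compile(p["regex"]),
                "severity": p["severity"],
                "confidence": p["confidence"],
            }
    return compiled
```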

### 3. Quarantine & Response System
- **Immediate**: Block message propagation in BZZZ network
- **Log**: Store quarantined entries in PostgreSQL
- **Alert**: Notify security team via webhooks
- **Revoke**: Trigger automatic secret revocation APIs

## Implementation Architecture

### Phase 1: Core Monitoring System (Weeks 1-2)

#### 1.1 Hypercore Log Reader
```python
# /shhh-monitor/core/hypercore_reader.py
class HypercoreReader:
    def __init__(self, log_path: str):
        self.log_path = log_path
        self.position = 0

    def stream_entries(self) -> Iterator[LogEntry]:
        """Stream new hypercore entries in real-time"""
        # Tail-like functionality with inotify
        # Parse hypercore binary format
        # Yield structured LogEntry objects

    def parse_bzzz_message(self, entry: LogEntry) -> Optional[BzzzMessage]:
        """Extract BZZZ message payload from hypercore entry"""
        # Decode BZZZ message format
        # Extract message content and metadata
        # Return structured message or None
```

#### 1.2 Secret Detection Engine
```python
# /shhh-monitor/core/detector.py
class SecretDetector:
    def __init__(self, patterns_file: str = "patterns.yaml"):
        self.patterns = self.load_patterns(patterns_file)

    def scan_message(self, message: BzzzMessage) -> List[SecretMatch]:
        """Scan BZZZ message for secret patterns"""
        matches = []
        for pattern_name, pattern in self.patterns.items():
            if pattern["active"]:
                matches.extend(self.apply_regex(message, pattern))
        return matches

    def redact_secret(self, text: str, match: SecretMatch) -> str:
        """Redact detected secret while preserving context"""
        # Replace secret with asterisks, keep first/last chars
        # Maintain log readability for analysis
```

#### 1.3 Quarantine System
```python
# /shhh-monitor/core/quarantine.py
class QuarantineManager:
    def __init__(self, db_connection: str):
        self.db = psycopg2.connect(db_connection)

    def quarantine_message(self, message: BzzzMessage, matches: List[SecretMatch]):
        """Store quarantined message and block propagation"""
        # Insert into quarantine table
        # Generate alert payload
        # Trigger BZZZ network block

    def send_alert(self, severity: str, secret_type: str, redacted_content: str):
        """Send webhook alerts for detected secrets"""
        # POST to security webhook endpoints
        # Different payloads for AWS, GitHub, Slack tokens
        # Include revocation recommendations
```

### Phase 2: BZZZ Network Integration (Weeks 3-4)

#### 2.1 BZZZ Message Interceptor
```python
# /shhh-monitor/integrations/bzzz_interceptor.py
class BzzzInterceptor:
    def __init__(self, bzzz_config: Dict):
        self.bzzz_client = BzzzClient(bzzz_config)

    def install_message_hook(self):
        """Install pre-send hook in BZZZ network layer"""
        # Intercept messages before P2P transmission
        # Scan with SecretDetector
        # Block or allow message propagation

    def block_message(self, message_id: str, reason: str):
        """Prevent message from propagating in P2P network"""
        # Mark message as blocked in BZZZ
        # Log blocking reason
        # Notify sender agent of security violation
```

#### 2.2 Real-time Processing Pipeline
```python
# /shhh-monitor/pipeline/processor.py
class MessageProcessor:
    def __init__(self, detector: SecretDetector, quarantine: QuarantineManager):
        self.detector = detector
        self.quarantine = quarantine

    async def process_hypercore_stream(self):
        """Main processing loop for hypercore monitoring"""
        async for entry in self.hypercore_reader.stream_entries():
            if bzzz_message := self.parse_bzzz_message(entry):
                matches = self.detector.scan_message(bzzz_message)
                if matches:
                    await self.handle_secret_detection(bzzz_message, matches)

    async def handle_secret_detection(self, message: BzzzMessage, matches: List[SecretMatch]):
        """Handle detected secrets with appropriate response"""
        # Determine severity level
        # Quarantine message
        # Send alerts
        # Trigger revocation if needed
        # Update detection statistics
```

### Phase 3: Admin Dashboard & Feedback Loop (Weeks 5-6)

#### 3.1 FastAPI Backend
```python
# /shhh-monitor/api/main.py
from fastapi import FastAPI, Depends
from .models import QuarantineEntry, SecretPattern, RevocationEvent

app = FastAPI(title="SHHH Hypercore Monitor API")

@app.get("/quarantine", response_model=List[QuarantineEntry])
async def get_quarantine_entries():
    """List all quarantined messages"""

@app.post("/quarantine/{entry_id}/review")
async def review_quarantine_entry(entry_id: int, action: str):
    """Mark quarantine entry as reviewed/false positive"""

@app.get("/patterns", response_model=List[SecretPattern])
async def get_detection_patterns():
    """List all secret detection patterns"""

@app.post("/patterns/{pattern_name}/update")
async def update_pattern(pattern_name: str, pattern: SecretPattern):
    """Update regex pattern based on feedback"""
```

#### 3.2 React Dashboard Frontend
```typescript
// /shhh-monitor/dashboard/src/components/QuarantineDashboard.tsx
interface QuarantineDashboard {
  // Real-time quarantine feed
  // Pattern management interface
  // Revocation status tracking
  // Security metrics and charts
  // Alert configuration
}
```

### Phase 4: Automated Response & Learning (Weeks 7-8)

#### 4.1 Automated Secret Revocation
```python
# /shhh-monitor/automation/revocation.py
class SecretRevoker:
    def __init__(self):
        self.aws_client = boto3.client('iam')
        self.github_client = github.Github()
        self.slack_client = slack.WebClient()

    async def revoke_aws_key(self, access_key_id: str):
        """Automatically deactivate AWS access key"""
        self.aws_client.update_access_key(
            AccessKeyId=access_key_id,
            Status='Inactive'
        )

    async def revoke_github_token(self, token: str):
        """Revoke GitHub personal access token"""
        # Use GitHub's token scanning API
        # Or organization webhook for automatic revocation

    async def revoke_slack_token(self, token: str):
        """Revoke Slack bot/user token"""
        # Use Slack Admin API
        # Invalidate token and rotate if possible
```

#### 4.2 Meta-Learning System
```python
# /shhh-monitor/learning/meta_curator.py
class MetaCurator:
    def __init__(self, llm_client):
        self.llm = llm_client

    async def analyze_false_positives(self, entries: List[QuarantineEntry]):
        """Use LLM to improve regex patterns"""
        # Analyze patterns in false positives
        # Generate regex refinements
        # Submit for human approval

    async def detect_new_secret_types(self, quarantine_history: List[QuarantineEntry]):
        """Identify new types of secrets to detect"""
        # Look for patterns in undetected secrets
        # Generate new regex proposals
        # Calculate confidence scores
```

## Database Schema

### Core Tables
```sql
-- Quarantined messages
CREATE TABLE quarantine (
    id SERIAL PRIMARY KEY,
    timestamp TIMESTAMPTZ NOT NULL,
    hypercore_position BIGINT NOT NULL,
    bzzz_message_id TEXT NOT NULL,
    secret_type TEXT NOT NULL,
    severity TEXT CHECK (severity IN ('LOW', 'MEDIUM', 'HIGH', 'CRITICAL')),
    confidence NUMERIC(3,2),
    redacted_content TEXT NOT NULL,
    full_content_hash TEXT NOT NULL, -- For audit purposes
    reviewed BOOLEAN DEFAULT FALSE,
    review_action TEXT, -- 'false_positive', 'confirmed', 'uncertain'
    reviewer TEXT,
    review_timestamp TIMESTAMPTZ
);

-- Pattern history and evolution
CREATE TABLE patterns_history (
    id SERIAL PRIMARY KEY,
    pattern_name TEXT NOT NULL,
    old_regex TEXT,
    new_regex TEXT,
    action TEXT CHECK (action IN ('add', 'update', 'remove')),
    confidence NUMERIC(3,2),
    status TEXT CHECK (status IN ('approved', 'pending', 'rejected')),
    submitted_by TEXT NOT NULL, -- 'human', 'meta_curator', 'feedback_system'
    approved_by TEXT,
    decision_reason TEXT,
    timestamp TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);

-- Revocation events tracking
CREATE TABLE revocations (
    id SERIAL PRIMARY KEY,
    quarantine_id INTEGER REFERENCES quarantine(id),
    secret_type TEXT NOT NULL,
    revocation_method TEXT NOT NULL, -- 'aws_api', 'github_api', 'manual'
    status TEXT CHECK (status IN ('success', 'failed', 'pending')),
    response_data JSONB, -- API response details
    timestamp TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);

-- Performance metrics
CREATE TABLE detection_metrics (
    id SERIAL PRIMARY KEY,
    date DATE NOT NULL,
    total_messages_scanned INTEGER,
    secrets_detected INTEGER,
    false_positives INTEGER,
    patterns_updated INTEGER,
    avg_detection_latency_ms INTEGER
);
```

## Security Considerations

### 1. Secure Secret Storage
- **Never store actual secrets** in the quarantine database
- Use **cryptographic hashes** for audit trails
- **Redact sensitive content** while preserving detection context
- Implement **secure deletion** for expired quarantine entries

### 2. Access Control
- **Role-based access** to dashboard (security admin, reviewer, read-only)
- **Audit logging** for all administrative actions
- **OAuth integration** with existing identity provider
- **API key authentication** for automated systems

### 3. Network Security
- **TLS encryption** for all API communication
- **VPN/private network** access to monitoring systems
- **Rate limiting** to prevent API abuse
- **IP allowlisting** for critical endpoints

## Deployment Architecture

### Development Environment
```yaml
# docker-compose.dev.yml
services:
  shhh-monitor:
    build: .
    environment:
      - DATABASE_URL=postgresql://dev:dev@postgres:5432/shhh_dev
      - HYPERCORE_LOG_PATH=/data/hypercore.log
      - BZZZ_CONFIG_PATH=/config/bzzz.yaml
    volumes:
      - ./data:/data
      - ./config:/config

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: shhh_dev
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev

  redis:
    image: redis:7-alpine
    # For caching and real-time notifications
```

### Production Deployment
```yaml
# docker-compose.prod.yml
services:
  shhh-monitor:
    image: registry.home.deepblack.cloud/tony/shhh-monitor:latest
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == manager
    environment:
      - DATABASE_URL=postgresql://shhh:${SHHH_DB_PASSWORD}@postgres:5432/shhh_prod
      - HYPERCORE_LOG_PATH=/hypercore/current.log
    networks:
      - shhh_network
      - tengig # For dashboard access
```

## Performance Requirements

### Latency Targets
- **Log Processing**: <50ms per hypercore entry
- **Secret Detection**: <10ms per BZZZ message
- **Alert Generation**: <100ms for critical secrets
- **Dashboard Response**: <200ms for UI queries

### Throughput Targets
- **Message Scanning**: 1000 messages/second
- **Concurrent Users**: 10+ dashboard users
- **Alert Volume**: 100+ alerts/hour during peak
- **Database Queries**: <5ms average response time

## Monitoring & Observability

### Metrics Collection
```python
# Prometheus metrics (prometheus_client requires a help string for each metric)
from prometheus_client import Counter, Gauge, Histogram

messages_scanned_total = Counter('shhh_messages_scanned_total', 'Messages scanned')
secrets_detected_total = Counter('shhh_secrets_detected_total', 'Secrets detected', ['secret_type', 'severity'])
detection_latency = Histogram('shhh_detection_latency_seconds', 'Detection latency')
quarantine_size = Gauge('shhh_quarantine_entries_total', 'Quarantined entries')
```

### Health Checks
- **Hypercore connectivity**: Verify log file access
- **Database health**: Connection pool status
- **BZZZ integration**: P2P network connectivity
- **Alert system**: Webhook endpoint validation (see the sketch below)
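
A minimal `/health` sketch along these lines, matching the Dockerfile's `curl -f http://localhost:8000/health` check — the three probe helpers are hypothetical placeholders for the checks above:

```python
# Hypothetical /health endpoint aggregating the four checks listed above.
import os

from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/health")
def health(response: Response):
    checks = {
        "hypercore": os.access("/hypercore/current.log", os.R_OK),  # assumed log path
        "database": check_db_pool(),         # hypothetical helper
        "bzzz": check_bzzz_peers(),          # hypothetical helper
        "alerts": check_webhook_endpoint(),  # hypothetical helper
    }
    if not all(checks.values()):
        response.status_code = 503  # fail the container health check
    return checks
```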

### Logging Strategy
```python
# Structured logging with correlation IDs
{
    "timestamp": "2025-08-02T13:45:00Z",
    "level": "WARNING",
    "event": "secret_detected",
    "correlation_id": "req_123",
    "secret_type": "AWS_ACCESS_KEY",
    "severity": "HIGH",
    "hypercore_position": 58321,
    "bzzz_message_id": "msg_abc123",
    "redacted_content": "AKIA****XYZ found in agent message"
}
```
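
One way to emit records in that shape with only the standard library — a sketch; `structlog` or `python-json-logger` would be the usual production choice:

```python
# Hypothetical helper that prints one JSON log line per event.
import json
import sys
from datetime import datetime, timezone

def log_event(level, event, **fields):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "event": event,
        **fields,
    }
    sys.stdout.write(json.dumps(record) + "\n")

log_event("WARNING", "secret_detected", correlation_id="req_123",
          secret_type="AWS_ACCESS_KEY", severity="HIGH")
```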
|
||||
|
||||
## Testing Strategy
|
||||
|
||||
### Unit Tests
|
||||
- **Regex pattern validation**: Test against known secret formats
|
||||
- **Message parsing**: Verify hypercore and BZZZ format handling
|
||||
- **Quarantine logic**: Test storage and retrieval functions
|
||||
- **Alert generation**: Mock webhook endpoint testing
|
||||
|
||||
### Integration Tests
|
||||
- **End-to-end workflow**: Log → Detection → Quarantine → Alert
|
||||
- **Database operations**: PostgreSQL CRUD operations
|
||||
- **BZZZ integration**: Message interception and blocking
|
||||
- **API endpoints**: FastAPI route testing
|
||||
|
||||
### Security Tests
|
||||
- **Input validation**: SQL injection, XSS prevention
|
||||
- **Access control**: Role-based permission testing
|
||||
- **Data protection**: Verify secret redaction and hashing
|
||||
- **Performance**: Load testing with high message volume
|
||||
|
||||
## Rollout Plan

### Phase 1: Foundation (Weeks 1-2)
- ✅ Core monitoring system with hypercore integration
- ✅ Basic secret detection using SHHH patterns
- ✅ PostgreSQL quarantine storage
- ✅ Simple alerting via webhooks

### Phase 2: Integration (Weeks 3-4)
- ✅ BZZZ network message interception
- ✅ Real-time processing pipeline
- ✅ Enhanced pattern management
- ✅ Performance optimization

### Phase 3: Dashboard (Weeks 5-6)
- ✅ FastAPI backend with full CRUD operations
- ✅ React dashboard for quarantine management
- ✅ Pattern editor and approval workflow
- ✅ Security metrics and reporting

### Phase 4: Automation (Weeks 7-8)
- ✅ Automated secret revocation APIs
- ✅ Meta-learning system for pattern improvement
- ✅ Production deployment and monitoring
- ✅ Documentation and team training

## Success Criteria

### Security Effectiveness
- **Zero secret leaks** in BZZZ P2P network after deployment
- **<1% false positive rate** for secret detection
- **<30 seconds** average time to detect and quarantine secrets
- **99.9% uptime** for monitoring system

### Operational Excellence
- **Complete audit trail** for all security events
- **Self-improving** pattern detection through feedback
- **Scalable architecture** supporting growth in CHORUS usage
- **Team adoption** with trained security administrators

## Risk Mitigation

### Technical Risks
- **Performance impact**: Monitor hypercore processing overhead
- **False positives**: Implement feedback loop for pattern refinement
- **BZZZ integration**: Maintain compatibility with P2P protocol evolution
- **Data loss**: Backup quarantine database and implement recovery procedures

### Security Risks
- **Bypassing detection**: Regular pattern updates and meta-learning
- **System compromise**: Network isolation and access controls
- **Secret exposure**: Implement proper redaction and audit procedures
- **Alert fatigue**: Tune detection thresholds to minimize noise

## Conclusion

This SHHH Hypercore Log Monitor provides comprehensive protection against secret leakage in the CHORUS Services BZZZ P2P network. By implementing real-time detection, automated response, and continuous learning, we ensure that sensitive information remains secure while maintaining the performance and functionality of the distributed AI orchestration platform.

The system builds upon the existing SHHH framework while adding the specific hypercore and BZZZ integrations needed for CHORUS Services. The phased rollout ensures stability and allows for iterative improvement based on real-world usage patterns.
@@ -1,251 +0,0 @@
# 🛡️ CHORUS Services Secrets Sentinel Agent - System Prompt

## Agent Role & Mission

You are the **Secrets Sentinel**, a specialized security agent responsible for monitoring the CHORUS Services hypercore log and BZZZ P2P network messages to detect, quarantine, and prevent the leakage of sensitive credentials and secrets.

## Core Responsibilities

### 🔍 **Detection & Analysis**
- **Real-time Log Monitoring**: Continuously scan hypercore log entries for secret patterns
- **BZZZ Message Inspection**: Analyze P2P messages before they propagate across the network
- **Pattern Recognition**: Apply sophisticated regex patterns to identify various secret types
- **Context Analysis**: Understand the context around detected patterns to minimize false positives

### 🚨 **Immediate Response Actions**
- **Redaction**: Immediately redact detected secrets while preserving log context
- **Quarantine**: Isolate HIGH severity log entries from normal processing
- **Network Blocking**: Prevent BZZZ messages containing secrets from propagating
- **Alert Generation**: Send immediate notifications to security team

### 🔄 **Automated Remediation**
- **Revocation Triggers**: Automatically trigger webhook-based secret revocation
- **API Integration**: Interface with AWS, GitHub, Slack APIs for immediate credential deactivation
- **Audit Trail**: Maintain complete records of all detection and remediation actions

### 🧠 **Adaptive Learning**
- **Pattern Evolution**: Update detection rules based on feedback and new secret types
- **False Positive Reduction**: Refine patterns based on curator feedback
- **Confidence Scoring**: Assign confidence levels to detections for proper escalation

## Detection Patterns & Rules

### **High Severity Secrets (Immediate Quarantine + Revocation)**
```yaml
AWS_ACCESS_KEY:
  regex: "AKIA[0-9A-Z]{16}"
  severity: "CRITICAL"
  confidence: 0.95
  action: "quarantine_and_revoke"

PRIVATE_KEY:
  regex: "-----BEGIN [A-Z ]*PRIVATE KEY-----"
  severity: "CRITICAL"
  confidence: 0.98
  action: "quarantine_and_revoke"

GITHUB_TOKEN:
  regex: "ghp_[0-9A-Za-z]{36}"
  severity: "HIGH"
  confidence: 0.92
  action: "quarantine_and_revoke"
```

### **Medium Severity Secrets (Quarantine + Alert)**
```yaml
JWT_TOKEN:
  regex: "eyJ[A-Za-z0-9_-]+?\\.[A-Za-z0-9_-]+?\\.[A-Za-z0-9_-]+?"
  severity: "MEDIUM"
  confidence: 0.85
  action: "quarantine_and_alert"

SLACK_TOKEN:
  regex: "xox[baprs]-[0-9A-Za-z-]{10,48}"
  severity: "HIGH"
  confidence: 0.90
  action: "quarantine_and_revoke"
```

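A minimal sketch of how rules in this shape can be compiled and applied; a `patterns.yaml` holding the blocks above as top-level keys is assumed for illustration.

```python
import re
import yaml

def load_patterns(path: str = "patterns.yaml") -> dict:
    """Compile each rule's regex once so scanning stays cheap."""
    with open(path) as f:
        rules = yaml.safe_load(f)
    for rule in rules.values():
        rule["compiled"] = re.compile(rule["regex"])
    return rules

def scan(text: str, rules: dict) -> list[dict]:
    """Return one finding per matching rule, with its metadata."""
    findings = []
    for name, rule in rules.items():
        if rule["compiled"].search(text):
            findings.append({"secret_type": name,
                             "severity": rule["severity"],
                             "confidence": rule["confidence"],
                             "action": rule["action"]})
    return findings
```
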
## Behavioral Guidelines

### **Detection Behavior**
1. **Scan Every Log Entry**: Process all hypercore entries in real-time
2. **Parse BZZZ Messages**: Extract and analyze P2P message payloads
3. **Apply Pattern Matching**: Use confidence-weighted regex patterns
4. **Context Preservation**: Maintain enough context for security analysis

### **Response Behavior**
1. **Immediate Action**: For CRITICAL/HIGH severity, act within seconds
2. **Graduated Response**: Different actions based on severity levels
3. **Human Escalation**: Flag uncertain cases for human review
4. **Audit Everything**: Log all actions with timestamps and reasons

### **Learning Behavior**
1. **Accept Feedback**: Process curator reports of false positives/missed secrets
2. **Pattern Refinement**: Propose regex updates based on feedback
3. **Version Control**: Track all pattern changes with confidence scores
4. **Human Approval**: Submit new patterns for security admin approval

## Operational Procedures

### **Log Entry Processing Workflow**
```
1. Receive hypercore log entry
2. Parse entry structure and extract content
3. If BZZZ message → extract P2P payload
4. Apply all active regex patterns
5. Calculate confidence scores
6. Determine severity level
7. Execute appropriate response action
8. Log detection event and actions taken
```

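The eight steps above map onto a small async loop. A sketch under the assumption that `reader`, `detector`, and `quarantine` expose the interfaces used elsewhere in this commit; `extract_bzzz_payload` is a hypothetical helper.

```python
import structlog

logger = structlog.get_logger()

def extract_bzzz_payload(entry):
    """Hypothetical helper: pull the P2P payload out of a BZZZ log entry."""
    return entry.content

async def process_entries(reader, detector, quarantine):
    """Steps 1-8 of the workflow above as a single async loop."""
    async for entry in reader.stream_entries():               # 1. receive
        payload = entry.content                                # 2. parse / extract content
        if entry.is_bzzz_message:                              # 3. BZZZ -> P2P payload
            payload = extract_bzzz_payload(entry)
        findings = detector.scan(payload)                      # 4-5. patterns + confidence
        if not findings:
            continue
        worst = max(findings, key=lambda f: f["confidence"])   # 6. pick decisive finding
        if worst["severity"] in ("CRITICAL", "HIGH"):          # 7. graduated response
            quarantine.quarantine_message(
                entry, worst["secret_type"], worst["severity"],
                redacted_content=detector.redact(payload, worst["value"]))
        logger.info("secret_detected",                         # 8. audit (never log the raw value)
                    secret_type=worst["secret_type"],
                    severity=worst["severity"],
                    confidence=worst["confidence"])
```
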
### **Quarantine Procedure**
```python
import hashlib
from datetime import datetime

def quarantine_log_entry(entry, secret_type, confidence):
    """Quarantine sensitive log entry for security review"""
    redacted_content = redact_secrets(entry.content)
    quarantine_record = {
        "timestamp": datetime.utcnow().isoformat() + "Z",
        "hypercore_position": entry.position,
        "secret_type": secret_type,
        "severity": determine_severity(secret_type),
        "confidence": confidence,
        "redacted_content": redacted_content,
        # Stable digest for the audit trail (Python's built-in hash() is salted per process)
        "content_hash": hashlib.sha256(entry.content.encode()).hexdigest(),
        "source_agent": entry.source_agent,
        "reason": f"Secret detected: {secret_type}"
    }

    store_in_quarantine_db(quarantine_record)
    if entry.is_bzzz_message:
        block_bzzz_propagation(entry.message_id)
    return quarantine_record
```

### **Revocation Trigger Procedure**
```python
def trigger_secret_revocation(secret_type, redacted_sample):
    """Trigger automated secret revocation via webhooks"""
    revocation_payload = {
        "event": "secret_leak_detected",
        "secret_type": secret_type,
        "redacted_key": redacted_sample,
        "hypercore_position": current_position,  # tracked by the monitor loop
        "severity": determine_severity(secret_type),
        "recommended_action": get_revocation_action(secret_type),
        "timestamp": datetime.utcnow().isoformat() + "Z"
    }

    webhook_url = REVOCATION_HOOKS.get(secret_type)  # secret type -> webhook URL map
    if webhook_url:
        send_webhook(webhook_url, revocation_payload)
        log_revocation_attempt(secret_type, "triggered")
```

## Communication Protocols

### **Alert Format for Security Team**
```json
{
  "alert_id": "shhh_12345",
  "timestamp": "2025-08-02T13:45:00Z",
  "severity": "HIGH",
  "secret_type": "AWS_ACCESS_KEY",
  "source": "hypercore_position_58321",
  "agent_source": "whoosh_orchestrator",
  "redacted_content": "Found AWS key AKIA****XYZ in deployment config",
  "confidence": 0.95,
  "actions_taken": ["quarantined", "revocation_triggered"],
  "next_steps": "Manual verification recommended"
}
```

### **Feedback Processing Format**
```json
{
  "feedback_type": "false_positive|missed_secret|pattern_improvement",
  "alert_id": "shhh_12345",
  "secret_type": "AWS_ACCESS_KEY",
  "evidence": "Key was test data: AKIA-TESTKEY-123",
  "suggested_regex_fix": "AKIA[0-9A-Z]{16}(?!-TESTKEY)",
  "reviewer": "security_admin",
  "timestamp": "2025-08-02T14:00:00Z"
}
```

## Performance Requirements

### **Response Time Targets**
- **Detection Latency**: <50ms per log entry
- **Quarantine Action**: <100ms for high severity
- **Revocation Trigger**: <200ms for webhook dispatch
- **BZZZ Block**: <10ms to prevent propagation

### **Accuracy Standards**
- **False Positive Rate**: <2% for high confidence patterns
- **Detection Coverage**: >99% for known secret formats
- **Pattern Confidence**: Minimum 0.80 for active patterns

## Error Handling & Recovery

### **System Failures**
- **Database Connectivity**: Queue quarantine entries locally, sync when recovered (see the sketch after this list)
- **Webhook Failures**: Retry with exponential backoff, alert on continued failure
- **Pattern Loading**: Fall back to core patterns if external config unavailable
- **Log Processing**: Never skip entries, flag for manual review if uncertain

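A minimal sketch of the local queue-and-sync fallback for database outages; the spool path and record shape are illustrative assumptions.

```python
import json
from pathlib import Path

SPOOL = Path("/var/spool/shhh/quarantine.jsonl")  # assumed spool location

def enqueue_locally(record: dict) -> None:
    """Append the quarantine record to a local JSONL spool while the DB is down."""
    SPOOL.parent.mkdir(parents=True, exist_ok=True)
    with SPOOL.open("a") as f:
        f.write(json.dumps(record) + "\n")

def sync_spool(store) -> int:
    """Replay spooled records once connectivity returns; `store` is the real DB writer."""
    if not SPOOL.exists():
        return 0
    count = 0
    for line in SPOOL.read_text().splitlines():
        store(json.loads(line))
        count += 1
    SPOOL.unlink()  # safe only after every record was stored
    return count
```
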
### **Security Incident Response**
- **Potential Breach**: Immediately escalate to security team
- **Pattern Bypass**: Alert security team, request pattern review
- **False Negative**: Update patterns, retroactively scan recent logs
- **System Compromise**: Quarantine all recent activity, manual investigation

## Integration Points

### **CHORUS Services Components**
- **Hypercore Log**: Primary data source for monitoring
- **BZZZ Network**: P2P message inspection and blocking capability
- **WHOOSH Orchestrator**: Agent activity monitoring
- **SLURP Context**: Context-aware secret detection
- **Security Dashboard**: Real-time alert display and management

### **External Systems**
- **AWS IAM**: Automated access key revocation
- **GitHub API**: Personal access token deactivation
- **Slack Admin API**: Bot/user token revocation
- **Security SIEM**: Alert forwarding and correlation
- **Audit System**: Compliance logging and reporting

## Continuous Improvement

### **Pattern Learning Process**
1. **Feedback Collection**: Gather curator reports on detection accuracy
2. **Pattern Analysis**: Identify common false positive/negative patterns
3. **Regex Generation**: Create new patterns using AI-assisted regex generation
4. **Confidence Assessment**: Test new patterns against historical data
5. **Human Review**: Submit high-confidence patterns for security admin approval
6. **Production Deployment**: Activate approved patterns with monitoring

### **Meta-Learning Capabilities**
- **Trend Analysis**: Identify emerging secret types and formats
- **Context Learning**: Improve understanding of legitimate vs. malicious patterns
- **Agent Behavior**: Learn which agents commonly handle sensitive data
- **Temporal Patterns**: Understand when secret leaks are most likely to occur

## Success Metrics

### **Security Effectiveness**
- **Zero secret propagation** in BZZZ P2P network post-deployment
- **Mean time to detection**: <1 minute for any secret exposure
- **Revocation success rate**: >95% for automated responses
- **Coverage improvement**: Continuous expansion of detectable secret types

### **Operational Excellence**
- **System uptime**: >99.9% availability for log monitoring
- **Processing throughput**: Handle 10,000+ log entries per minute
- **Alert quality**: <5% false positive rate for security team alerts
- **Response automation**: >90% of secrets handled without human intervention

You are now equipped to serve as the CHORUS Services Secrets Sentinel. Monitor vigilantly, respond swiftly, and continuously evolve your detection capabilities to protect our distributed AI orchestration platform from credential exposure and security breaches.

Remember: **Security is paramount. When in doubt, quarantine and escalate.**
@@ -1,4 +0,0 @@
# SHHH API Module
"""
FastAPI dashboard and API endpoints for SHHH Secrets Sentinel.
"""
@@ -1,374 +0,0 @@
"""
FastAPI Dashboard Backend for SHHH Secrets Sentinel
Provides REST API endpoints for quarantine management and system monitoring.
"""

import asyncio
import re
from datetime import datetime
from typing import List, Optional
from contextlib import asynccontextmanager

from fastapi import FastAPI, HTTPException, Depends, BackgroundTasks
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
import structlog

from .models import (
    QuarantineEntryResponse, QuarantineReviewRequest, RevocationEventResponse,
    PatternResponse, PatternUpdateRequest, StatsResponse, SystemHealthResponse,
    ProcessingStatsResponse, AlertRequest, WebhookTestRequest, WebhookTestResponse,
    PatternTestRequest, PatternTestResponse, SearchRequest, PaginatedResponse
)
from ..core.quarantine import QuarantineManager, QuarantineEntry
from ..core.detector import SecretDetector
from ..automation.revocation import SecretRevoker
from ..pipeline.processor import MessageProcessor

logger = structlog.get_logger()

# Global components (initialized in lifespan)
quarantine_manager: Optional[QuarantineManager] = None
detector: Optional[SecretDetector] = None
revoker: Optional[SecretRevoker] = None
processor: Optional[MessageProcessor] = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    """Application lifespan manager"""
    global quarantine_manager, detector, revoker, processor

    try:
        # Initialize components
        logger.info("Initializing SHHH API components")

        # These would normally come from configuration
        config = {
            'database_url': 'postgresql://shhh:password@localhost:5432/shhh_sentinel',
            'patterns_file': 'patterns.yaml',
            'revocation_webhooks': {
                'AWS_ACCESS_KEY': 'https://security.chorus.services/hooks/aws-revoke',
                'GITHUB_TOKEN': 'https://security.chorus.services/hooks/github-revoke',
                'SLACK_TOKEN': 'https://security.chorus.services/hooks/slack-revoke'
            }
        }

        # Initialize quarantine manager
        quarantine_manager = QuarantineManager(config['database_url'])
        await quarantine_manager.initialize()

        # Initialize detector
        detector = SecretDetector(config['patterns_file'])

        # Initialize revoker
        revoker = SecretRevoker(quarantine_manager, config['revocation_webhooks'])

        logger.info("SHHH API components initialized successfully")

        yield

    except Exception as e:
        logger.error(f"Failed to initialize SHHH API: {e}")
        raise
    finally:
        # Cleanup
        if quarantine_manager:
            await quarantine_manager.close()
        logger.info("SHHH API components shut down")

app = FastAPI(
    title="SHHH Secrets Sentinel API",
    description="REST API for managing secrets detection, quarantine, and response",
    version="1.0.0",
    lifespan=lifespan
)

# CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Configure appropriately for production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Dependency functions
async def get_quarantine_manager() -> QuarantineManager:
    if not quarantine_manager:
        raise HTTPException(status_code=503, detail="Quarantine manager not available")
    return quarantine_manager


async def get_detector() -> SecretDetector:
    if not detector:
        raise HTTPException(status_code=503, detail="Secret detector not available")
    return detector


async def get_revoker() -> SecretRevoker:
    if not revoker:
        raise HTTPException(status_code=503, detail="Secret revoker not available")
    return revoker

# Health and status endpoints
@app.get("/health", response_model=SystemHealthResponse)
async def get_health():
    """Get system health status"""
    health = {
        'status': 'healthy',
        'timestamp': datetime.now(),
        'components': {
            'quarantine_manager': {
                'initialized': quarantine_manager is not None,
                'database_connected': quarantine_manager.pool is not None if quarantine_manager else False
            },
            'detector': {
                'initialized': detector is not None,
                'patterns_loaded': len(detector.patterns) if detector else 0
            },
            'revoker': {
                'initialized': revoker is not None,
                'webhooks_configured': len(revoker.webhook_config) if revoker else 0
            }
        }
    }

    return health


@app.get("/stats", response_model=StatsResponse)
async def get_stats(qm: QuarantineManager = Depends(get_quarantine_manager)):
    """Get quarantine statistics"""
    stats = await qm.get_quarantine_stats()
    return stats

# Quarantine management endpoints
@app.get("/quarantine", response_model=List[QuarantineEntryResponse])
async def get_quarantine_entries(
    limit: int = 100,
    offset: int = 0,
    severity: Optional[str] = None,
    reviewed: Optional[bool] = None,
    qm: QuarantineManager = Depends(get_quarantine_manager)
):
    """Get quarantine entries with optional filters"""
    entries = await qm.get_quarantine_entries(
        limit=limit,
        offset=offset,
        severity_filter=severity,
        reviewed_filter=reviewed
    )

    return [QuarantineEntryResponse(**entry.__dict__) for entry in entries]


@app.post("/quarantine/search", response_model=PaginatedResponse)
async def search_quarantine_entries(
    search: SearchRequest,
    qm: QuarantineManager = Depends(get_quarantine_manager)
):
    """Search quarantine entries with advanced filters"""
    # This would implement more complex search logic
    entries = await qm.get_quarantine_entries(
        limit=search.limit,
        offset=search.offset,
        severity_filter=search.severity,
        reviewed_filter=search.reviewed
    )

    items = [QuarantineEntryResponse(**entry.__dict__) for entry in entries]

    return PaginatedResponse(
        items=items,
        total=len(items),  # This would be the actual total from a count query
        limit=search.limit,
        offset=search.offset,
        has_more=len(items) == search.limit
    )


@app.post("/quarantine/{entry_id}/review")
async def review_quarantine_entry(
    entry_id: int,
    review: QuarantineReviewRequest,
    qm: QuarantineManager = Depends(get_quarantine_manager)
):
    """Mark a quarantine entry as reviewed"""
    success = await qm.mark_reviewed(entry_id, review.action, review.reviewer)

    if not success:
        raise HTTPException(status_code=404, detail="Quarantine entry not found")

    return {"status": "success", "message": f"Entry {entry_id} marked as {review.action}"}


@app.get("/quarantine/{entry_id}")
async def get_quarantine_entry(
    entry_id: int,
    qm: QuarantineManager = Depends(get_quarantine_manager)
):
    """Get a specific quarantine entry by ID"""
    # This would need to be implemented in QuarantineManager
    raise HTTPException(status_code=501, detail="Not implemented yet")

# Pattern management endpoints
@app.get("/patterns", response_model=List[PatternResponse])
async def get_patterns(detector: SecretDetector = Depends(get_detector)):
    """Get all detection patterns"""
    patterns = []
    for name, config in detector.patterns.items():
        patterns.append(PatternResponse(
            name=name,
            regex=config['regex'],
            description=config.get('description', ''),
            severity=config.get('severity', 'MEDIUM'),
            confidence=config.get('confidence', 0.8),
            active=config.get('active', True)
        ))

    return patterns


@app.post("/patterns/{pattern_name}")
async def update_pattern(
    pattern_name: str,
    pattern: PatternUpdateRequest,
    detector: SecretDetector = Depends(get_detector)
):
    """Update or create a detection pattern"""
    # This would update the patterns.yaml file
    # For now, just update in memory
    detector.patterns[pattern_name] = {
        'regex': pattern.regex,
        'description': pattern.description,
        'severity': pattern.severity,
        'confidence': pattern.confidence,
        'active': pattern.active
    }

    # Recompile regex (re is imported at module level)
    try:
        detector.patterns[pattern_name]['compiled_regex'] = re.compile(
            pattern.regex, re.MULTILINE | re.DOTALL
        )
    except re.error as e:
        raise HTTPException(status_code=400, detail=f"Invalid regex: {e}")

    return {"status": "success", "message": f"Pattern {pattern_name} updated"}


@app.post("/patterns/{pattern_name}/test", response_model=PatternTestResponse)
async def test_pattern(
    pattern_name: str,
    test_request: PatternTestRequest,
    detector: SecretDetector = Depends(get_detector)
):
    """Test a detection pattern against sample text"""
    try:
        matches = detector.test_pattern(pattern_name, test_request.test_text)
        return PatternTestResponse(
            matches=[match.__dict__ for match in matches],
            match_count=len(matches)
        )
    except ValueError as e:
        raise HTTPException(status_code=404, detail=str(e))


@app.get("/patterns/stats")
async def get_pattern_stats(detector: SecretDetector = Depends(get_detector)):
    """Get pattern statistics"""
    return detector.get_pattern_stats()

# Revocation management endpoints
@app.get("/revocations", response_model=List[RevocationEventResponse])
async def get_revocations(
    limit: int = 100,
    offset: int = 0,
    qm: QuarantineManager = Depends(get_quarantine_manager)
):
    """Get revocation events"""
    # This would need to be implemented in QuarantineManager
    raise HTTPException(status_code=501, detail="Not implemented yet")


@app.post("/revocations/test", response_model=WebhookTestResponse)
async def test_webhook(
    test_request: WebhookTestRequest,
    revoker: SecretRevoker = Depends(get_revoker)
):
    """Test a webhook endpoint"""
    result = await revoker.test_webhook_endpoint(test_request.secret_type)
    return WebhookTestResponse(**result)


@app.get("/revocations/stats")
async def get_revocation_stats(revoker: SecretRevoker = Depends(get_revoker)):
    """Get revocation statistics"""
    return revoker.get_stats()

# Administrative endpoints
@app.post("/admin/cleanup")
async def cleanup_old_entries(
    qm: QuarantineManager = Depends(get_quarantine_manager)
):
    """Clean up old quarantine entries"""
    deleted_count = await qm.cleanup_old_entries()
    return {"status": "success", "deleted_entries": deleted_count}


@app.post("/admin/reload-patterns")
async def reload_patterns(detector: SecretDetector = Depends(get_detector)):
    """Reload detection patterns from file"""
    detector.load_patterns()
    return {"status": "success", "message": "Patterns reloaded"}


@app.post("/admin/reset-stats")
async def reset_stats(revoker: SecretRevoker = Depends(get_revoker)):
    """Reset revocation statistics"""
    revoker.reset_stats()
    return {"status": "success", "message": "Statistics reset"}

# Monitoring endpoints
@app.get("/metrics/prometheus")
async def get_prometheus_metrics():
    """Get metrics in Prometheus format"""
    # This would generate Prometheus-formatted metrics
    raise HTTPException(status_code=501, detail="Prometheus metrics not implemented yet")


@app.get("/logs/recent")
async def get_recent_logs(limit: int = 100):
    """Get recent system logs"""
    # This would return recent log entries
    raise HTTPException(status_code=501, detail="Log endpoint not implemented yet")


# Error handlers
@app.exception_handler(Exception)
async def general_exception_handler(request, exc):
    logger.error(f"Unhandled exception: {exc}")
    return JSONResponse(
        status_code=500,
        content={"detail": "Internal server error"}
    )

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(
        "api.main:app",
        host="127.0.0.1",
        port=8000,
        reload=True,
        log_level="info"
    )
@@ -1,149 +0,0 @@
"""
Pydantic models for SHHH API endpoints.
"""

from datetime import datetime
from typing import List, Dict, Any, Optional
from pydantic import BaseModel, Field

class QuarantineEntryResponse(BaseModel):
    """Response model for quarantine entries"""
    id: int
    timestamp: datetime
    hypercore_position: int
    bzzz_message_id: Optional[str] = None
    secret_type: str
    severity: str
    confidence: float
    redacted_content: str
    content_hash: str
    source_agent: str
    match_count: int
    reviewed: bool
    review_action: Optional[str] = None
    reviewer: Optional[str] = None
    review_timestamp: Optional[datetime] = None
    metadata: Dict[str, Any] = {}


class QuarantineReviewRequest(BaseModel):
    """Request model for reviewing quarantine entries"""
    action: str = Field(..., description="Review action: 'false_positive', 'confirmed', 'uncertain'")
    reviewer: str = Field(..., description="Name or ID of the reviewer")
    notes: Optional[str] = Field(None, description="Optional review notes")


class RevocationEventResponse(BaseModel):
    """Response model for revocation events"""
    id: int
    quarantine_id: int
    secret_type: str
    revocation_method: str
    status: str
    response_data: Dict[str, Any] = {}
    timestamp: datetime


class PatternResponse(BaseModel):
    """Response model for detection patterns"""
    name: str
    regex: str
    description: str
    severity: str
    confidence: float
    active: bool


class PatternUpdateRequest(BaseModel):
    """Request model for updating patterns"""
    regex: str = Field(..., description="Regular expression pattern")
    description: Optional[str] = Field(None, description="Pattern description")
    severity: str = Field(..., description="Severity level: LOW, MEDIUM, HIGH, CRITICAL")
    confidence: float = Field(..., ge=0.0, le=1.0, description="Confidence score")
    active: bool = Field(True, description="Whether pattern is active")


class StatsResponse(BaseModel):
    """Response model for system statistics"""
    total_entries: int
    pending_review: int
    critical_count: int
    high_count: int
    medium_count: int
    low_count: int
    last_24h: int
    last_7d: int


class SystemHealthResponse(BaseModel):
    """Response model for system health"""
    status: str
    timestamp: datetime
    components: Dict[str, Dict[str, Any]]


class ProcessingStatsResponse(BaseModel):
    """Response model for processing statistics"""
    entries_processed: int
    secrets_detected: int
    messages_quarantined: int
    revocations_triggered: int
    processing_errors: int
    uptime_hours: Optional[float] = None
    entries_per_second: Optional[float] = None
    secrets_per_hour: Optional[float] = None
    is_running: bool


class AlertRequest(BaseModel):
    """Request model for manual alerts"""
    message: str = Field(..., description="Alert message")
    severity: str = Field(..., description="Alert severity")
    source: str = Field(..., description="Alert source")


class WebhookTestRequest(BaseModel):
    """Request model for testing webhook endpoints"""
    secret_type: str = Field(..., description="Secret type to test")


class WebhookTestResponse(BaseModel):
    """Response model for webhook tests"""
    success: bool
    method: Optional[str] = None
    response_data: Dict[str, Any] = {}
    error: Optional[str] = None


class PatternTestRequest(BaseModel):
    """Request model for testing detection patterns"""
    pattern_name: str = Field(..., description="Name of pattern to test")
    test_text: str = Field(..., description="Text to test against pattern")


class PatternTestResponse(BaseModel):
    """Response model for pattern testing"""
    matches: List[Dict[str, Any]]
    match_count: int


class SearchRequest(BaseModel):
    """Request model for searching quarantine entries"""
    query: Optional[str] = Field(None, description="Search query")
    secret_type: Optional[str] = Field(None, description="Filter by secret type")
    severity: Optional[str] = Field(None, description="Filter by severity")
    reviewed: Optional[bool] = Field(None, description="Filter by review status")
    start_date: Optional[datetime] = Field(None, description="Start date filter")
    end_date: Optional[datetime] = Field(None, description="End date filter")
    limit: int = Field(100, ge=1, le=1000, description="Result limit")
    offset: int = Field(0, ge=0, description="Result offset")


class PaginatedResponse(BaseModel):
    """Generic paginated response model"""
    items: List[Any]
    total: int
    limit: int
    offset: int
    has_more: bool
@@ -1,4 +0,0 @@
# SHHH Automation Module
"""
Automated response and revocation systems for secret detection.
"""
@@ -1,474 +0,0 @@
"""
Automated Secret Revocation System for SHHH Secrets Sentinel
Provides automated response capabilities for different types of detected secrets.
"""

import asyncio
import aiohttp
import json
import time
from typing import Dict, Any, Optional, List
from dataclasses import dataclass
from datetime import datetime
import structlog

from ..core.quarantine import QuarantineEntry, RevocationEvent, QuarantineManager
from ..core.detector import SecretMatch

logger = structlog.get_logger()

@dataclass
class RevocationRequest:
    """Represents a request to revoke a secret"""
    quarantine_id: int
    secret_type: str
    redacted_secret: str
    urgency: str  # 'immediate', 'high', 'medium', 'low'
    metadata: Dict[str, Any]


@dataclass
class RevocationResponse:
    """Represents the response from a revocation attempt"""
    success: bool
    method: str
    response_data: Dict[str, Any]
    error_message: Optional[str] = None
    revocation_id: Optional[str] = None

class SecretRevoker:
    """
    Automated secret revocation system that integrates with various cloud providers
    and services to automatically disable compromised credentials.
    """

    def __init__(self, quarantine_manager: QuarantineManager, webhook_config: Dict[str, str] = None):
        self.quarantine = quarantine_manager
        self.webhook_config = webhook_config or {}

        # Revocation timeouts and retry settings
        self.request_timeout = 10  # seconds
        self.max_retries = 3
        self.retry_delay = 2  # seconds

        # Statistics
        self.stats = {
            'total_revocations': 0,
            'successful_revocations': 0,
            'failed_revocations': 0,
            'revocations_by_type': {},
            'last_reset': datetime.now()
        }

        logger.info("Initialized SecretRevoker")

    async def trigger_revocation(self, quarantine_entry: QuarantineEntry) -> Optional[RevocationResponse]:
        """Trigger automatic revocation for a quarantined secret"""
        try:
            revocation_request = RevocationRequest(
                quarantine_id=quarantine_entry.id,
                secret_type=quarantine_entry.secret_type,
                redacted_secret=self._extract_redacted_from_metadata(quarantine_entry),
                urgency=self._determine_urgency(quarantine_entry.severity),
                metadata={
                    'source_agent': quarantine_entry.source_agent,
                    'detection_timestamp': quarantine_entry.timestamp.isoformat(),
                    'confidence': quarantine_entry.confidence,
                    'hypercore_position': quarantine_entry.hypercore_position
                }
            )

            # Determine revocation method
            revocation_method = self._get_revocation_method(quarantine_entry.secret_type)
            if not revocation_method:
                logger.warning(f"No revocation method configured for {quarantine_entry.secret_type}")
                return None

            # Attempt revocation
            response = await self._execute_revocation(revocation_request, revocation_method)

            # Record the revocation event
            await self._record_revocation_event(quarantine_entry, response)

            # Update statistics
            self._update_stats(quarantine_entry.secret_type, response.success)

            return response

        except Exception as e:
            logger.error(f"Failed to trigger revocation for quarantine {quarantine_entry.id}: {e}")
            return None

    def _extract_redacted_from_metadata(self, quarantine_entry: QuarantineEntry) -> str:
        """Extract redacted secret from quarantine metadata"""
        try:
            matches = quarantine_entry.metadata.get('matches', [])
            if matches:
                # Get the first match's redacted text
                return matches[0].get('redacted_text', 'REDACTED')
        except Exception:  # malformed metadata must never break revocation
            pass

        return 'REDACTED'

    def _determine_urgency(self, severity: str) -> str:
        """Determine revocation urgency based on severity"""
        urgency_map = {
            'CRITICAL': 'immediate',
            'HIGH': 'high',
            'MEDIUM': 'medium',
            'LOW': 'low'
        }
        return urgency_map.get(severity, 'medium')

    def _get_revocation_method(self, secret_type: str) -> Optional[str]:
        """Get the appropriate revocation method for a secret type"""
        method_map = {
            'AWS_ACCESS_KEY': 'aws_iam_revocation',
            'AWS_SECRET_KEY': 'aws_iam_revocation',
            'GITHUB_TOKEN': 'github_token_revocation',
            'GITHUB_OAUTH': 'github_token_revocation',
            'SLACK_TOKEN': 'slack_token_revocation',
            'GOOGLE_API_KEY': 'google_api_revocation',
            'DOCKER_TOKEN': 'docker_token_revocation'
        }
        return method_map.get(secret_type)

    async def _execute_revocation(self, request: RevocationRequest, method: str) -> RevocationResponse:
        """Execute the actual revocation based on the method"""
        method_handlers = {
            'aws_iam_revocation': self._revoke_aws_credentials,
            'github_token_revocation': self._revoke_github_token,
            'slack_token_revocation': self._revoke_slack_token,
            'google_api_revocation': self._revoke_google_api_key,
            'docker_token_revocation': self._revoke_docker_token,
            'webhook_revocation': self._revoke_via_webhook
        }

        handler = method_handlers.get(method, self._revoke_via_webhook)

        for attempt in range(self.max_retries):
            try:
                response = await handler(request)
                if response.success:
                    logger.info(
                        f"Successfully revoked {request.secret_type}",
                        quarantine_id=request.quarantine_id,
                        method=method,
                        attempt=attempt + 1
                    )
                    return response

                # Log failure and retry if not successful
                logger.warning(
                    f"Revocation attempt {attempt + 1} failed",
                    quarantine_id=request.quarantine_id,
                    method=method,
                    error=response.error_message
                )

                if attempt < self.max_retries - 1:
                    await asyncio.sleep(self.retry_delay * (attempt + 1))  # Linearly increasing backoff

            except Exception as e:
                logger.error(f"Revocation attempt {attempt + 1} error: {e}")
                if attempt < self.max_retries - 1:
                    await asyncio.sleep(self.retry_delay * (attempt + 1))

        # All attempts failed
        return RevocationResponse(
            success=False,
            method=method,
            response_data={},
            error_message=f"All {self.max_retries} revocation attempts failed"
        )

    async def _revoke_aws_credentials(self, request: RevocationRequest) -> RevocationResponse:
        """Revoke AWS credentials via webhook"""
        return await self._revoke_with_config(
            request, 'AWS_ACCESS_KEY', 'aws_iam_revocation', 'AWS',
            'Revoke IAM access key immediately')

    async def _revoke_github_token(self, request: RevocationRequest) -> RevocationResponse:
        """Revoke GitHub token via webhook"""
        return await self._revoke_with_config(
            request, 'GITHUB_TOKEN', 'github_token_revocation', 'GitHub',
            'Revoke GitHub token via API or settings')

    async def _revoke_slack_token(self, request: RevocationRequest) -> RevocationResponse:
        """Revoke Slack token via webhook"""
        return await self._revoke_with_config(
            request, 'SLACK_TOKEN', 'slack_token_revocation', 'Slack',
            'Revoke Slack token via Admin API')

    async def _revoke_google_api_key(self, request: RevocationRequest) -> RevocationResponse:
        """Revoke Google API key via webhook"""
        return await self._revoke_with_config(
            request, 'GOOGLE_API_KEY', 'google_api_revocation', 'Google API',
            'Revoke API key via Google Cloud Console')

    async def _revoke_docker_token(self, request: RevocationRequest) -> RevocationResponse:
        """Revoke Docker token via webhook"""
        return await self._revoke_with_config(
            request, 'DOCKER_TOKEN', 'docker_token_revocation', 'Docker',
            'Revoke Docker token via Hub settings')

    async def _revoke_with_config(
        self,
        request: RevocationRequest,
        config_key: str,
        method: str,
        provider: str,
        recommended_action: str
    ) -> RevocationResponse:
        """Shared webhook dispatch used by all provider-specific revocations."""
        webhook_url = self.webhook_config.get(config_key)
        if not webhook_url:
            return RevocationResponse(
                success=False,
                method=method,
                response_data={},
                error_message=f"No {provider} revocation webhook configured"
            )

        payload = {
            'event': 'secret_leak_detected',
            'secret_type': request.secret_type,
            'redacted_key': request.redacted_secret,
            'urgency': request.urgency,
            'quarantine_id': request.quarantine_id,
            'timestamp': datetime.now().isoformat(),
            'recommended_action': recommended_action,
            'metadata': request.metadata
        }

        return await self._send_webhook_request(webhook_url, payload, method)

    async def _revoke_via_webhook(self, request: RevocationRequest) -> RevocationResponse:
        """Generic webhook revocation for unknown secret types"""
        # Try to find a generic webhook endpoint
        webhook_url = self.webhook_config.get('GENERIC',
                                              self.webhook_config.get('DEFAULT'))

        if not webhook_url:
            return RevocationResponse(
                success=False,
                method='webhook_revocation',
                response_data={},
                error_message=f"No webhook configured for {request.secret_type}"
            )

        payload = {
            'event': 'secret_leak_detected',
            'secret_type': request.secret_type,
            'redacted_key': request.redacted_secret,
            'urgency': request.urgency,
            'quarantine_id': request.quarantine_id,
            'timestamp': datetime.now().isoformat(),
            'recommended_action': 'Manual review and revocation required',
            'metadata': request.metadata
        }

        return await self._send_webhook_request(webhook_url, payload, 'webhook_revocation')

    async def _send_webhook_request(self, url: str, payload: Dict[str, Any], method: str) -> RevocationResponse:
        """Send webhook request and handle response"""
        try:
            async with aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=self.request_timeout)) as session:
                async with session.post(url, json=payload) as response:
                    response_data = {}
                    try:
                        response_data = await response.json()
                    except Exception:  # non-JSON response bodies fall back to raw text
                        response_data = {'text': await response.text()}

                    if response.status == 200:
                        return RevocationResponse(
                            success=True,
                            method=method,
                            response_data=response_data,
                            revocation_id=response_data.get('revocation_id')
                        )
                    else:
                        return RevocationResponse(
                            success=False,
                            method=method,
                            response_data=response_data,
                            error_message=f"HTTP {response.status}: {response_data}"
                        )

        except asyncio.TimeoutError:
            return RevocationResponse(
                success=False,
                method=method,
                response_data={},
                error_message=f"Webhook request timed out after {self.request_timeout}s"
            )
        except Exception as e:
            return RevocationResponse(
                success=False,
                method=method,
                response_data={},
                error_message=f"Webhook request failed: {str(e)}"
            )

    async def _record_revocation_event(self, quarantine_entry: QuarantineEntry, response: RevocationResponse):
        """Record revocation event in the database"""
        try:
            revocation_event = RevocationEvent(
                quarantine_id=quarantine_entry.id,
                secret_type=quarantine_entry.secret_type,
                revocation_method=response.method,
                status='success' if response.success else 'failed',
                response_data=response.response_data,
                timestamp=datetime.now()
            )

            await self.quarantine.record_revocation(revocation_event)

        except Exception as e:
            logger.error(f"Failed to record revocation event: {e}")

    def _update_stats(self, secret_type: str, success: bool):
        """Update revocation statistics"""
        self.stats['total_revocations'] += 1

        if success:
            self.stats['successful_revocations'] += 1
        else:
            self.stats['failed_revocations'] += 1

        # Update by-type stats
        if secret_type not in self.stats['revocations_by_type']:
            self.stats['revocations_by_type'][secret_type] = {
                'total': 0,
                'successful': 0,
                'failed': 0
            }

        type_stats = self.stats['revocations_by_type'][secret_type]
        type_stats['total'] += 1

        if success:
            type_stats['successful'] += 1
        else:
            type_stats['failed'] += 1

    async def test_webhook_endpoint(self, secret_type: str) -> Dict[str, Any]:
        """Test a webhook endpoint with a test payload"""
        webhook_url = self.webhook_config.get(secret_type)
        if not webhook_url:
            return {
                'success': False,
                'error': f'No webhook configured for {secret_type}'
            }

        test_payload = {
            'event': 'webhook_test',
            'secret_type': secret_type,
            'test': True,
            'timestamp': datetime.now().isoformat()
        }

        try:
            response = await self._send_webhook_request(webhook_url, test_payload, 'test')
            return {
                'success': response.success,
                'method': response.method,
                'response_data': response.response_data,
                'error': response.error_message
            }
        except Exception as e:
            return {
                'success': False,
                'error': str(e)
            }

    def get_stats(self) -> Dict[str, Any]:
        """Get revocation statistics"""
        current_time = datetime.now()
        uptime_hours = (current_time - self.stats['last_reset']).total_seconds() / 3600

        stats = self.stats.copy()
        stats.update({
            'uptime_hours': round(uptime_hours, 2),
            'success_rate': (
                self.stats['successful_revocations'] / max(1, self.stats['total_revocations'])
            ) * 100,
            'configured_webhooks': list(self.webhook_config.keys())
        })

        return stats

    def reset_stats(self):
        """Reset statistics counters"""
        self.stats = {
            'total_revocations': 0,
            'successful_revocations': 0,
            'failed_revocations': 0,
            'revocations_by_type': {},
            'last_reset': datetime.now()
        }

        logger.info("SecretRevoker statistics reset")
@@ -1,33 +0,0 @@
# Configuration for the SHHH Secrets Sentinel

# -- File Paths --
# Path to the primary, raw hypercore log to be monitored.
primary_log_path: '/home/tony/AI/projects/chorus.services/modules/shhh/primary.log'

# Path where the sanitized sister hypercore log will be written.
sanitized_log_path: '/home/tony/AI/projects/chorus.services/modules/shhh/sanitized.log'

# Path to the YAML file containing regex patterns for secret detection.
patterns_file: 'patterns.yaml'

# Path to the system prompt file for the LLM agent.
shhh_agent_prompt_file: 'SHHH_SECRETS_SENTINEL_AGENT_PROMPT.md'


# -- Database --
# Connection string for the PostgreSQL database used for quarantining secrets.
# Format: postgresql://user:password@host:port/database
database_url: 'postgresql://shhh:password@localhost:5432/shhh_sentinel'


# -- LLM Analyzer (Ollama) --
# The API endpoint for the Ollama instance.
ollama_endpoint: 'http://localhost:11434/api/generate'

# The name of the model to use from Ollama (e.g., llama3, codellama).
ollama_model: 'llama3'

# The confidence score threshold for regex matches.
# Matches with confidence >= this value will be quarantined immediately, skipping the LLM.
# Matches with confidence < this value will be sent to the LLM for verification.
llm_confidence_threshold: 0.90
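A minimal sketch of the routing rule the last option describes, assuming a regex match dict in the detector's output shape and an `LLMAnalyzer` as defined later in this commit:

```python
def route_match(match: dict, analyzer, threshold: float = 0.90) -> bool:
    """Return True when the match should be quarantined.

    High-confidence regex hits skip the LLM; borderline hits are verified first.
    """
    if match["confidence"] >= threshold:
        return True  # quarantine immediately
    verdict = analyzer.analyze(match.get("context", match["value"]))
    return bool(verdict.get("secret_found"))
```
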
@@ -1,6 +0,0 @@
# SHHH Core Module
"""
Core components for the SHHH Secrets Sentinel system.
"""

__version__ = "1.0.0"
@@ -1,52 +0,0 @@
import re
import yaml
from pathlib import Path

class SecretDetector:
    """
    A simplified secret detection engine using configurable regex patterns.
    It scans text for secrets, redacts them, and provides metadata.
    """
    def __init__(self, patterns_file: str = "patterns.yaml"):
        self.patterns_file = Path(patterns_file)
        self.patterns = self._load_patterns()

    def _load_patterns(self) -> dict:
        """Load detection patterns from YAML configuration."""
        try:
            with open(self.patterns_file, 'r') as f:
                config = yaml.safe_load(f)

            patterns = config.get('patterns', {})
            # Pre-compile regex for efficiency
            for name, props in patterns.items():
                if props.get('active', True):
                    props['compiled_regex'] = re.compile(props['regex'])
            return patterns
        except Exception as e:
            print(f"[ERROR] Failed to load patterns from {self.patterns_file}: {e}")
            return {}

    def scan(self, text: str) -> list[dict]:
        """Scans text and returns a list of found secrets with metadata."""
        matches = []
        for pattern_name, pattern in self.patterns.items():
            if pattern.get('active', True) and 'compiled_regex' in pattern:
                regex_match = pattern['compiled_regex'].search(text)
                if regex_match:
                    matches.append({
                        "secret_type": pattern_name,
                        "value": regex_match.group(0),
                        "confidence": pattern.get("confidence", 0.8),
                        "severity": pattern.get("severity", "MEDIUM")
                    })
        return matches

    def redact(self, text: str, secret_value: str) -> str:
        """Redacts a specific secret value within a string."""
        # Ensure we don't reveal too much for very short secrets
        if len(secret_value) < 8:
            return text.replace(secret_value, "[REDACTED]")

        redacted_str = secret_value[:4] + "****" + secret_value[-4:]
        return text.replace(secret_value, f"[REDACTED:{redacted_str}]")
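For illustration, scanning and redacting a fabricated AWS-style key with this class (assuming a `patterns.yaml` containing the AWS rule shown earlier):

```python
detector = SecretDetector("patterns.yaml")
line = "deploy with key AKIAIOSFODNN7EXAMPLE please"
for match in detector.scan(line):
    line = detector.redact(line, match["value"])
print(line)  # deploy with key [REDACTED:AKIA****MPLE] please
```
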
@@ -1,35 +0,0 @@
import asyncio
from datetime import datetime

class LogEntry:
    """A mock log entry object for testing purposes."""
    def __init__(self, content):
        self.content = content
        self.timestamp = datetime.now()
        # Add other fields as needed to match the processor's expectations
        self.source_agent = "mock_agent"
        self.message_type = "mock_message"
        self.metadata = {}
        self.is_bzzz_message = False
        self.bzzz_message_id = None

class HypercoreReader:
    """
    A simplified, mock HypercoreReader that reads from a plain text file
    to simulate a stream of log entries for testing.
    """
    def __init__(self, log_path: str, **kwargs):
        self.log_path = log_path

    async def stream_entries(self):
        """
        An async generator that yields log entries from a text file.
        """
        try:
            with open(self.log_path, 'r') as f:
                for line in f:
                    yield LogEntry(line.strip())
                    await asyncio.sleep(0.01)  # Simulate async behavior
        except FileNotFoundError:
            print(f"[ERROR] Hypercore log file not found at: {self.log_path}")
            return
@@ -1,44 +0,0 @@
import requests
import json

class LLMAnalyzer:
    """Analyzes text for secrets using a local LLM via Ollama."""

    def __init__(self, endpoint: str, model: str, system_prompt: str):
        self.endpoint = endpoint
        self.model = model
        self.system_prompt = system_prompt

    def analyze(self, text: str) -> dict:
        """
        Sends text to the Ollama API for analysis and returns a structured JSON response.

        Returns:
            A dictionary like:
            {
                "secret_found": bool,
                "secret_type": str,
                "confidence_score": float,
                "severity": str
            }
            Returns a default "not found" response on error.
        """
        prompt = f"Log entry: \"{text}\"\n\nAnalyze this for secrets and respond with only the required JSON."
        payload = {
            "model": self.model,
            "system": self.system_prompt,
            "prompt": prompt,
            "format": "json",
            "stream": False
        }
        try:
            response = requests.post(self.endpoint, json=payload, timeout=15)
            response.raise_for_status()
            # The response from Ollama is a JSON string, which needs to be parsed.
            analysis = json.loads(response.json().get("response", "{}"))
            return analysis
        except (requests.exceptions.RequestException, json.JSONDecodeError) as e:
            print(f"[ERROR] LLMAnalyzer failed: {e}")
            # Fallback: If LLM fails, assume no secret was found to avoid blocking the pipeline.
            return {"secret_found": False}
@@ -1,22 +0,0 @@
from datetime import datetime

class QuarantineManager:
    """
    A simplified, mock QuarantineManager for testing purposes.
    It prints quarantined messages to the console instead of saving to a database.
    """
    def __init__(self, database_url: str, **kwargs):
        print(f"[MockQuarantine] Initialized with db_url: {database_url}")

    def quarantine_message(self, message, secret_type: str, severity: str, redacted_content: str):
        """
        Prints a quarantined message to the console.
        """
        print("\n--- QUARANTINE ALERT ---")
        print(f"Timestamp: {datetime.now().isoformat()}")
        print(f"Severity: {severity}")
        print(f"Secret Type: {secret_type}")
        print(f"Original Content (from mock): {message.content}")
        print(f"Redacted Content: {redacted_content}")
        print("------------------------\n")
@@ -1,16 +0,0 @@
|
||||
class SanitizedWriter:
|
||||
"""Writes log entries to the sanitized sister hypercore log."""
|
||||
|
||||
def __init__(self, sanitized_log_path: str):
|
||||
self.log_path = sanitized_log_path
|
||||
# Placeholder for hypercore writing logic. For now, we'll append to a file.
|
||||
self.log_file = open(self.log_path, "a")
|
||||
|
||||
def write(self, log_entry: str):
|
||||
"""Writes a single log entry to the sanitized stream."""
|
||||
self.log_file.write(log_entry + "\n")
|
||||
self.log_file.flush()
|
||||
|
||||
def close(self):
|
||||
self.log_file.close()
|
||||
|
||||
@@ -1,4 +0,0 @@
|
||||
# SHHH Integrations Module
|
||||
"""
|
||||
Integration components for BZZZ network and external systems.
|
||||
"""
|
||||
@@ -1,369 +0,0 @@
|
||||
"""
|
||||
BZZZ Message Interceptor for SHHH Secrets Sentinel
|
||||
Intercepts and validates BZZZ P2P messages before network propagation.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import time
|
||||
from typing import Dict, Any, Optional, Set, Callable
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime, timedelta  # timedelta is used by cleanup_old_blocked_messages below
|
||||
import structlog
|
||||
|
||||
from ..core.hypercore_reader import BzzzMessage
|
||||
from ..core.detector import SecretDetector, DetectionResult
|
||||
from ..core.quarantine import QuarantineManager
|
||||
|
||||
logger = structlog.get_logger()
|
||||
|
||||
|
||||
@dataclass
|
||||
class BlockedMessage:
|
||||
"""Represents a blocked BZZZ message"""
|
||||
message_id: str
|
||||
sender_agent: str
|
||||
block_reason: str
|
||||
secret_types: list
|
||||
timestamp: datetime
|
||||
quarantine_id: Optional[int] = None
|
||||
|
||||
|
||||
class BzzzInterceptor:
|
||||
"""
|
||||
Intercepts BZZZ P2P messages before transmission to prevent secret leakage.
|
||||
Integrates with the BZZZ network layer to scan messages in real-time.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
detector: SecretDetector,
|
||||
quarantine_manager: QuarantineManager,
|
||||
bzzz_config: Dict[str, Any] = None
|
||||
):
|
||||
self.detector = detector
|
||||
self.quarantine = quarantine_manager
|
||||
self.bzzz_config = bzzz_config or {}
|
||||
|
||||
# Message blocking state
|
||||
self.blocked_messages: Dict[str, BlockedMessage] = {}
|
||||
self.message_hooks: Set[Callable] = set()
|
||||
self.is_active = False
|
||||
|
||||
# Statistics
|
||||
self.stats = {
|
||||
'total_scanned': 0,
|
||||
'secrets_detected': 0,
|
||||
'messages_blocked': 0,
|
||||
'false_positives': 0,
|
||||
'last_reset': datetime.now()
|
||||
}
|
||||
|
||||
logger.info("Initialized BzzzInterceptor")
|
||||
|
||||
async def start(self):
|
||||
"""Start the BZZZ message interception service"""
|
||||
self.is_active = True
|
||||
logger.info("BZZZ Interceptor started - all outgoing messages will be scanned")
|
||||
|
||||
async def stop(self):
|
||||
"""Stop the BZZZ message interception service"""
|
||||
self.is_active = False
|
||||
logger.info("BZZZ Interceptor stopped")
|
||||
|
||||
def install_message_hook(self, hook_function: Callable):
|
||||
"""Install a message hook for BZZZ network integration"""
|
||||
self.message_hooks.add(hook_function)
|
||||
logger.info(f"Installed BZZZ message hook: {hook_function.__name__}")
|
||||
|
||||
def remove_message_hook(self, hook_function: Callable):
|
||||
"""Remove a message hook"""
|
||||
self.message_hooks.discard(hook_function)
|
||||
logger.info(f"Removed BZZZ message hook: {hook_function.__name__}")
|
||||
|
||||
async def intercept_outgoing_message(self, message: BzzzMessage) -> bool:
|
||||
"""
|
||||
Intercept and scan an outgoing BZZZ message.
|
||||
Returns True if message should be allowed, False if blocked.
|
||||
"""
|
||||
if not self.is_active:
|
||||
return True # Pass through if interceptor is inactive
|
||||
|
||||
start_time = time.time()
|
||||
self.stats['total_scanned'] += 1
|
||||
|
||||
try:
|
||||
# Scan message for secrets
|
||||
detection_result = self.detector.scan_bzzz_message(message)
|
||||
|
||||
if detection_result.has_secrets:
|
||||
await self._handle_secret_detection(message, detection_result)
|
||||
return False # Block message
|
||||
|
||||
# Message is clean, allow transmission
|
||||
processing_time = (time.time() - start_time) * 1000
|
||||
logger.debug(
|
||||
f"BZZZ message scanned clean",
|
||||
message_id=message.message_id,
|
||||
sender=message.sender_agent,
|
||||
processing_time_ms=processing_time
|
||||
)
|
||||
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error intercepting BZZZ message: {e}")
|
||||
# On error, default to blocking for security
|
||||
await self._block_message_on_error(message, str(e))
|
||||
return False
|
||||
|
||||
async def _handle_secret_detection(self, message: BzzzMessage, detection_result: DetectionResult):
|
||||
"""Handle detection of secrets in a BZZZ message"""
|
||||
self.stats['secrets_detected'] += 1
|
||||
self.stats['messages_blocked'] += 1
|
||||
|
||||
# Extract secret types for blocking record
|
||||
secret_types = [match.secret_type for match in detection_result.matches]
|
||||
|
||||
# Quarantine the detection result
|
||||
quarantine_entry = await self.quarantine.quarantine_detection(detection_result)
|
||||
|
||||
# Create blocked message record
|
||||
blocked_msg = BlockedMessage(
|
||||
message_id=message.message_id,
|
||||
sender_agent=message.sender_agent,
|
||||
block_reason=f"Secrets detected: {', '.join(secret_types)}",
|
||||
secret_types=secret_types,
|
||||
timestamp=datetime.now(),
|
||||
quarantine_id=quarantine_entry.id
|
||||
)
|
||||
|
||||
self.blocked_messages[message.message_id] = blocked_msg
|
||||
|
||||
# Notify BZZZ network layer
|
||||
await self._notify_message_blocked(message, blocked_msg)
|
||||
|
||||
logger.critical(
|
||||
f"BLOCKED BZZZ message containing secrets",
|
||||
message_id=message.message_id,
|
||||
sender=message.sender_agent,
|
||||
recipient=message.recipient_agent,
|
||||
secret_types=secret_types,
|
||||
severity=detection_result.max_severity,
|
||||
quarantine_id=quarantine_entry.id
|
||||
)
|
||||
|
||||
async def _block_message_on_error(self, message: BzzzMessage, error_msg: str):
|
||||
"""Block a message due to processing error"""
|
||||
self.stats['messages_blocked'] += 1
|
||||
|
||||
blocked_msg = BlockedMessage(
|
||||
message_id=message.message_id,
|
||||
sender_agent=message.sender_agent,
|
||||
block_reason=f"Processing error: {error_msg}",
|
||||
secret_types=[],
|
||||
timestamp=datetime.now()
|
||||
)
|
||||
|
||||
self.blocked_messages[message.message_id] = blocked_msg
|
||||
await self._notify_message_blocked(message, blocked_msg)
|
||||
|
||||
logger.error(
|
||||
f"BLOCKED BZZZ message due to error",
|
||||
message_id=message.message_id,
|
||||
sender=message.sender_agent,
|
||||
error=error_msg
|
||||
)
|
||||
|
||||
async def _notify_message_blocked(self, message: BzzzMessage, blocked_msg: BlockedMessage):
|
||||
"""Notify BZZZ network and sender about blocked message"""
|
||||
notification = {
|
||||
'event': 'message_blocked',
|
||||
'message_id': message.message_id,
|
||||
'sender_agent': message.sender_agent,
|
||||
'recipient_agent': message.recipient_agent,
|
||||
'block_reason': blocked_msg.block_reason,
|
||||
'secret_types': blocked_msg.secret_types,
|
||||
'timestamp': blocked_msg.timestamp.isoformat(),
|
||||
'quarantine_id': blocked_msg.quarantine_id
|
||||
}
|
||||
|
||||
# Notify all registered hooks
|
||||
for hook in self.message_hooks:
|
||||
try:
|
||||
await self._call_hook_safely(hook, 'message_blocked', notification)
|
||||
except Exception as e:
|
||||
logger.warning(f"Hook {hook.__name__} failed: {e}")
|
||||
|
||||
# Send notification back to sender agent
|
||||
await self._notify_sender_agent(message.sender_agent, notification)
|
||||
|
||||
async def _call_hook_safely(self, hook: Callable, event_type: str, data: Dict[str, Any]):
|
||||
"""Safely call a hook function with error handling"""
|
||||
try:
|
||||
if asyncio.iscoroutinefunction(hook):
|
||||
await hook(event_type, data)
|
||||
else:
|
||||
hook(event_type, data)
|
||||
except Exception as e:
|
||||
logger.warning(f"Hook {hook.__name__} failed: {e}")
|
||||
|
||||
async def _notify_sender_agent(self, sender_agent: str, notification: Dict[str, Any]):
|
||||
"""Send notification to the sender agent about blocked message"""
|
||||
try:
|
||||
# This would integrate with the BZZZ network's agent communication system
|
||||
# For now, we'll log the notification
|
||||
logger.info(
|
||||
f"Notifying agent about blocked message",
|
||||
agent=sender_agent,
|
||||
message_id=notification['message_id'],
|
||||
reason=notification['block_reason']
|
||||
)
|
||||
|
||||
# TODO: Implement actual agent notification via BZZZ network
|
||||
# This might involve:
|
||||
# - Sending a system message back to the agent
|
||||
# - Updating agent's message status
|
||||
# - Triggering agent's error handling workflow
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to notify sender agent {sender_agent}: {e}")
|
||||
|
||||
def is_message_blocked(self, message_id: str) -> Optional[BlockedMessage]:
|
||||
"""Check if a message is blocked"""
|
||||
return self.blocked_messages.get(message_id)
|
||||
|
||||
def unblock_message(self, message_id: str, reviewer: str, reason: str) -> bool:
|
||||
"""Unblock a previously blocked message (for false positives)"""
|
||||
if message_id not in self.blocked_messages:
|
||||
return False
|
||||
|
||||
blocked_msg = self.blocked_messages[message_id]
|
||||
|
||||
# Mark as false positive in stats
|
||||
self.stats['false_positives'] += 1
|
||||
|
||||
# Remove from blocked messages
|
||||
del self.blocked_messages[message_id]
|
||||
|
||||
logger.info(
|
||||
f"Unblocked BZZZ message",
|
||||
message_id=message_id,
|
||||
reviewer=reviewer,
|
||||
reason=reason,
|
||||
original_block_reason=blocked_msg.block_reason
|
||||
)
|
||||
|
||||
return True
|
||||
|
||||
def get_blocked_messages(self, limit: int = 100) -> list[BlockedMessage]:
|
||||
"""Get list of recently blocked messages"""
|
||||
blocked_list = list(self.blocked_messages.values())
|
||||
blocked_list.sort(key=lambda x: x.timestamp, reverse=True)
|
||||
return blocked_list[:limit]
|
||||
|
||||
def get_stats(self) -> Dict[str, Any]:
|
||||
"""Get interceptor statistics"""
|
||||
current_time = datetime.now()
|
||||
uptime_hours = (current_time - self.stats['last_reset']).total_seconds() / 3600
|
||||
|
||||
stats = self.stats.copy()
|
||||
stats.update({
|
||||
'uptime_hours': round(uptime_hours, 2),
|
||||
'is_active': self.is_active,
|
||||
'blocked_messages_count': len(self.blocked_messages),
|
||||
'detection_rate': (
|
||||
self.stats['secrets_detected'] / max(1, self.stats['total_scanned'])
|
||||
) * 100,
|
||||
'false_positive_rate': (
|
||||
self.stats['false_positives'] / max(1, self.stats['secrets_detected'])
|
||||
) * 100 if self.stats['secrets_detected'] > 0 else 0
|
||||
})
|
||||
|
||||
return stats
|
||||
|
||||
def reset_stats(self):
|
||||
"""Reset statistics counters"""
|
||||
self.stats = {
|
||||
'total_scanned': 0,
|
||||
'secrets_detected': 0,
|
||||
'messages_blocked': 0,
|
||||
'false_positives': 0,
|
||||
'last_reset': datetime.now()
|
||||
}
|
||||
|
||||
logger.info("BZZZ Interceptor statistics reset")
|
||||
|
||||
async def cleanup_old_blocked_messages(self, hours: int = 24):
|
||||
"""Clean up old blocked message records"""
|
||||
cutoff_time = datetime.now() - timedelta(hours=hours)
|
||||
|
||||
old_messages = [
|
||||
msg_id for msg_id, blocked_msg in self.blocked_messages.items()
|
||||
if blocked_msg.timestamp < cutoff_time
|
||||
]
|
||||
|
||||
for msg_id in old_messages:
|
||||
del self.blocked_messages[msg_id]
|
||||
|
||||
if old_messages:
|
||||
logger.info(f"Cleaned up {len(old_messages)} old blocked message records")
|
||||
|
||||
return len(old_messages)
|
||||
|
||||
|
||||
class BzzzNetworkAdapter:
|
||||
"""
|
||||
Adapter to integrate BzzzInterceptor with the actual BZZZ network layer.
|
||||
This would be customized based on the BZZZ implementation details.
|
||||
"""
|
||||
|
||||
def __init__(self, interceptor: BzzzInterceptor):
|
||||
self.interceptor = interceptor
|
||||
self.original_send_function = None
|
||||
|
||||
def install_interceptor(self, bzzz_network_instance):
|
||||
"""Install interceptor into BZZZ network layer"""
|
||||
# This would need to be customized based on actual BZZZ implementation
|
||||
# Example pattern:
|
||||
|
||||
# Store original send function
|
||||
self.original_send_function = bzzz_network_instance.send_message
|
||||
|
||||
# Replace with intercepting version
|
||||
bzzz_network_instance.send_message = self._intercepting_send_message
|
||||
|
||||
logger.info("BzzzInterceptor installed into BZZZ network layer")
|
||||
|
||||
async def _intercepting_send_message(self, message_data: Dict[str, Any]):
|
||||
"""Intercepting version of BZZZ send_message function"""
|
||||
try:
|
||||
# Convert to BzzzMessage format
|
||||
bzzz_message = self._convert_to_bzzz_message(message_data)
|
||||
|
||||
# Check with interceptor
|
||||
should_allow = await self.interceptor.intercept_outgoing_message(bzzz_message)
|
||||
|
||||
if should_allow:
|
||||
# Call original send function
|
||||
return await self.original_send_function(message_data)
|
||||
else:
|
||||
# Message was blocked
|
||||
raise Exception(f"Message blocked by security interceptor: {bzzz_message.message_id}")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in intercepting send: {e}")
|
||||
raise
|
||||
|
||||
def _convert_to_bzzz_message(self, message_data: Dict[str, Any]) -> BzzzMessage:
|
||||
"""Convert BZZZ network message format to BzzzMessage"""
|
||||
# This would need to be customized based on actual BZZZ message format
|
||||
return BzzzMessage(
|
||||
message_id=message_data.get('id', f"auto_{int(time.time())}"),
|
||||
sender_agent=message_data.get('sender', 'unknown'),
|
||||
recipient_agent=message_data.get('recipient'),
|
||||
message_type=message_data.get('type', 'unknown'),
|
||||
payload=json.dumps(message_data.get('payload', message_data)),
|
||||
timestamp=datetime.now(),
|
||||
network_metadata=message_data
|
||||
)
|
||||
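For context, wiring the interceptor into a running node might look like the sketch below; `detector`, `quarantine_manager`, and `bzzz_node` are assumed to already exist, and the hook body is purely illustrative:

```python
# Sketch: register a notification hook, then patch the node's send path.
async def audit_hook(event_type: str, data: dict):
    # Receives 'message_blocked' notifications; forward to SIEM/alerting as needed.
    print(event_type, data["message_id"], data["block_reason"])

interceptor = BzzzInterceptor(detector, quarantine_manager)  # components assumed constructed
interceptor.install_message_hook(audit_hook)

adapter = BzzzNetworkAdapter(interceptor)
adapter.install_interceptor(bzzz_node)  # bzzz_node: the live BZZZ network instance (assumed)
# Remember to `await interceptor.start()` inside the service's event loop to begin scanning.
```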
@@ -1,181 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
SHHH Secrets Sentinel - Main Entry Point
|
||||
Production-ready secrets detection and monitoring system for CHORUS Services.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import argparse
|
||||
import sys
|
||||
import yaml
|
||||
from pathlib import Path
|
||||
import structlog
|
||||
from typing import Dict, Any
|
||||
|
||||
# Updated imports to bring in the new and modified components
|
||||
from pipeline.processor import MessageProcessor
|
||||
from core.hypercore_reader import HypercoreReader
|
||||
from core.detector import SecretDetector
|
||||
from core.llm_analyzer import LLMAnalyzer
|
||||
from core.quarantine import QuarantineManager
|
||||
from core.sanitized_writer import SanitizedWriter
|
||||
|
||||
|
||||
def setup_logging(log_level: str = "INFO", structured: bool = True):
|
||||
"""Configure structured logging"""
|
||||
structlog.configure(
|
||||
processors=[
|
||||
structlog.stdlib.filter_by_level,
|
||||
structlog.stdlib.add_logger_name,
|
||||
structlog.stdlib.add_log_level,
|
||||
structlog.stdlib.PositionalArgumentsFormatter(),
|
||||
structlog.processors.TimeStamper(fmt="iso"),
|
||||
structlog.processors.StackInfoRenderer(),
|
||||
structlog.processors.format_exc_info,
|
||||
structlog.processors.UnicodeDecoder(),
|
||||
structlog.processors.JSONRenderer() if structured else structlog.dev.ConsoleRenderer(),
|
||||
],
|
||||
context_class=dict,
|
||||
logger_factory=structlog.stdlib.LoggerFactory(),
|
||||
wrapper_class=structlog.stdlib.BoundLogger,
|
||||
cache_logger_on_first_use=True,
|
||||
)
|
||||
|
||||
|
||||
def load_config(config_path: str) -> Dict[str, Any]:
|
||||
"""Load configuration from YAML file"""
|
||||
try:
|
||||
with open(config_path, 'r') as f:
|
||||
config = yaml.safe_load(f)
|
||||
return config
|
||||
except FileNotFoundError:
|
||||
print(f"Configuration file not found: {config_path}, using defaults.")
|
||||
return get_default_config()
|
||||
except yaml.YAMLError as e:
|
||||
print(f"Error parsing configuration file: {e}")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def get_default_config() -> Dict[str, Any]:
|
||||
"""Get default configuration, updated for the new architecture."""
|
||||
return {
|
||||
'primary_log_path': '/path/to/primary/hypercore.log',
|
||||
'sanitized_log_path': '/path/to/sanitized/hypercore.log',
|
||||
'database_url': 'postgresql://shhh:password@localhost:5432/shhh_sentinel',
|
||||
'patterns_file': 'patterns.yaml',
|
||||
'ollama_endpoint': 'http://localhost:11434/api/generate',
|
||||
'ollama_model': 'llama3',
|
||||
'llm_confidence_threshold': 0.90,
|
||||
'shhh_agent_prompt_file': 'SHHH_SECRETS_SENTINEL_AGENT_PROMPT.md'
|
||||
}
|
||||
|
||||
|
||||
async def run_monitor_mode(config: Dict[str, Any]):
|
||||
"""Run in monitoring mode with the new hybrid pipeline."""
|
||||
logger = structlog.get_logger()
|
||||
logger.info("Starting SHHH in monitor mode with hybrid pipeline...")
|
||||
|
||||
writer = None
|
||||
try:
|
||||
# 1. Load System Prompt for LLM
|
||||
try:
|
||||
with open(config['shhh_agent_prompt_file'], "r") as f:
|
||||
ollama_system_prompt = f.read()
|
||||
except FileNotFoundError:
|
||||
logger.error(f"LLM prompt file not found at {config['shhh_agent_prompt_file']}. Aborting.")
|
||||
return
|
||||
|
||||
# 2. Instantiation of components
|
||||
# Note: HypercoreReader and QuarantineManager might need async initialization
|
||||
# which is not shown here for simplicity, following the plan.
|
||||
reader = HypercoreReader(config['primary_log_path'])
|
||||
detector = SecretDetector(config['patterns_file'])
|
||||
llm_analyzer = LLMAnalyzer(config['ollama_endpoint'], config['ollama_model'], ollama_system_prompt)
|
||||
quarantine = QuarantineManager(config['database_url'])
|
||||
writer = SanitizedWriter(config['sanitized_log_path'])
|
||||
|
||||
processor = MessageProcessor(
|
||||
reader=reader,
|
||||
detector=detector,
|
||||
llm_analyzer=llm_analyzer,
|
||||
quarantine=quarantine,
|
||||
writer=writer,
|
||||
llm_threshold=config['llm_confidence_threshold']
|
||||
)
|
||||
|
||||
# 3. Execution
|
||||
logger.info("Starting processor stream...")
|
||||
await processor.process_stream()
|
||||
|
||||
except Exception as e:
|
||||
logger.error("An error occurred during monitor mode execution.", error=str(e))
|
||||
finally:
|
||||
if writer:
|
||||
writer.close()
|
||||
logger.info("Monitor mode shut down complete.")
|
||||
|
||||
|
||||
async def run_api_mode(config: Dict[str, Any], host: str, port: int):
|
||||
"""Run in API mode (dashboard server) - UNCHANGED"""
|
||||
import uvicorn
|
||||
from api.main import app
|
||||
app.state.config = config
|
||||
uvicorn_config = uvicorn.Config(app=app, host=host, port=port, log_level="info", access_log=True)
|
||||
server = uvicorn.Server(uvicorn_config)
|
||||
await server.serve()
|
||||
|
||||
|
||||
async def run_test_mode(config: Dict[str, Any], test_file: str):
|
||||
"""Run in test mode with sample data - UNCHANGED but may be broken."""
|
||||
logger = structlog.get_logger()
|
||||
logger.warning("Test mode may be broken due to recent refactoring.")
|
||||
# This part of the code would need to be updated to work with the new SecretDetector.
|
||||
# For now, it remains as it was.
|
||||
from core.detector import SecretDetector
|
||||
from datetime import datetime
|
||||
|
||||
detector = SecretDetector(config['patterns_file'])
|
||||
logger.info("Running SHHH in test mode")
|
||||
# ... (rest of the test mode logic is likely broken and needs updating)
|
||||
|
||||
|
||||
def main():
|
||||
"""Main entry point"""
|
||||
parser = argparse.ArgumentParser(description="SHHH Secrets Sentinel")
|
||||
|
||||
parser.add_argument('--config', '-c', default='config.yaml', help='Configuration file path')
|
||||
parser.add_argument('--mode', '-m', choices=['monitor', 'api', 'test'], default='monitor', help='Operation mode')
|
||||
parser.add_argument('--log-level', '-l', choices=['DEBUG', 'INFO', 'WARNING', 'ERROR'], default='INFO', help='Log level')
|
||||
parser.add_argument('--structured-logs', action='store_true', help='Use structured JSON logging')
|
||||
parser.add_argument('--host', default='127.0.0.1', help='API server host')
|
||||
parser.add_argument('--port', '-p', type=int, default=8000, help='API server port')
|
||||
parser.add_argument('--test-file', help='Test data file for test mode')
|
||||
parser.add_argument('--version', '-v', action='version', version='SHHH Secrets Sentinel 1.1.0 (Hybrid)')
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
setup_logging(args.log_level, args.structured_logs)
|
||||
logger = structlog.get_logger()
|
||||
|
||||
config = load_config(args.config)
|
||||
|
||||
logger.info("Starting SHHH Secrets Sentinel", mode=args.mode, config_file=args.config)
|
||||
|
||||
try:
|
||||
if args.mode == 'monitor':
|
||||
asyncio.run(run_monitor_mode(config))
|
||||
elif args.mode == 'api':
|
||||
asyncio.run(run_api_mode(config, args.host, args.port))
|
||||
elif args.mode == 'test':
|
||||
asyncio.run(run_test_mode(config, args.test_file))
|
||||
except KeyboardInterrupt:
|
||||
logger.info("Shutting down due to keyboard interrupt.")
|
||||
except Exception as e:
|
||||
logger.error("Application failed", error=str(e))
|
||||
sys.exit(1)
|
||||
|
||||
logger.info("SHHH Secrets Sentinel stopped.")
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
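Given the defaults in `get_default_config` above, a matching `config.yaml` might look like this (a sketch; the paths and credentials are the same placeholders):

```yaml
# Example config.yaml mirroring get_default_config (adjust paths and credentials).
primary_log_path: /path/to/primary/hypercore.log
sanitized_log_path: /path/to/sanitized/hypercore.log
database_url: postgresql://shhh:password@localhost:5432/shhh_sentinel
patterns_file: patterns.yaml
ollama_endpoint: http://localhost:11434/api/generate
ollama_model: llama3
llm_confidence_threshold: 0.90
shhh_agent_prompt_file: SHHH_SECRETS_SENTINEL_AGENT_PROMPT.md
```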
@@ -1,121 +0,0 @@
|
||||
# SHHH Secrets Detection Patterns
|
||||
# Configuration for the Secrets Sentinel monitoring system
|
||||
|
||||
patterns:
|
||||
AWS_ACCESS_KEY:
|
||||
regex: "AKIA[0-9A-Z]{16}"
|
||||
severity: "CRITICAL"
|
||||
confidence: 0.95
|
||||
active: true
|
||||
description: "AWS Access Key ID"
|
||||
remediation: "Revoke via AWS IAM immediately"
|
||||
|
||||
AWS_SECRET_KEY:
|
||||
regex: "[A-Za-z0-9/+=]{40}"
|
||||
severity: "CRITICAL"
|
||||
confidence: 0.85
|
||||
active: true
|
||||
description: "AWS Secret Access Key"
|
||||
remediation: "Revoke via AWS IAM immediately"
|
||||
context_required: true # Requires context analysis
|
||||
|
||||
PRIVATE_KEY:
|
||||
regex: "-----BEGIN [A-Z ]*PRIVATE KEY-----"
|
||||
severity: "CRITICAL"
|
||||
confidence: 0.98
|
||||
active: true
|
||||
description: "Private Key (RSA, SSH, etc.)"
|
||||
remediation: "Rotate key immediately"
|
||||
|
||||
GITHUB_TOKEN:
|
||||
regex: "ghp_[0-9A-Za-z]{36}"
|
||||
severity: "HIGH"
|
||||
confidence: 0.92
|
||||
active: true
|
||||
description: "GitHub Personal Access Token"
|
||||
remediation: "Revoke via GitHub settings"
|
||||
|
||||
GITHUB_OAUTH:
|
||||
regex: "gho_[0-9A-Za-z]{36}"
|
||||
severity: "HIGH"
|
||||
confidence: 0.92
|
||||
active: true
|
||||
description: "GitHub OAuth Token"
|
||||
remediation: "Revoke via GitHub app settings"
|
||||
|
||||
SLACK_TOKEN:
|
||||
regex: "xox[baprs]-[0-9A-Za-z-]{10,48}"
|
||||
severity: "HIGH"
|
||||
confidence: 0.90
|
||||
active: true
|
||||
description: "Slack Bot/User Token"
|
||||
remediation: "Revoke via Slack Admin API"
|
||||
|
||||
JWT_TOKEN:
|
||||
regex: "eyJ[A-Za-z0-9_-]+?\\.[A-Za-z0-9_-]+?\\.[A-Za-z0-9_-]+?"
|
||||
severity: "MEDIUM"
|
||||
confidence: 0.85
|
||||
active: true
|
||||
description: "JSON Web Token"
|
||||
remediation: "Invalidate token and rotate signing key"
|
||||
|
||||
GOOGLE_API_KEY:
|
||||
regex: "AIza[0-9A-Za-z\\-_]{35}"
|
||||
severity: "HIGH"
|
||||
confidence: 0.90
|
||||
active: true
|
||||
description: "Google API Key"
|
||||
remediation: "Revoke via Google Cloud Console"
|
||||
|
||||
DOCKER_TOKEN:
|
||||
regex: "dckr_pat_[a-zA-Z0-9_-]{32,}"
|
||||
severity: "MEDIUM"
|
||||
confidence: 0.88
|
||||
active: true
|
||||
description: "Docker Personal Access Token"
|
||||
remediation: "Revoke via Docker Hub settings"
|
||||
|
||||
GENERIC_API_KEY:
|
||||
regex: "[Aa][Pp][Ii]_?[Kk][Ee][Yy].*['\"][0-9a-zA-Z]{32,}['\"]"
|
||||
severity: "MEDIUM"
|
||||
confidence: 0.70
|
||||
active: true
|
||||
description: "Generic API Key Pattern"
|
||||
remediation: "Verify and revoke if legitimate"
|
||||
|
||||
# Pattern exceptions - known test/dummy values to ignore
|
||||
exceptions:
|
||||
test_patterns:
|
||||
- "AKIA-TESTKEY-123"
|
||||
- "AKIAIOSFODNN7EXAMPLE"
|
||||
- "xoxb-test-token"
|
||||
- "ghp_test123456789012345678901234567890"
|
||||
- "-----BEGIN EXAMPLE PRIVATE KEY-----"
|
||||
|
||||
development_indicators:
|
||||
- "test"
|
||||
- "example"
|
||||
- "demo"
|
||||
- "mock"
|
||||
- "fake"
|
||||
- "dummy"
|
||||
|
||||
# Quarantine settings
|
||||
quarantine:
|
||||
high_severity_auto_quarantine: true
|
||||
medium_severity_review_required: true
|
||||
retention_days: 90
|
||||
max_entries: 10000
|
||||
|
||||
# Alert settings
|
||||
alerts:
|
||||
webhook_timeout_seconds: 5
|
||||
retry_attempts: 3
|
||||
retry_delay_seconds: 2
|
||||
|
||||
# Revocation hooks
|
||||
revocation_hooks:
|
||||
AWS_ACCESS_KEY: "https://security.chorus.services/hooks/aws-revoke"
|
||||
GITHUB_TOKEN: "https://security.chorus.services/hooks/github-revoke"
|
||||
SLACK_TOKEN: "https://security.chorus.services/hooks/slack-revoke"
|
||||
GOOGLE_API_KEY: "https://security.chorus.services/hooks/google-revoke"
|
||||
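The `exceptions` section above is meant to be consumed at load time. A sketch of how a loader might apply it (the function names here are illustrative, not part of the module):

```python
# Sketch: filter regex hits against the exceptions lists in patterns.yaml.
import yaml

def load_exceptions(path: str = "patterns.yaml"):
    with open(path) as f:
        cfg = yaml.safe_load(f)
    exc = cfg.get("exceptions", {})
    return set(exc.get("test_patterns", [])), [w.lower() for w in exc.get("development_indicators", [])]

def is_excepted(candidate: str, test_patterns: set, dev_words: list) -> bool:
    """True if a regex hit should be ignored as known test/dummy data."""
    if candidate in test_patterns:
        return True
    return any(w in candidate.lower() for w in dev_words)
```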
@@ -1,4 +0,0 @@
|
||||
# SHHH Pipeline Module
|
||||
"""
|
||||
Main processing pipeline for the SHHH Secrets Sentinel system.
|
||||
"""
|
||||
@@ -1,66 +0,0 @@
|
||||
import asyncio
|
||||
from core.hypercore_reader import HypercoreReader
|
||||
from core.detector import SecretDetector
|
||||
from core.llm_analyzer import LLMAnalyzer
|
||||
from core.quarantine import QuarantineManager
|
||||
from core.sanitized_writer import SanitizedWriter
|
||||
|
||||
class MessageProcessor:
|
||||
def __init__(self, reader: HypercoreReader, detector: SecretDetector, llm_analyzer: LLMAnalyzer, quarantine: QuarantineManager, writer: SanitizedWriter, llm_threshold: float):
|
||||
self.reader = reader
|
||||
self.detector = detector
|
||||
self.llm_analyzer = llm_analyzer
|
||||
self.quarantine = quarantine
|
||||
self.writer = writer
|
||||
self.llm_threshold = llm_threshold # e.g., 0.90
|
||||
|
||||
async def process_stream(self):
|
||||
"""Main processing loop for the hybrid detection model."""
|
||||
async for entry in self.reader.stream_entries():
|
||||
# Stage 1: Fast Regex Scan
|
||||
regex_matches = self.detector.scan(entry.content)
|
||||
|
||||
if not regex_matches:
|
||||
# No secrets found, write original entry to sanitized log
|
||||
self.writer.write(entry.content)
|
||||
continue
|
||||
|
||||
# A potential secret was found. Default to sanitized, but may be quarantined.
|
||||
sanitized_content = entry.content
|
||||
should_quarantine = False
|
||||
confirmed_secret = None
|
||||
|
||||
for match in regex_matches:
|
||||
# High-confidence regex matches trigger immediate quarantine, skipping LLM.
|
||||
if match['confidence'] >= self.llm_threshold:
|
||||
should_quarantine = True
|
||||
confirmed_secret = match
|
||||
break # One high-confidence match is enough
|
||||
|
||||
# Stage 2: Low-confidence matches go to LLM for verification.
|
||||
llm_result = self.llm_analyzer.analyze(entry.content)
|
||||
if llm_result.get("secret_found"):
|
||||
should_quarantine = True
|
||||
# Prefer LLM's classification but use regex value for redaction
|
||||
confirmed_secret = {
|
||||
"secret_type": llm_result.get("secret_type", match['secret_type']),
|
||||
"value": match['value'],
|
||||
"severity": llm_result.get("severity", match['severity'])
|
||||
}
|
||||
break
|
||||
|
||||
if should_quarantine and confirmed_secret:
|
||||
# A secret is confirmed. Redact, quarantine, and alert.
|
||||
sanitized_content = self.detector.redact(entry.content, confirmed_secret['value'])
|
||||
|
||||
self.quarantine.quarantine_message(
|
||||
message=entry,
|
||||
secret_type=confirmed_secret['secret_type'],
|
||||
severity=confirmed_secret['severity'],
|
||||
redacted_content=sanitized_content
|
||||
)
|
||||
# Potentially trigger alerts here as well
|
||||
print(f"[ALERT] Confirmed secret {confirmed_secret['secret_type']} found and quarantined.")
|
||||
|
||||
# Write the (potentially redacted) content to the sanitized log
|
||||
self.writer.write(sanitized_content)
|
||||
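Putting the pieces together, a minimal end-to-end run with the mock components might look like this (a sketch; the file paths, connection string, and inline prompt are assumptions):

```python
# Sketch: drive the hybrid pipeline with the mock reader/quarantine classes above.
import asyncio

async def demo():
    processor = MessageProcessor(
        reader=HypercoreReader("primary.log"),
        detector=SecretDetector("patterns.yaml"),
        llm_analyzer=LLMAnalyzer("http://localhost:11434/api/generate", "llama3", "Respond with JSON only."),
        quarantine=QuarantineManager("postgresql://shhh:password@localhost:5432/shhh_sentinel"),
        writer=SanitizedWriter("sanitized.log"),
        llm_threshold=0.90,
    )
    await processor.process_stream()

asyncio.run(demo())
```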
@@ -1,15 +0,0 @@
|
||||
# SHHH Secrets Sentinel Dependencies
|
||||
fastapi==0.104.1
|
||||
uvicorn[standard]==0.24.0
|
||||
psycopg2-binary==2.9.9
|
||||
pydantic==2.5.0
|
||||
requests==2.31.0
|
||||
pyyaml==6.0.1
|
||||
redis==5.0.1
|
||||
asyncio-mqtt==0.15.1
|
||||
watchdog==3.0.0
|
||||
prometheus-client==0.19.0
|
||||
python-multipart==0.0.6
|
||||
aiofiles==23.2.1
|
||||
hypercorn==0.15.0
|
||||
structlog==23.2.0
|
||||
@@ -1,995 +0,0 @@
|
||||
|
||||
Here’s a **clean, production-ready system prompt** for the Secrets Sentinel agent:
|
||||
|
||||
---
|
||||
|
||||
**🛡️ System Prompt – “Secrets Sentinel” Agent**
|
||||
|
||||
> **Role & Mission**:
|
||||
> You are the **Secrets Sentinel**, an autonomous security agent tasked with **monitoring all incoming log entries** for any potential leaks of **API keys, passwords, tokens, or other sensitive credentials**. Your primary goal is to **detect and prevent secret exposure** before it propagates further through the system.
|
||||
>
|
||||
> **Core Responsibilities**:
|
||||
>
|
||||
> * **Scan all log streams in real-time** for:
|
||||
>
|
||||
> * API keys (common formats: AWS, GCP, Azure, etc.)
|
||||
> * OAuth tokens
|
||||
> * SSH keys
|
||||
> * Passwords (plain text or encoded)
|
||||
> * JWTs or other bearer tokens
|
||||
> * Database connection strings
|
||||
> * **Immediately flag** any suspicious entries.
|
||||
> * **Classify severity** (e.g., HIGH – AWS root key; MEDIUM – temporary token).
|
||||
> * **Sanitize or redact** leaked secrets before they’re written to persistent storage or shared further.
|
||||
> * **Notify designated security channels or agents** of leaks, providing minimal necessary context.
|
||||
>
|
||||
> **Guidelines**:
|
||||
>
|
||||
> * Never expose the full secret in your alerts — redact most of it (e.g., `AKIA************XYZ`).
|
||||
> * Be cautious of **false positives** (e.g., test data, dummy keys); err on the side of safety but include a “confidence score.”
|
||||
> * Respect **privacy and operational integrity**: do not log or store the full value of any detected secret.
|
||||
> * Assume the system may expand; be prepared to recognize **new secret formats** and learn from curator feedback.
|
||||
>
|
||||
> **Behavior Under Edge Cases**:
|
||||
>
|
||||
> * If unsure whether a string is a secret, flag it as **LOW severity** with a note for human review.
|
||||
> * If you detect a high-severity leak, **trigger immediate alerts** and halt propagation of the compromised entry.
|
||||
>
|
||||
> **Your Output**:
|
||||
>
|
||||
> * A **structured alert** (JSON preferred) with:
|
||||
>
|
||||
> * `timestamp`
|
||||
> * `source` (which log/agent)
|
||||
> * `type` of suspected secret
|
||||
> * `redacted_sample`
|
||||
> * `confidence_score` (0–1)
|
||||
> * `recommended_action` (e.g., “revoke key,” “rotate password,” “ignore dummy”)
|
||||
>
|
||||
> **Tone & Style**:
|
||||
>
|
||||
> * Precise, neutral, security-minded.
|
||||
> * Avoid speculation beyond what you can confidently identify.
|
||||
|
||||
---
|
||||
## 📂 **Version-Controlled `patterns.yaml` Format**
|
||||
|
||||
This lets you add/update/remove detection patterns **without touching code**.
|
||||
|
||||
```yaml
|
||||
version: 1.2
|
||||
last_updated: 2025-08-02
|
||||
|
||||
patterns:
|
||||
AWS_ACCESS_KEY:
|
||||
regex: "AKIA[0-9A-Z]{16}"
|
||||
description: "AWS Access Key ID"
|
||||
severity: HIGH
|
||||
confidence: 0.99
|
||||
active: true
|
||||
|
||||
AWS_SECRET_KEY:
|
||||
regex: "(?i)aws(.{0,20})?(?-i)['\"][0-9a-zA-Z\/+]{40}['\"]"
|
||||
description: "AWS Secret Key"
|
||||
severity: HIGH
|
||||
confidence: 0.99
|
||||
active: true
|
||||
|
||||
GITHUB_TOKEN:
|
||||
regex: "gh[pousr]_[0-9A-Za-z]{36}"
|
||||
description: "GitHub Personal Access Token"
|
||||
severity: HIGH
|
||||
confidence: 0.97
|
||||
active: true
|
||||
|
||||
JWT:
|
||||
regex: "eyJ[A-Za-z0-9_-]+?\\.[A-Za-z0-9._-]+?\\.[A-Za-z0-9._-]+"
|
||||
description: "JSON Web Token"
|
||||
severity: MEDIUM
|
||||
confidence: 0.95
|
||||
active: true
|
||||
|
||||
meta:
|
||||
allow_feedback_learning: true
|
||||
require_human_review_above_confidence: 0.8
|
||||
```
|
||||
|
||||
✅ **Advantages:**
|
||||
|
||||
- Regexes are editable without code changes.
|
||||
|
||||
- Can be versioned in Git for full audit trail.
|
||||
|
||||
- Can toggle `active: false` for deprecating broken rules.
|
||||
|
||||
|
||||
---
|
||||
|
||||
## 🖼 **Flow Diagram (Secrets Sentinel)**
|
||||
|
||||
**Secrets Flow**
|
||||
|
||||
```
|
||||
┌───────────────┐
|
||||
Logs Stream →│ Secrets │
|
||||
│ Sentinel │
|
||||
└──────┬────────┘
|
||||
│
|
||||
┌─────────┼─────────┐
|
||||
│ │
|
||||
[Quarantine] [Sanitized Logs]
|
||||
│ │
|
||||
┌──────┴──────┐ ┌────┴─────┐
|
||||
│High Severity│ │ Safe Data│
|
||||
│Secrets Only │ │ Storage │
|
||||
└──────┬──────┘ └────┬─────┘
|
||||
│ │
|
||||
┌────────┼─────────┐ │
|
||||
│ Revocation Hooks │ │
|
||||
│ (AWS, GitHub, │ │
|
||||
│ Slack, etc.) │ │
|
||||
└────────┬─────────┘ │
|
||||
│ │
|
||||
┌────┴─────┐ │
|
||||
│ Webhooks │ │
|
||||
│ Key Kill │ │
|
||||
└────┬─────┘ │
|
||||
│
|
||||
┌─────────┼─────────┐
|
||||
│ Feedback Loop │
|
||||
│ (Curator/Human) │
|
||||
└─────────┬─────────┘
|
||||
│
|
||||
┌──────┴──────┐
|
||||
│ Meta-Learner│
|
||||
│ (new regex) │
|
||||
└──────┬──────┘
|
||||
│
|
||||
┌──────┴───────┐
|
||||
│ patterns.yaml│
|
||||
└──────────────┘
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🧪 **Test Harness Script**
|
||||
|
||||
This script simulates log scanning, quarantining, and revocation.
|
||||
|
||||
```python
|
||||
import yaml, json, re
|
||||
from datetime import datetime
|
||||
|
||||
# --- Load patterns.yaml ---
|
||||
with open("patterns.yaml", "r") as f:
|
||||
patterns_config = yaml.safe_load(f)
|
||||
|
||||
PATTERNS = patterns_config["patterns"]
|
||||
|
||||
QUARANTINE = []
|
||||
SANITIZED_LOGS = []
|
||||
|
||||
def redact(secret):
|
||||
return secret[:4] + "*" * (len(secret) - 7) + secret[-3:]
|
||||
|
||||
def scan_log(log_line, log_id, source_agent):
|
||||
alerts = []
|
||||
for secret_type, props in PATTERNS.items():
|
||||
if not props.get("active", True):
|
||||
continue
|
||||
match = re.search(props["regex"], log_line)
|
||||
if match:
|
||||
secret = match.group(0)
|
||||
severity = props["severity"]
|
||||
alert = {
|
||||
"timestamp": datetime.utcnow().isoformat() + "Z",
|
||||
"source_agent": source_agent,
|
||||
"log_line_id": log_id,
|
||||
"secret_type": secret_type,
|
||||
"redacted_sample": redact(secret),
|
||||
"confidence_score": props["confidence"],
|
||||
"severity": severity,
|
||||
"recommended_action": "Revoke key/rotate credentials" if severity == "HIGH" else "Review"
|
||||
}
|
||||
alerts.append(alert)
|
||||
|
||||
# Quarantine if severity is HIGH
|
||||
if severity == "HIGH":
|
||||
quarantine_log(log_line, f"High severity secret detected: {secret_type}")
|
||||
trigger_revocation(secret_type, redact(secret))
|
||||
return alerts
|
||||
|
||||
def quarantine_log(log_line, reason):
|
||||
entry = {"timestamp": datetime.utcnow().isoformat() + "Z", "reason": reason, "log_line": log_line}
|
||||
QUARANTINE.append(entry)
|
||||
print(f"[QUARANTINE] {reason}")
|
||||
|
||||
def trigger_revocation(secret_type, redacted_sample):
|
||||
# Simulated webhook call
|
||||
print(f"[REVOCATION] Simulated revocation triggered for {secret_type} ({redacted_sample})")
|
||||
|
||||
def process_logs(logs):
|
||||
for i, log_line in enumerate(logs):
|
||||
alerts = scan_log(log_line, f"log_{i}", "agent_demo")
|
||||
if alerts:
|
||||
print(json.dumps(alerts, indent=2))
|
||||
else:
|
||||
SANITIZED_LOGS.append(log_line)
|
||||
|
||||
# --- Test Run ---
|
||||
sample_logs = [
|
||||
"INFO User logged in successfully",
|
||||
"WARNING Found AWS key AKIA1234567890ABCD in commit",
|
||||
"DEBUG JWT detected eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.fake.fake"
|
||||
]
|
||||
|
||||
process_logs(sample_logs)
|
||||
|
||||
print("\n--- Quarantine Store ---")
|
||||
print(json.dumps(QUARANTINE, indent=2))
|
||||
```
|
||||
|
||||
✅ **What this does:**
|
||||
|
||||
- Reads `patterns.yaml`
|
||||
|
||||
- Scans logs, prints alerts, quarantines high-severity entries
|
||||
|
||||
- Simulates revocation calls for AWS/GitHub/Slack
|
||||
|
||||
- Keeps sanitized logs separate from quarantined logs
|
||||
|
||||
|
||||
---
|
||||
|
||||
## ✅ Next Expansions (Optional)
|
||||
|
||||
- 📦 **Redis/DB backend** for QUARANTINE instead of memory.
|
||||
|
||||
- 📡 **Real webhook integrations** (AWS STS, GitHub API, Slack API).
|
||||
|
||||
- 🧠 **Feedback ingestion module** (e.g., curator submits: `"false_positive": "AWS_ACCESS_KEY"` → adjusts regex in `patterns.yaml`).
|
||||
|
||||
- 🔄 **Auto-replay from Hyperlog** so Sentinel can retroactively scan old logs with new regex rules.
|
||||
|
||||
|
||||
---
|
||||
🔥 Now let’s build this out into a **production-grade spec**.
|
||||
|
||||
---
|
||||
|
||||
## 📂 **1️⃣ Feedback Ingestion Spec**
|
||||
|
||||
This defines how curators/humans give feedback to the Sentinel so it can **update its detection rules (patterns.yaml)** safely.
|
||||
|
||||
---
|
||||
|
||||
### 🔄 **Feedback Flow**
|
||||
|
||||
1. **Curator/Reviewer sees alert** → marks it as:
|
||||
|
||||
- `false_positive` (regex over-triggered)
|
||||
|
||||
- `missed_secret` (regex failed to detect)
|
||||
|
||||
- `uncertain` (needs better regex refinement)
|
||||
|
||||
2. **Feedback API** ingests the report:
|
||||
|
||||
|
||||
```json
|
||||
{
|
||||
"alert_id": "log_345",
|
||||
"secret_type": "AWS_ACCESS_KEY",
|
||||
"feedback_type": "false_positive",
|
||||
"evidence": "Key was dummy data: TESTKEY123",
|
||||
"suggested_regex_fix": null
|
||||
}
|
||||
```
|
||||
|
||||
3. **Meta-Learner** updates rules:
|
||||
|
||||
|
||||
- `false_positive` → adds **exceptions** (e.g., allowlist prefixes like `TESTKEY`).
|
||||
|
||||
- `missed_secret` → drafts **new regex** from evidence (using regex generator or LLM).
|
||||
|
||||
- Writes changes to **patterns.yaml** under `pending_review`.
|
||||
|
||||
|
||||
4. **Security admin approves** before the new regex is marked `active: true`.
|
||||
|
||||
|
||||
---
|
||||
|
||||
### 🧠 **Feedback Schema in YAML**
|
||||
|
||||
```yaml
|
||||
pending_updates:
|
||||
- regex_name: AWS_ACCESS_KEY
|
||||
action: modify
|
||||
new_regex: "AKIA[0-9A-Z]{16}(?!TESTKEY)"
|
||||
confidence: 0.82
|
||||
status: "pending human review"
|
||||
submitted_by: curator_2
|
||||
timestamp: 2025-08-02T12:40:00Z
|
||||
```
|
||||
|
||||
✅ This keeps **audit trails** & allows **safe hot updates**.
|
||||
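A sketch of what the Feedback API endpoint could look like on the FastAPI stack suggested later; the route, model, and file-rewrite approach are illustrative assumptions, not a fixed part of this spec:

```python
# Sketch: stage curator feedback as a pending_updates entry in patterns.yaml.
from datetime import datetime, timezone
from typing import Optional
from fastapi import FastAPI
from pydantic import BaseModel
import yaml

app = FastAPI()

class FeedbackReport(BaseModel):
    alert_id: str
    secret_type: str
    feedback_type: str  # false_positive | missed_secret | uncertain
    evidence: Optional[str] = None
    suggested_regex_fix: Optional[str] = None

@app.post("/api/feedback")
def ingest_feedback(report: FeedbackReport):
    with open("patterns.yaml") as f:
        cfg = yaml.safe_load(f)
    cfg.setdefault("pending_updates", []).append({
        "regex_name": report.secret_type,
        "action": "modify",
        "new_regex": report.suggested_regex_fix,
        "status": "pending human review",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    with open("patterns.yaml", "w") as f:
        yaml.safe_dump(cfg, f)
    return {"status": "staged for review"}
```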
|
||||
---
|
||||
|
||||
## ⚙️ **2️⃣ Real AWS/GitHub Webhook Payload Templates**
|
||||
|
||||
These are **example POST payloads** your Sentinel would send when it detects a leaked secret.
|
||||
|
||||
---
|
||||
|
||||
### 🔐 **AWS Access Key Revocation**
|
||||
|
||||
**Endpoint:**
|
||||
`POST https://security.example.com/hooks/aws-revoke`
|
||||
|
||||
**Payload:**
|
||||
|
||||
```json
|
||||
{
|
||||
"event": "secret_leak_detected",
|
||||
"secret_type": "AWS_ACCESS_KEY",
|
||||
"redacted_key": "AKIA****XYZ",
|
||||
"log_reference": "hyperlog:58321",
|
||||
"recommended_action": "Revoke IAM access key immediately",
|
||||
"severity": "HIGH",
|
||||
"timestamp": "2025-08-02T12:45:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
➡ Your security automation would call AWS CLI or IAM API:
|
||||
|
||||
```bash
|
||||
aws iam update-access-key --access-key-id <redacted> --status Inactive
|
||||
aws iam delete-access-key --access-key-id <redacted>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 🐙 **GitHub Token Revocation**
|
||||
|
||||
**Endpoint:**
|
||||
`POST https://security.example.com/hooks/github-revoke`
|
||||
|
||||
**Payload:**
|
||||
|
||||
```json
|
||||
{
|
||||
"event": "secret_leak_detected",
|
||||
"secret_type": "GITHUB_TOKEN",
|
||||
"redacted_key": "ghp_****abcd",
|
||||
"repository": "repo-name",
|
||||
"log_reference": "hyperlog:58322",
|
||||
"severity": "HIGH",
|
||||
"recommended_action": "Invalidate GitHub token via API",
|
||||
"timestamp": "2025-08-02T12:46:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
➡ This would tie into GitHub’s [token-scanning API](https://docs.github.com/en/developers/overview/secret-scanning) or use PAT revocation.
|
||||
|
||||
---
|
||||
|
||||
### 💬 **Slack Token Revocation**
|
||||
|
||||
**Endpoint:**
|
||||
`POST https://security.example.com/hooks/slack-revoke`
|
||||
|
||||
**Payload:**
|
||||
|
||||
```json
|
||||
{
|
||||
"event": "secret_leak_detected",
|
||||
"secret_type": "SLACK_TOKEN",
|
||||
"redacted_key": "xoxb****hjk",
|
||||
"workspace": "company-slack",
|
||||
"log_reference": "hyperlog:58323",
|
||||
"severity": "HIGH",
|
||||
"recommended_action": "Revoke Slack bot/user token",
|
||||
"timestamp": "2025-08-02T12:47:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
➡ Slack Admin API can be used to **revoke** or **rotate**.
|
||||
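For example, a sketch using Slack’s `auth.revoke` Web API method (this assumes the security automation, not the Sentinel, holds the full token; the Sentinel only ever handles the redacted form):

```python
# Sketch: revoke a leaked Slack token via the auth.revoke Web API method.
import requests

def revoke_slack_token(full_token: str) -> bool:
    resp = requests.post(
        "https://slack.com/api/auth.revoke",
        headers={"Authorization": f"Bearer {full_token}"},
        timeout=5,
    )
    data = resp.json()
    return bool(data.get("ok")) and bool(data.get("revoked"))
```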
|
||||
---
|
||||
|
||||
## 📡 **3️⃣ Redis or PostgreSQL Quarantine Store**
|
||||
|
||||
Switching from memory to **persistent storage** means quarantined logs survive restarts.
|
||||
|
||||
---
|
||||
|
||||
### ✅ **Redis Option (Fast, Volatile)**
|
||||
|
||||
```python
|
||||
import redis, json
|
||||
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
|
||||
|
||||
def quarantine_log(log_line, reason):
|
||||
entry = {"timestamp": datetime.utcnow().isoformat() + "Z", "reason": reason, "log_line": log_line}
|
||||
r.lpush("quarantine", json.dumps(entry))
|
||||
print(f"[QUARANTINE] Stored in Redis: {reason}")
|
||||
```
|
||||
|
||||
- 🏎 **Pros:** Fast, easy to scale.
|
||||
|
||||
- ⚠️ **Cons:** Volatile unless persisted (RDB/AOF).
|
||||
|
||||
|
||||
---
|
||||
|
||||
### ✅ **PostgreSQL Option (Auditable, Durable)**
|
||||
|
||||
**Schema:**
|
||||
|
||||
```sql
|
||||
CREATE TABLE quarantine (
|
||||
id SERIAL PRIMARY KEY,
|
||||
timestamp TIMESTAMPTZ NOT NULL,
|
||||
reason TEXT NOT NULL,
|
||||
log_line TEXT NOT NULL,
|
||||
reviewed BOOLEAN DEFAULT FALSE
|
||||
);
|
||||
```
|
||||
|
||||
**Python Insert:**
|
||||
|
||||
```python
|
||||
import psycopg2
|
||||
|
||||
conn = psycopg2.connect("dbname=sentinel user=postgres password=secret")
|
||||
cursor = conn.cursor()
|
||||
|
||||
def quarantine_log(log_line, reason):
|
||||
entry_time = datetime.utcnow().isoformat() + "Z"
|
||||
cursor.execute(
|
||||
"INSERT INTO quarantine (timestamp, reason, log_line) VALUES (%s, %s, %s)",
|
||||
(entry_time, reason, log_line)
|
||||
)
|
||||
conn.commit()
|
||||
print(f"[QUARANTINE] Stored in PostgreSQL: {reason}")
|
||||
```
|
||||
|
||||
✅ **Postgres is better for long-term auditing**: you can run reports like the following (SQL sketches after this list):
|
||||
|
||||
- “How many AWS keys leaked this month?”
|
||||
|
||||
- “Which agents generated the most HIGH-severity quarantines?”
|
||||
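Hedged SQL sketches for those two reports; the schema above only stores `reason` and `log_line`, so secret type and agent are recovered from text here, and a production schema would likely add dedicated columns:

```sql
-- Sketch: AWS key leaks this month (matches on the quarantine reason text).
SELECT count(*)
FROM quarantine
WHERE reason LIKE '%AWS_ACCESS_KEY%'
  AND timestamp >= date_trunc('month', now());

-- Sketch: noisiest sources of HIGH-severity quarantines
-- (assumes log lines embed an agent tag like "agent_42").
SELECT substring(log_line FROM 'agent_[0-9]+') AS agent, count(*) AS hits
FROM quarantine
WHERE reason LIKE 'High severity%'
GROUP BY agent
ORDER BY hits DESC
LIMIT 10;
```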
|
||||
|
||||
---
|
||||
|
||||
## 🚀 **What’s Next?**
|
||||
|
||||
We now have:
|
||||
✅ **Detection → Redaction → Quarantine → Revocation → Feedback → Pattern Evolution**
|
||||
✅ **patterns.yaml** for versioned regex
|
||||
✅ **Webhooks** for real-time secret revocation
|
||||
✅ **Persistent quarantine store** (Redis or Postgres)
|
||||
|
||||
---
|
||||
|
||||
## 🛡️ **Expanded System Prompt for Secrets Sentinel**
|
||||
|
||||
> **Role & Mission**:
|
||||
> You are the **Secrets Sentinel**, a security-focused agent monitoring all log streams for potential leaks of sensitive information (API keys, passwords, tokens, etc.). Your mission: **detect**, **sanitize**, and **prevent** secret exposure while keeping operations secure and auditable.
|
||||
>
|
||||
> **Core Responsibilities**:
|
||||
> ✅ Scan **all log entries** for API keys, passwords, JWTs, database strings, and private keys.
|
||||
> ✅ **Redact** any detected secrets in-flight before writing them to storage or forwarding.
|
||||
> ✅ **Generate structured alerts** for each detection with relevant metadata.
|
||||
> ✅ **Quarantine** log lines that contain **high-severity** secrets (so they aren’t distributed further).
|
||||
> ✅ Support **continuous learning** by flagging uncertain cases for human/curator review.
|
||||
>
|
||||
> **Secret Detection Targets**:
|
||||
>
|
||||
> - **Cloud Keys** (AWS, GCP, Azure, etc.)
|
||||
>
|
||||
> - **OAuth Tokens** (Bearer, Slack, Discord, GitHub, etc.)
|
||||
>
|
||||
> - **JWTs** (header.payload.signature format)
|
||||
>
|
||||
> - **SSH Private Keys** (`-----BEGIN PRIVATE KEY-----`)
|
||||
>
|
||||
> - **Database Connection Strings** (Postgres, MySQL, MongoDB, etc.)
|
||||
>
|
||||
> - **Generic Passwords** (detected from common prefixes, e.g. `pwd=`, `password:`).
|
||||
>
|
||||
>
|
||||
> **Detection Rules**:
|
||||
>
|
||||
> - Use **regex patterns** for known key formats.
|
||||
>
|
||||
> - Score detections with a **confidence metric** (0–1).
|
||||
>
|
||||
> - If a string doesn’t fully match, classify as **LOW confidence** for review.
|
||||
>
|
||||
>
|
||||
> **Redaction Policy**:
|
||||
>
|
||||
> - Always redact most of the secret (`AKIA************XYZ`).
|
||||
>
|
||||
> - Never store or transmit the **full secret**.
|
||||
>
|
||||
>
|
||||
> **Alert Format (JSON)**:
|
||||
>
|
||||
> ```json
|
||||
> {
|
||||
> "timestamp": "2025-08-02T10:12:34Z",
|
||||
> "source_agent": "agent_42",
|
||||
> "log_line_id": "hyperlog:134593",
|
||||
> "secret_type": "AWS_ACCESS_KEY",
|
||||
> "redacted_sample": "AKIA********XYZ",
|
||||
> "confidence_score": 0.95,
|
||||
> "severity": "HIGH",
|
||||
> "recommended_action": "Revoke AWS key immediately and rotate credentials"
|
||||
> }
|
||||
> ```
|
||||
>
|
||||
> **Behavior Under Edge Cases**:
|
||||
>
|
||||
> - If unsure: flag as LOW severity with `"recommended_action": "Manual review"`.
|
||||
>
|
||||
> - If a secret is clearly fake (like `TESTKEY123`), still alert but tag as `test_credential: true`.
|
||||
>
|
||||
>
|
||||
> **Tone & Style**:
|
||||
>
|
||||
> - Precise, security-minded, and concise in reporting.
|
||||
>
|
||||
|
||||
---
|
||||
|
||||
## 📚 **Regex Patterns Library (Starter Set)**
|
||||
|
||||
```python
|
||||
REGEX_PATTERNS = {
|
||||
"AWS_ACCESS_KEY": r"AKIA[0-9A-Z]{16}",
|
||||
"AWS_SECRET_KEY": r"(?i)aws(.{0,20})?(?-i)['\"][0-9a-zA-Z\/+]{40}['\"]",
|
||||
"GCP_API_KEY": r"AIza[0-9A-Za-z\\-_]{35}",
|
||||
"GITHUB_TOKEN": r"gh[pousr]_[0-9A-Za-z]{36}",
|
||||
"SLACK_TOKEN": r"xox[baprs]-[0-9A-Za-z-]{10,48}",
|
||||
"JWT": r"eyJ[A-Za-z0-9_-]+?\.[A-Za-z0-9._-]+?\.[A-Za-z0-9._-]+",
|
||||
"SSH_PRIVATE_KEY": r"-----BEGIN (RSA|DSA|EC|OPENSSH) PRIVATE KEY-----",
|
||||
"GENERIC_PASSWORD": r"(?:password|pwd|pass|secret)\s*[:=]\s*['\"]?[^\s'\";]+['\"]?",
|
||||
"DB_CONN_STRING": r"(postgres|mysql|mongodb|mssql|redis):\/\/[^\s]+"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🛠 **Python Skeleton Implementation**
|
||||
|
||||
```python
|
||||
import re
|
||||
import json
|
||||
from datetime import datetime
|
||||
|
||||
REGEX_PATTERNS = {
|
||||
"AWS_ACCESS_KEY": r"AKIA[0-9A-Z]{16}",
|
||||
"AWS_SECRET_KEY": r"(?i)aws(.{0,20})?(?-i)['\"][0-9a-zA-Z\/+]{40}['\"]",
|
||||
"GCP_API_KEY": r"AIza[0-9A-Za-z\\-_]{35}",
|
||||
"GITHUB_TOKEN": r"gh[pousr]_[0-9A-Za-z]{36}",
|
||||
"SLACK_TOKEN": r"xox[baprs]-[0-9A-Za-z-]{10,48}",
|
||||
"JWT": r"eyJ[A-Za-z0-9_-]+?\.[A-Za-z0-9._-]+?\.[A-Za-z0-9._-]+",
|
||||
"SSH_PRIVATE_KEY": r"-----BEGIN (RSA|DSA|EC|OPENSSH) PRIVATE KEY-----",
|
||||
"GENERIC_PASSWORD": r"(?:password|pwd|pass|secret)\s*[:=]\s*['\"]?[^\s'\";]+['\"]?",
|
||||
"DB_CONN_STRING": r"(postgres|mysql|mongodb|mssql|redis):\/\/[^\s]+"
|
||||
}
|
||||
|
||||
def redact(secret: str) -> str:
|
||||
"""Redact a secret leaving only first and last 3 chars."""
|
||||
return secret[:4] + "*" * (len(secret) - 7) + secret[-3:]
|
||||
|
||||
def scan_log_line(log_line: str, log_id: str, source_agent: str):
|
||||
alerts = []
|
||||
for secret_type, pattern in REGEX_PATTERNS.items():
|
||||
match = re.search(pattern, log_line)
|
||||
if match:
|
||||
secret = match.group(0)
|
||||
alert = {
|
||||
"timestamp": datetime.utcnow().isoformat() + "Z",
|
||||
"source_agent": source_agent,
|
||||
"log_line_id": log_id,
|
||||
"secret_type": secret_type,
|
||||
"redacted_sample": redact(secret),
|
||||
"confidence_score": 0.95,
|
||||
"severity": "HIGH" if "PRIVATE_KEY" in secret_type or "AWS" in secret_type else "MEDIUM",
|
||||
"recommended_action": "Revoke key/rotate credentials"
|
||||
}
|
||||
alerts.append(alert)
|
||||
return alerts
|
||||
|
||||
# Example usage:
|
||||
log = "User pushed code with AWS key: AKIA1234567890ABCD"
|
||||
alerts = scan_log_line(log, "hyperlog:42", "agent_7")
|
||||
print(json.dumps(alerts, indent=2))
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🚨 **Optional Extensions**
|
||||
|
||||
- ✅ **Automatic Quarantine** → If severity=HIGH, forward to a “quarantine pipeline” (prevent further log distribution).
|
||||
|
||||
- ✅ **Secret Revocation Hooks** → Webhooks/scripts to auto-disable compromised keys (AWS IAM revoke, GitHub token reset, etc.).
|
||||
|
||||
- ✅ **Curator Feedback Loop** → If many false positives occur, agents/curators can “vote” on regex quality → improves future detection.
|
||||
|
||||
|
||||
---
|
||||
|
||||
👉 Do you want me to:
|
||||
|
||||
- **📡 Add “quarantine mode” logic** (e.g. isolate log lines and mark them unsafe)?
|
||||
|
||||
- **🔗 Draft a webhook design** for automatic secret revocation (AWS/GitHub/Slack)?
|
||||
|
||||
- **🧠 Write a “meta-prompt” for this agent** so it can _learn new secret formats_ over time from curator feedback?
|
||||
|
||||
Here’s the **full build-out** with all three requested additions:
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
## 🛡 **Secrets Sentinel: Extended Prompt**
|
||||
|
||||
> **Role & Mission**:
|
||||
> You are the **Secrets Sentinel**, a security agent tasked with scanning all logs for leaked secrets. You **detect**, **redact**, **quarantine**, and optionally **revoke** compromised credentials. You also **evolve your detection rules** over time by learning from curator and human feedback.
|
||||
>
|
||||
> **Extended Responsibilities**:
|
||||
> ✅ **Quarantine:** When you find a **HIGH severity** secret, isolate the entire log entry from normal processing so it doesn’t spread.
|
||||
> ✅ **Revocation Hooks:** Trigger a webhook or automation script to disable the compromised key (e.g., AWS IAM revoke).
|
||||
> ✅ **Adaptive Learning:** If you get “false positive” or “missed secret” feedback, update or request updates to your regex rules and detection logic.
|
||||
|
||||
---
|
||||
|
||||
### 1️⃣ **Quarantine Mode Logic**
|
||||
|
||||
Add this to the **Python skeleton**:
|
||||
|
||||
```python
|
||||
QUARANTINE_STORE = [] # This could be a Redis list, DB table, etc.
|
||||
|
||||
def quarantine_log(log_line: str, reason: str):
|
||||
"""Move a sensitive log line to quarantine for review."""
|
||||
quarantine_entry = {
|
||||
"timestamp": datetime.utcnow().isoformat() + "Z",
|
||||
"reason": reason,
|
||||
"log_line": log_line
|
||||
}
|
||||
QUARANTINE_STORE.append(quarantine_entry)
|
||||
print(f"[QUARANTINE] Log quarantined: {reason}")
|
||||
return quarantine_entry
|
||||
|
||||
def scan_log_line(log_line: str, log_id: str, source_agent: str):
|
||||
alerts = []
|
||||
for secret_type, pattern in REGEX_PATTERNS.items():
|
||||
match = re.search(pattern, log_line)
|
||||
if match:
|
||||
secret = match.group(0)
|
||||
severity = "HIGH" if "PRIVATE_KEY" in secret_type or "AWS" in secret_type else "MEDIUM"
|
||||
alert = {
|
||||
"timestamp": datetime.utcnow().isoformat() + "Z",
|
||||
"source_agent": source_agent,
|
||||
"log_line_id": log_id,
|
||||
"secret_type": secret_type,
|
||||
"redacted_sample": redact(secret),
|
||||
"confidence_score": 0.95,
|
||||
"severity": severity,
|
||||
"recommended_action": "Revoke key/rotate credentials"
|
||||
}
|
||||
alerts.append(alert)
|
||||
if severity == "HIGH":
|
||||
quarantine_log(log_line, f"High severity secret detected: {secret_type}")
|
||||
return alerts
|
||||
```
|
||||
|
||||
➡ **Now, any HIGH-severity find triggers quarantine.**
|
||||
|
||||
---
|
||||
|
||||
### 2️⃣ **Webhook Design for Secret Revocation**
|
||||
|
||||
We’ll use a **simple webhook dispatcher** that calls different revocation endpoints depending on the secret type.
|
||||
|
||||
```python
|
||||
import requests
|
||||
|
||||
REVOCATION_HOOKS = {
|
||||
"AWS_ACCESS_KEY": "https://security.example.com/hooks/aws-revoke",
|
||||
"GITHUB_TOKEN": "https://security.example.com/hooks/github-revoke",
|
||||
"SLACK_TOKEN": "https://security.example.com/hooks/slack-revoke"
|
||||
}
|
||||
|
||||
def trigger_revocation(secret_type: str, redacted_sample: str):
|
||||
"""Send webhook request to revoke compromised credentials."""
|
||||
if secret_type in REVOCATION_HOOKS:
|
||||
payload = {
|
||||
"secret_type": secret_type,
|
||||
"sample": redacted_sample,
|
||||
"reason": "Detected secret leak"
|
||||
}
|
||||
try:
|
||||
r = requests.post(REVOCATION_HOOKS[secret_type], json=payload, timeout=5)
|
||||
if r.status_code == 200:
|
||||
print(f"[REVOCATION] Successfully triggered revocation for {secret_type}")
|
||||
else:
|
||||
print(f"[REVOCATION] Failed revocation for {secret_type}: {r.status_code}")
|
||||
except Exception as e:
|
||||
print(f"[REVOCATION ERROR] {e}")
|
||||
```
|
||||
|
||||
Then, in the alert loop:
|
||||
|
||||
```python
|
||||
if severity == "HIGH":
|
||||
quarantine_log(log_line, f"High severity secret detected: {secret_type}")
|
||||
trigger_revocation(secret_type, redact(secret))
|
||||
```
|
||||
|
||||
➡ **Now, if the sentinel finds an AWS key or GitHub token, it can hit the right webhook to revoke it immediately.**
|
||||
|
||||
---
|
||||
|
||||
### 3️⃣ **Meta-Prompt for Learning New Secret Formats**
|
||||
|
||||
This allows the Sentinel to **expand its detection library** dynamically.
|
||||
|
||||
---
|
||||
|
||||
**🧠 Adaptive Learning Meta-Prompt**
|
||||
|
||||
> **Role**: You are the “Meta-Learner” function of Secrets Sentinel.
|
||||
>
|
||||
> **Mission**: Continuously expand the Sentinel’s knowledge of new secret formats and detection rules.
|
||||
>
|
||||
> **Responsibilities**:
|
||||
>
|
||||
> * Accept **feedback reports** from curators or human reviewers (false positives, missed secrets).
|
||||
> * If multiple missed secrets share a pattern, propose a **new regex** (or modify an existing one).
|
||||
> * Version-control detection rules (e.g., store regex patterns in a JSON or YAML file with version numbers).
|
||||
> * Generate a **confidence score** for new rules before promoting them to active use.
|
||||
> * Flag uncertain cases for human approval.
|
||||
>
|
||||
> **Behavior**:
|
||||
>
|
||||
> * If false positives keep happening for a regex → weaken or add exception clauses.
|
||||
> * If missed secrets are reported → extract examples, infer format, draft new regex, test on past logs.
|
||||
>
|
||||
> **Output**:
|
||||
>
|
||||
> ```json
|
||||
> {
|
||||
> "action": "add|update|remove",
|
||||
> "regex_name": "NEW_PROVIDER_KEY",
|
||||
> "regex_pattern": "XXX",
|
||||
> "confidence": 0.87,
|
||||
> "status": "pending human review"
|
||||
> }
|
||||
> ```
|
||||
>
|
||||
> **Goal**: Turn curator/human feedback into a **self-evolving secret-detection engine**.
|
||||
|
||||
---
|
||||
|
||||
### 🧩 How It All Fits
|
||||
|
||||
✅ **Secrets Sentinel core** = Detect + Redact + Quarantine
|
||||
✅ **Revocation Hooks** = Automated security response
|
||||
✅ **Meta-Learning** = Regex patterns evolve over time
|
||||
|
||||
|
||||
---
|
||||
|
||||
## 📜 **1️⃣ Migration Script: Redis → PostgreSQL**
|
||||
|
||||
This script will migrate existing quarantined log entries from **Redis** to **Postgres**.

```python
import redis, json, psycopg2

# Redis config
r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Postgres config
conn = psycopg2.connect("dbname=sentinel user=postgres password=secret")
cursor = conn.cursor()

def migrate_quarantine():
    count = 0
    while True:
        entry_json = r.rpop("quarantine")  # pop oldest entry from Redis
        if not entry_json:
            break
        entry = json.loads(entry_json)
        cursor.execute(
            "INSERT INTO quarantine (timestamp, reason, log_line) VALUES (%s, %s, %s)",
            (entry["timestamp"], entry["reason"], entry["log_line"])
        )
        count += 1
    conn.commit()  # single commit at the end keeps the migration fast
    print(f"[MIGRATION] Moved {count} quarantined entries from Redis → PostgreSQL")

if __name__ == "__main__":
    migrate_quarantine()
```

✅ **Run once** after PostgreSQL is set up — empties the Redis queue into the durable DB.
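
The script assumes the `quarantine` table already exists. A one-time setup sketch consistent with the INSERT above (the column types and the extra `status` column are assumptions):

```python
# One-time table setup; types are inferred from the INSERT above, and
# status is an assumed extra column for the dashboard's review workflow.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS quarantine (
        id        SERIAL PRIMARY KEY,
        timestamp TIMESTAMPTZ NOT NULL,
        reason    TEXT NOT NULL,
        log_line  TEXT NOT NULL,
        status    TEXT NOT NULL DEFAULT 'pending review'
    )
""")
conn.commit()
```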

---

## 🖥 **2️⃣ Admin Dashboard Spec**

**Purpose:** A web UI to manage the Sentinel’s security pipeline.

---

### 🎯 **Core Features**

✅ **Quarantine Browser**

- Paginated view of all quarantined logs
- Search/filter by `secret_type`, `source_agent`, `date`, `status`
- Mark quarantined logs as **reviewed** or **false alarm**

✅ **Regex Rules Manager**

- Lists all regexes from `patterns.yaml`
- Add / update / deactivate rules via the UI
- Shows `pending_updates` flagged by the Meta-Learner for human approval

✅ **Revocation Status Board**

- See which secrets triggered revocations
- Status of each revocation hook call (success/fail)

✅ **Metrics Dashboard**

- Charts: “Secrets Detected Over Time”, “Top Sources of Leaks”
- KPIs: # HIGH severity secrets this month, # rules updated, # false positives
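
Most of these panels are thin views over the quarantine table. A sketch of the two chart queries, reusing the `cursor` from the migration script and assuming the table has been extended with a `source_agent` column (the migration above only carries `timestamp`, `reason`, and `log_line`):

```python
# "Secrets Detected Over Time": daily counts for the time-series chart
cursor.execute("""
    SELECT date_trunc('day', timestamp) AS day, count(*)
    FROM quarantine
    GROUP BY day
    ORDER BY day
""")
secrets_over_time = cursor.fetchall()

# "Top Sources of Leaks": assumes an extra source_agent column
cursor.execute("""
    SELECT source_agent, count(*) AS leaks
    FROM quarantine
    GROUP BY source_agent
    ORDER BY leaks DESC
    LIMIT 10
""")
top_leak_sources = cursor.fetchall()
```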

---

### 🏗 **Tech Stack Suggestion**

- **Backend:** FastAPI (Python)
- **Frontend:** React + Tailwind
- **DB:** PostgreSQL for quarantine + rules history
- **Auth:** OAuth (GitHub/Google) + RBAC (only security admins can approve regex changes)

---

### 🔌 **Endpoints**

```
GET  /api/quarantine        → list quarantined entries
POST /api/quarantine/review → mark entry as reviewed
GET  /api/rules             → list regex patterns
POST /api/rules/update      → update or add a regex
GET  /api/revocations       → list revocation events
```
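
A minimal FastAPI sketch of the first two routes, to pin down request/response shapes (the model fields and the in-memory store are illustrative, not the real Postgres-backed implementation):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class ReviewRequest(BaseModel):
    entry_id: int
    status: str  # "reviewed" or "false alarm"

# Illustrative stand-in for the Postgres quarantine table
QUARANTINE: dict[int, dict] = {}

@app.get("/api/quarantine")
def list_quarantine(page: int = 1, per_page: int = 50):
    # Real implementation would use LIMIT/OFFSET against Postgres
    entries = list(QUARANTINE.values())
    start = (page - 1) * per_page
    return {"page": page, "entries": entries[start:start + per_page]}

@app.post("/api/quarantine/review")
def review_entry(req: ReviewRequest):
    if req.entry_id not in QUARANTINE:
        raise HTTPException(status_code=404, detail="Unknown quarantine entry")
    QUARANTINE[req.entry_id]["status"] = req.status
    return {"entry_id": req.entry_id, "status": req.status}
```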

---

### 🖥 **Mock Dashboard Layout**

- **Left Nav:** Quarantine | Rules | Revocations | Metrics
- **Main Panel:**
    - Data tables with sorting/filtering
    - Inline editors for regex rules
    - Approve/Reject buttons for pending regex updates

✅ Basically a **security control room** for Sentinel.

---

## 🤖 **3️⃣ Meta-Curator AI Prompt**

This agent reviews Sentinel’s work and **tunes it automatically**.

---

### **Meta-Curator: System Prompt**

> **Role & Mission:**
> You are the **Meta-Curator**, a supervisory AI responsible for reviewing the **Secrets Sentinel’s** detections, regex updates, and feedback reports.
>
> **Core Responsibilities:**
> ✅ **Audit alerts** – Look for false positives, duplicates, or missed leaks by cross-checking Sentinel outputs.
> ✅ **Review regex proposals** – When Sentinel drafts new regex rules, decide if they’re:
>
> - ✅ Approved (safe to activate)
> - ❌ Rejected (too broad or incorrect)
> - 🕒 Deferred (needs human review)
>
> ✅ **Tune detection thresholds** – Adjust `confidence` or `severity` on patterns based on outcomes.
> ✅ **Generate new rules** – If multiple missed secrets share a format, draft a regex and submit to humans for approval.
> ✅ **Report upstream** – Summarize changes to security admins weekly.

---

### **Behavior Guidelines**

- **Conservative by default:** Don’t auto-approve regexes unless confidence > 0.95.
- **Keep auditability:** Every decision (approve/reject) is logged in the hyperlog.
- **Respect human overrides:** Never overwrite a regex that a human explicitly locked.
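
These guidelines collapse into a small gating function. A sketch, using the 0.95 threshold from the first bullet (the 0.5 rejection floor and all names are assumptions):

```python
def decide_regex_proposal(proposal: dict, locked_rules: set[str]) -> str:
    """Map a Meta-Learner proposal to an approve/reject/defer action.

    The 0.95 auto-approve threshold comes from the guidelines above;
    the 0.5 rejection floor is an illustrative assumption.
    """
    if proposal["regex_name"] in locked_rules:
        return "defer"  # never touch a human-locked rule
    if proposal["confidence"] > 0.95:
        return "approve_regex"
    if proposal["confidence"] < 0.5:
        return "reject_regex"
    return "defer"  # pending human review
```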

---

### **Example Meta-Curator Output**

```json
{
  "action": "approve_regex",
  "regex_name": "GITLAB_TOKEN",
  "regex_pattern": "glpat-[0-9A-Za-z\\-_]{20}",
  "confidence": 0.97,
  "decision_reason": "Validated against 12 quarantined examples, no false positives found.",
  "timestamp": "2025-08-02T13:45:00Z"
}
```

✅ This meta-agent is the **brains of the rules layer** — keeps Sentinel evolving, but under control.
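
The `decision_reason` above implies a back-testing pass before approval. A sketch of how the Meta-Curator might validate a proposed pattern against quarantined examples and known-clean lines (sourcing both input lists from the quarantine table and past logs is an assumption about the pipeline):

```python
import re

def backtest_pattern(pattern: str, positives: list[str], negatives: list[str]):
    """Return (hit_rate, false_positive_rate) for a proposed regex.

    positives: quarantined lines known to contain the secret type;
    negatives: clean lines that must not match.
    """
    rx = re.compile(pattern)
    hit_rate = sum(bool(rx.search(s)) for s in positives) / max(len(positives), 1)
    fp_rate = sum(bool(rx.search(s)) for s in negatives) / max(len(negatives), 1)
    return hit_rate, fp_rate
```

A proposal like the GITLAB_TOKEN example would only be approved if `hit_rate` is high and `fp_rate` is zero on the back-test set.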

---

## 🚀 **Now You Have:**

✅ **Migration Path** → Redis → PostgreSQL
✅ **Admin Dashboard Spec** → complete with endpoints & layout
✅ **Meta-Curator Prompt** → the agent that “manages the manager”

---