Remove content directories from git tracking

Content and scheduled post directories will be managed separately
as Docker volume mounts for easier content management.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: anthonyrawlins
Date: 2025-08-27 14:47:40 +10:00
parent 6e13451dc4
commit 91c1cb9e5b
5 changed files with 5 additions and 230 deletions

.gitignore

@@ -48,4 +48,8 @@ logs/
.cache/
# Development scripts
dev-start.sh.bak
# Content directories (managed separately)
content/
scheduled/
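
With `content/` and `scheduled/` no longer tracked, the commit message indicates they will be supplied to the running container as Docker volume mounts instead. A minimal sketch of what that could look like in a compose file; the service name, image, and container paths below are assumptions for illustration, not taken from this repository:

```yaml
# Hypothetical docker-compose.yml fragment (illustrative only).
# Service name, image, and container paths are assumptions.
services:
  blog:
    image: chorus-blog:latest        # placeholder image name
    volumes:
      # Host directories, now untracked by git, mounted into the container
      - ./content:/app/content
      - ./scheduled:/app/scheduled
```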


@@ -1,51 +0,0 @@
---
title: "Beyond RAG: The Future of AI Context with CHORUS"
description: "AI is moving fast, but one of the biggest bottlenecks isn't model size or compute power—it's context management. Here's how CHORUS goes beyond traditional RAG approaches."
date: "2025-08-28"
author:
  name: "Anthony Rawlins"
  role: "CEO & Founder, CHORUS Services"
tags:
- "contextual-ai"
- "RAG"
- "context-management"
- "hierarchical-reasoning"
featured: false
---
AI is moving fast, but one of the biggest bottlenecks isn't model size or compute power; it's **context management**.
For years, **Retrieval-Augmented Generation (RAG)** has been the go-to method for extending large language models (LLMs). By bolting on vector databases and search, RAG helps models pull in relevant documents. It works, but only to a point. Anyone who's scaled production systems knows the cracks:
* RAG treats knowledge as flat text snippets, missing relationships and nuance.
* Git and other version-control systems capture *code history*, but not the evolving reasoning behind decisions.
* Static context caches snap a picture in time, but knowledge and workflows don't stand still.
In short: **RAG, Git, and static context snapshots aren't enough for the next generation of AI.**
## Why Hierarchical Context Matters
Knowledge isn't just a pile of files — it's layered, temporal, and deeply interconnected. AI systems need to track *how* reasoning unfolds, *why* decisions were made, and *how context evolves over time*. That's where **Chorus** comes in.
Instead of treating context as documents to fetch, we treat it as a **living, distributed hierarchy**. Chorus enables agents to share, navigate, and build on structured threads of reasoning across domains and time. It's not just about retrieval — it's about orchestration, memory, and continuity.
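To make the contrast with flat retrieval concrete, here is a purely illustrative Python sketch of what one node in a hierarchical, temporal context thread might look like. The class and field names are assumptions made for this example only; they are not CHORUS's actual data model or API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ContextNode:
    """One step of reasoning in a larger thread (illustrative sketch only)."""
    content: str                            # the decision, observation, or rationale
    author: str                             # which agent or person produced it
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    parent: Optional["ContextNode"] = None  # link upward in the hierarchy
    children: list["ContextNode"] = field(default_factory=list)

    def add_child(self, content: str, author: str) -> "ContextNode":
        """Extend the thread: later reasoning builds on earlier context."""
        node = ContextNode(content=content, author=author, parent=self)
        self.children.append(node)
        return node

    def lineage(self) -> list[str]:
        """Walk back to the root, recovering *why* a conclusion was reached."""
        chain, node = [], self
        while node is not None:
            chain.append(node.content)
            node = node.parent
        return list(reversed(chain))
```

A flat RAG index would return each of these entries as an isolated snippet; keeping the parent/child links is what lets an agent replay the reasoning that led to a decision.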
## Research Is Moving the Same Way
The AI research frontier points in this direction too:
* **NVIDIA's recent small model papers** showed that scaling up isn't the only answer — well-designed small models can outperform by being more structured and specialized.
* The **Hierarchical Reasoning Model (HRM)** highlights how smarter architectures, not just bigger context windows, unlock deeper reasoning.
Both emphasize the same principle: **intelligence comes from structure, not size alone**.
## What's Next
Chorus is building the scaffolding for this new paradigm. Our goal is to make context:
* **Persistent**: reasoning doesn't vanish when the session ends.
* **Navigable**: past decisions and justifications are always accessible.
* **Collaborative**: multiple agents can share and evolve context together.
We're not giving away the full blueprint yet, but if you're interested in what lies **beyond RAG**, beyond Git, and beyond static memory hacks, keep watching.
The future of **AI context management** is closer than you think.


@@ -1,67 +0,0 @@
---
title: "Welcome to PING!"
description: "The blog about contextual AI orchestration, agent coordination, and the future of intelligent systems."
date: "2025-08-27"
author:
  name: "Anthony Rawlins"
  role: "CEO & Founder, CHORUS Services"
tags:
- "announcement"
- "contextual-ai"
- "orchestration"
featured: true
---
We're excited to launch PING! — the blog about contextual AI orchestration, agent coordination, and the future of intelligent systems. This is where we'll share our thoughts, insights, and discoveries as we build the future of contextual AI.
## What to Expect
Our blog will cover a range of topics that are central to our mission at CHORUS:
### Contextual AI Orchestration
Deep dives into how we're solving the challenge of getting **the right context, to the right agent, at the right time**. We'll explore the architectural decisions, technical challenges, and innovative solutions that make contextual AI orchestration possible.
### Agent Coordination
Insights into how autonomous agents can work together effectively, including:
- **P2P Agent Networks**: How agents discover and coordinate with each other
- **Decision Making**: Algorithms and patterns for distributed decision making
- **Context Sharing**: Efficient methods for agents to share relevant context
### Technical Architecture
Behind-the-scenes looks at the systems we're building:
- **BZZZ**: Our P2P agent coordination platform
- **SLURP**: Our context curation and intelligence system
- **WHOOSH**: Our orchestration and workflow platform
### Industry Perspectives
Our thoughts on the evolving AI landscape, emerging patterns in agent-based systems, and where we think the industry is heading.
## Our Philosophy
At CHORUS, we believe that the future of AI isn't just about making individual models more powerful—it's about creating **intelligent systems** where multiple agents, each with their own specialized capabilities, can work together seamlessly.
The key insight is **context**. Without the right context, even the most powerful AI agent is just expensive autocomplete. With the right context, even smaller specialized agents can achieve remarkable results.
## What's Coming Next
Over the coming weeks and months, we'll be sharing:
1. **Technical deep dives** into our core systems
2. **Case studies** from our development work
3. **Tutorials** on building contextual AI systems
4. **Industry analysis** on AI orchestration trends
5. **Open source releases** and community projects
## Join the Conversation
We're building CHORUS in the open, and we want you to be part of the journey. Whether you're an AI researcher, a developer building agent-based systems, or just someone curious about the future of AI, we'd love to hear from you.
**Stay Connected:**
- Join our [waitlist](https://chorus.services) for early access
- Connect with us on [LinkedIn](https://linkedin.com/company/chorus-services)
The future of AI is contextual, distributed, and orchestrated. Let's build it together.
---
*Want to learn more about CHORUS? Visit our [main website](https://chorus.services) to explore our vision for contextual AI orchestration.*


@@ -1,53 +0,0 @@
---
title: "Why On-prem GPUs Still Matter for AI"
description: "Own the stack. Own your data."
date: "2025-08-29"
author:
  name: "Anthony Rawlins"
  role: "CEO & Founder, CHORUS Services"
tags:
- "gpu compute"
- "contextual-ai"
- "infrastructure"
featured: false
---
Cloud GPUs are everywhere right now, but if you've tried to run serious workloads, you know the story: long queues, high costs, throttling, and vendor lock-in. Renting compute might be convenient for prototypes, but at scale it gets expensive and limiting.
That's why more teams are rethinking **on-premises GPU infrastructure**.
## The Case for In-House Compute
1. **Cost at Scale**: Training, fine-tuning, or heavy inference workloads rack up cloud costs quickly. Owning your own GPUs flips that equation over the long term.
2. **Control & Customization**: You own the stack: drivers, runtimes, schedulers, cluster topology. No waiting on cloud providers.
3. **Latency & Data Gravity**: Keeping data close to the GPUs removes bandwidth bottlenecks. If your data already lives in-house, shipping it to the cloud and back is wasteful.
4. **Privacy & Compliance**: Your models and data stay under your governance. No shared tenancy, no external handling.
## Not Just About Training Massive LLMs
It's easy to think of GPUs as “just for training giant foundation models.” But most teams today are leveraging GPUs for:
* **Inference at scale**: low-latency deployments.
* **Fine-tuning & adapters**: customizing smaller models.
* **Vector search & embeddings**: powering RAG pipelines.
* **Analytics & graph workloads**: accelerated by frameworks like RAPIDS.
This is where recent research gets interesting. NVIDIA's latest papers on **small models** show that capability doesn't just scale with parameter count — it scales with *specialization and structure*. Instead of defaulting to giant black-box LLMs, we're entering a world where **smaller, domain-tuned models** run faster, cheaper, and more predictably.
And with the launch of the **Blackwell architecture**, the GPU landscape itself is changing. Blackwell isn't just about raw FLOPs; it's about efficiency, memory bandwidth, and supporting mixed workloads (training + inference + data processing) on the same platform. That's exactly the kind of balance on-prem clusters can exploit.
## Where This Ties Back to Chorus
At Chorus, we think of GPUs not just as horsepower, but as the **substrate that makes distributed reasoning practical**. Hierarchical context and agent orchestration require low-latency, high-throughput compute — the kind that's tough to guarantee in the cloud. On-prem clusters give us:
* Predictable performance for multi-agent reasoning.
* Dedicated acceleration for embeddings and vector ops.
* A foundation for experimenting with **HRM-inspired** approaches that don't just make models bigger, but make them smarter.
## The Bottom Line
The future isn't cloud *versus* on-prem — it's hybrid. Cloud for burst capacity, on-prem GPUs for sustained reasoning, privacy, and cost control. Owning your own stack is about **freedom**: the freedom to innovate at your pace, tune your models your way, and build intelligence on infrastructure you trust.
The real question isn't whether you *can* run AI on-prem.
It's whether you can afford *not to*.


@@ -1,58 +0,0 @@
# Scheduled Posts
This directory contains blog posts that are scheduled for future publication.
## Directory Structure
```
scheduled/
├── 2024/
│   ├── 01/
│   ├── 02/
│   └── ...
├── 2025/
│   ├── 01/
│   ├── 02/
│   └── ...
└── README.md
```
## File Naming Convention
Posts should be named with the format: `YYYY-MM-DD-slug.md`
Example: `2024-03-15-understanding-ai-agents.md`
## Frontmatter Format
Each scheduled post should include the following frontmatter:
```yaml
---
title: "Your Post Title"
description: "Brief description of the post"
date: "2024-03-15"
publishDate: "2024-03-15T09:00:00.000Z"
author:
name: "Author Name"
role: "Author Role"
tags:
- "tag1"
- "tag2"
featured: false
draft: false
---
```
## Publishing Process
1. Write your post in the appropriate scheduled directory
2. Set the `publishDate` to when you want it published
3. A scheduled job will move posts from `scheduled/` to `posts/` when their publish date arrives (a sketch of such a job is shown after this list)
4. The blog will automatically pick up the new post and display it
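The scheduled job itself lives outside this directory; the following is a minimal sketch of what it could look like, assuming a sibling `posts/` directory and PyYAML for parsing the frontmatter (both assumptions based on this README, not the actual job):

```python
#!/usr/bin/env python3
"""Illustrative publish job: move scheduled posts whose publishDate has
passed from scheduled/ into posts/. Paths and behaviour are assumptions
inferred from this README, not the real implementation."""
from datetime import datetime, timezone
from pathlib import Path
import shutil
import yaml  # PyYAML

SCHEDULED = Path("scheduled")
POSTS = Path("posts")

def read_frontmatter(path: Path) -> dict:
    """Parse the YAML block between the leading '---' delimiters."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return {}
    _, block, _ = text.split("---", 2)
    return yaml.safe_load(block) or {}

def publish_due_posts() -> None:
    now = datetime.now(timezone.utc)
    for md in SCHEDULED.rglob("*.md"):
        if md.name == "README.md":
            continue
        meta = read_frontmatter(md)
        publish_at = meta.get("publishDate")
        if meta.get("draft") or not publish_at:
            continue
        # Frontmatter stores an ISO timestamp, e.g. "2024-03-15T09:00:00.000Z"
        when = datetime.fromisoformat(str(publish_at).replace("Z", "+00:00"))
        if when.tzinfo is None:
            when = when.replace(tzinfo=timezone.utc)
        if when <= now:
            POSTS.mkdir(exist_ok=True)
            shutil.move(str(md), str(POSTS / md.name))

if __name__ == "__main__":
    publish_due_posts()
```

A cron entry or CI schedule running a script like this once an hour would be enough to satisfy the publishing process described above.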
## Notes
- Posts in this directory are not visible on the live blog until moved to `posts/`
- Use `draft: true` for posts that are work-in-progress
- The `publishDate` field determines when the post goes live