WIP: Save agent roles integration work before CHORUS rebrand

- Agent roles and coordination features
- Chat API integration testing
- New configuration and workspace management

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: anthonyrawlins
Date: 2025-08-01 02:21:11 +10:00
Parent: 81b473d48f
Commit: 5978a0b8f5
3713 changed files with 1103925 additions and 59 deletions

vendor/github.com/libp2p/go-buffer-pool/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Juan Batiz-Benet
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

vendor/github.com/libp2p/go-buffer-pool/LICENSE-BSD generated vendored Normal file

@@ -0,0 +1,29 @@
### Applies to buffer.go and buffer_test.go ###
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/libp2p/go-buffer-pool/README.md generated vendored Normal file

@@ -0,0 +1,53 @@
go-buffer-pool
==================
[![](https://img.shields.io/badge/made%20by-Protocol%20Labs-blue.svg?style=flat-square)](https://protocol.ai)
[![](https://img.shields.io/badge/project-libp2p-yellow.svg?style=flat-square)](https://libp2p.io/)
[![](https://img.shields.io/badge/freenode-%23libp2p-yellow.svg?style=flat-square)](https://webchat.freenode.net/?channels=%23libp2p)
[![codecov](https://codecov.io/gh/libp2p/go-buffer-pool/branch/master/graph/badge.svg)](https://codecov.io/gh/libp2p/go-buffer-pool)
[![Travis CI](https://travis-ci.org/libp2p/go-buffer-pool.svg?branch=master)](https://travis-ci.org/libp2p/go-buffer-pool)
[![Discourse posts](https://img.shields.io/discourse/https/discuss.libp2p.io/posts.svg)](https://discuss.libp2p.io)
> A variable-size buffer pool for Go.
## Table of Contents
- [Use Case](#use-case)
- [Advantages over GC](#advantages-over-gc)
- [Disadvantages over GC:](#disadvantages-over-gc)
- [Contribute](#contribute)
- [License](#license)
## Use Case
Use this when you need to repeatedly allocate and free a bunch of temporary buffers of approximately the same size.
### Advantages over GC
* Reduces Memory Usage:
* We don't have to wait for a GC to run before we can reuse memory. This is essential if you're repeatedly allocating large short-lived buffers.
* Reduces CPU usage:
* It takes some load off of the GC (due to buffer reuse).
* We don't have to zero buffers (fewer wasteful memory writes).
### Disadvantages over GC:
* Can leak memory contents. Unlike the Go GC, we *don't* zero memory.
* All buffers have a capacity of a power of 2. This is fine if you either (a) actually need buffers with this size or (b) expect these buffers to be temporary.
* Requires that buffers be returned explicitly. This can lead to race conditions and memory corruption if the buffer is released while it's still in use.
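
For concreteness, a minimal usage sketch of the pooled Get/Put round trip (the surrounding program and the 4096-byte size are illustrative, not part of this module):

```go
package main

import (
	"fmt"

	pool "github.com/libp2p/go-buffer-pool"
)

func main() {
	// Borrow a buffer of at least 4096 bytes from the global pool.
	buf := pool.Get(4096)

	// ... fill and use buf ...

	// Return it when finished. Its contents are NOT zeroed, so drop all
	// references and don't recycle buffers holding secrets.
	pool.Put(buf)

	// A later Get of a similar size may reuse the same backing array.
	again := pool.Get(4096)
	fmt.Println(len(again), cap(again)) // 4096 4096
}
```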
## Contribute
PRs are welcome!
Small note: If editing the Readme, please conform to the [standard-readme](https://github.com/RichardLitt/standard-readme) specification.
## License
MIT © Protocol Labs
BSD © The Go Authors
---
The last gx published version of this module was: 0.1.3: QmQDvJoB6aJWN3sjr3xsgXqKCXf4jU5zdMXpDMsBkYVNqa

vendor/github.com/libp2p/go-buffer-pool/buffer.go generated vendored Normal file

@@ -0,0 +1,302 @@
// This is a derivative work of Go's bytes.Buffer implementation.
//
// Originally copyright 2009 The Go Authors. All rights reserved.
//
// Modifications copyright 2018 Steven Allen. All rights reserved.
//
// Use of this source code is governed by both a BSD-style and an MIT-style
// license that can be found in the LICENSE_BSD and LICENSE files.

package pool

import (
	"io"
)

// Buffer is a buffer like bytes.Buffer that:
//
// 1. Uses a buffer pool.
// 2. Frees memory on read.
//
// If you only have a few buffers and read/write at a steady rate, *don't* use
// this package, it'll be slower.
//
// However:
//
// 1. If you frequently create/destroy buffers, this implementation will be
//    significantly nicer to the allocator.
// 2. If you have many buffers with bursty traffic, this implementation will use
//    significantly less memory.
type Buffer struct {
	// Pool is the buffer pool to use. If nil, this Buffer will use the
	// global buffer pool.
	Pool *BufferPool

	buf  []byte
	rOff int

	// Preallocated slice for small reads/writes.
	// This is *really* important for performance and only costs 8 words.
	bootstrap [64]byte
}

// NewBuffer constructs a new buffer initialized to `buf`.
// Unlike `bytes.Buffer`, we *copy* the buffer but don't reuse it (to ensure
// that we *only* use buffers from the pool).
func NewBuffer(buf []byte) *Buffer {
	b := new(Buffer)
	if len(buf) > 0 {
		b.buf = b.getBuf(len(buf))
		copy(b.buf, buf)
	}
	return b
}

// NewBufferString is identical to NewBuffer *except* that it allows one to
// initialize the buffer from a string (without having to allocate an
// intermediate bytes slice).
func NewBufferString(buf string) *Buffer {
	b := new(Buffer)
	if len(buf) > 0 {
		b.buf = b.getBuf(len(buf))
		copy(b.buf, buf)
	}
	return b
}

func (b *Buffer) grow(n int) int {
	wOff := len(b.buf)
	bCap := cap(b.buf)

	if bCap >= wOff+n {
		b.buf = b.buf[:wOff+n]
		return wOff
	}

	bSize := b.Len()
	minCap := 2*bSize + n

	// Slide if cap >= minCap.
	// Reallocate otherwise.
	if bCap >= minCap {
		copy(b.buf, b.buf[b.rOff:])
	} else {
		// Needs new buffer.
		newBuf := b.getBuf(minCap)
		copy(newBuf, b.buf[b.rOff:])
		b.returnBuf()
		b.buf = newBuf
	}

	b.rOff = 0
	b.buf = b.buf[:bSize+n]
	return bSize
}

func (b *Buffer) getPool() *BufferPool {
	if b.Pool == nil {
		return GlobalPool
	}
	return b.Pool
}

func (b *Buffer) returnBuf() {
	if cap(b.buf) > len(b.bootstrap) {
		b.getPool().Put(b.buf)
	}
	b.buf = nil
}

func (b *Buffer) getBuf(n int) []byte {
	if n <= len(b.bootstrap) {
		return b.bootstrap[:n]
	}
	return b.getPool().Get(n)
}

// Len returns the number of bytes that can be read from this buffer.
func (b *Buffer) Len() int {
	return len(b.buf) - b.rOff
}

// Cap returns the current capacity of the buffer.
//
// Note: Buffer *may* re-allocate when writing (or growing by) `n` bytes even if
// `Cap() >= Len() + n`, to avoid excessive copying.
func (b *Buffer) Cap() int {
	return cap(b.buf)
}

// Bytes returns the slice of bytes currently buffered in the Buffer.
//
// The buffer returned by Bytes is valid until the next call to grow, truncate,
// read, or write. Really, just don't touch the Buffer until you're done with
// the return value of this function.
func (b *Buffer) Bytes() []byte {
	return b.buf[b.rOff:]
}

// String returns the string representation of the buffer.
//
// It returns `<nil>` if the buffer is a nil pointer.
func (b *Buffer) String() string {
	if b == nil {
		return "<nil>"
	}
	return string(b.buf[b.rOff:])
}

// WriteString writes a string to the buffer.
//
// This function is identical to Write except that it allows one to write a
// string directly without allocating an intermediate byte slice.
func (b *Buffer) WriteString(buf string) (int, error) {
	wOff := b.grow(len(buf))
	return copy(b.buf[wOff:], buf), nil
}

// Truncate truncates the Buffer.
//
// Panics if `n > b.Len()`.
//
// This function may free memory by shrinking the internal buffer.
func (b *Buffer) Truncate(n int) {
	if n < 0 || n > b.Len() {
		panic("truncation out of range")
	}
	b.buf = b.buf[:b.rOff+n]
	b.shrink()
}

// Reset is equivalent to Truncate(0).
func (b *Buffer) Reset() {
	b.returnBuf()
	b.rOff = 0
}

// ReadByte reads a single byte from the Buffer.
func (b *Buffer) ReadByte() (byte, error) {
	if b.rOff >= len(b.buf) {
		return 0, io.EOF
	}
	c := b.buf[b.rOff]
	b.rOff++
	return c, nil
}

// WriteByte writes a single byte to the Buffer.
func (b *Buffer) WriteByte(c byte) error {
	wOff := b.grow(1)
	b.buf[wOff] = c
	return nil
}

// Grow grows the internal buffer such that `n` bytes can be written without
// reallocating.
func (b *Buffer) Grow(n int) {
	wOff := b.grow(n)
	b.buf = b.buf[:wOff]
}

// Next is an alternative to `Read` that returns a byte slice instead of taking
// one.
//
// The returned byte slice is valid until the next read, write, grow, or
// truncate.
func (b *Buffer) Next(n int) []byte {
	m := b.Len()
	if m < n {
		n = m
	}
	data := b.buf[b.rOff : b.rOff+n]
	b.rOff += n
	return data
}

// Write writes the byte slice to the buffer.
func (b *Buffer) Write(buf []byte) (int, error) {
	wOff := b.grow(len(buf))
	return copy(b.buf[wOff:], buf), nil
}

// WriteTo copies from the buffer into the given writer until the buffer is
// empty.
func (b *Buffer) WriteTo(w io.Writer) (int64, error) {
	if b.rOff < len(b.buf) {
		n, err := w.Write(b.buf[b.rOff:])
		b.rOff += n
		if b.rOff > len(b.buf) {
			panic("invalid write count")
		}
		b.shrink()
		return int64(n), err
	}
	return 0, nil
}

// MinRead is the minimum slice size passed to a Read call by
// Buffer.ReadFrom. As long as the Buffer has at least MinRead bytes beyond
// what is required to hold the contents of r, ReadFrom will not grow the
// underlying buffer.
const MinRead = 512

// ReadFrom reads from the given reader into the buffer.
func (b *Buffer) ReadFrom(r io.Reader) (int64, error) {
	n := int64(0)
	for {
		wOff := b.grow(MinRead)
		// Use *entire* buffer.
		b.buf = b.buf[:cap(b.buf)]

		read, err := r.Read(b.buf[wOff:])
		b.buf = b.buf[:wOff+read]
		n += int64(read)
		switch err {
		case nil:
		case io.EOF:
			err = nil
			fallthrough
		default:
			b.shrink()
			return n, err
		}
	}
}

// Read reads at most `len(buf)` bytes from the internal buffer into the given
// buffer.
func (b *Buffer) Read(buf []byte) (int, error) {
	if len(buf) == 0 {
		return 0, nil
	}
	if b.rOff >= len(b.buf) {
		return 0, io.EOF
	}
	n := copy(buf, b.buf[b.rOff:])
	b.rOff += n
	b.shrink()
	return n, nil
}

func (b *Buffer) shrink() {
	c := b.Cap()
	// Either nil or bootstrap.
	if c <= len(b.bootstrap) {
		return
	}

	l := b.Len()
	if l == 0 {
		// Shortcut if empty.
		b.returnBuf()
		b.rOff = 0
	} else if l*8 < c {
		// Only shrink when capacity > 8x length. Avoids shrinking too aggressively.
		newBuf := b.getBuf(l)
		copy(newBuf, b.buf[b.rOff:])
		b.returnBuf()
		b.rOff = 0
		b.buf = newBuf[:l]
	}
}

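As a reading aid for the vendored buffer.go above, here is a minimal, hedged sketch of pool.Buffer used as a bytes.Buffer replacement; the surrounding program and values are illustrative, and only the Buffer methods come from the file above.

package main

import (
	"fmt"

	pool "github.com/libp2p/go-buffer-pool"
)

func main() {
	var b pool.Buffer // zero value is usable; a nil Pool field selects the global pool

	b.WriteString("hello ")
	b.Write([]byte("world"))

	out := make([]byte, 5)
	n, _ := b.Read(out)                  // reads may shrink or release the backing array
	fmt.Println(n, string(out), b.Len()) // 5 hello 6

	b.Reset() // empties the buffer and returns any pooled backing array
}
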
vendor/github.com/libp2p/go-buffer-pool/codecov.yml generated vendored Normal file

@@ -0,0 +1,3 @@
coverage:
  range: "50...100"
comment: off

vendor/github.com/libp2p/go-buffer-pool/pool.go generated vendored Normal file

@@ -0,0 +1,117 @@
// Package pool provides a sync.Pool equivalent that buckets incoming
// requests to one of 32 sub-pools, one for each power of 2 from 0 to 31.
//
//	import (pool "github.com/libp2p/go-buffer-pool")
//	var p pool.BufferPool
//
//	small := make([]byte, 1024)
//	large := make([]byte, 4194304)
//	p.Put(small)
//	p.Put(large)
//
//	small2 := p.Get(1024)
//	large2 := p.Get(4194304)
//	fmt.Println("small2 len:", len(small2))
//	fmt.Println("large2 len:", len(large2))
//
//	// Output:
//	// small2 len: 1024
//	// large2 len: 4194304
package pool

import (
	"math"
	"math/bits"
	"sync"
)

// GlobalPool is a static Pool for reusing byteslices of various sizes.
var GlobalPool = new(BufferPool)

// MaxLength is the maximum length of an element that can be added to the Pool.
const MaxLength = math.MaxInt32

// BufferPool is a pool to handle cases of reusing elements of varying sizes. It
// maintains 32 internal pools, one for each power of 2 from 0 to 31.
//
// You should generally just call the package level Get and Put methods or use
// the GlobalPool BufferPool instead of constructing your own.
//
// You MUST NOT copy a BufferPool after first use.
type BufferPool struct {
	pools [32]sync.Pool // a list of singlePools
	ptrs  sync.Pool
}

type bufp struct {
	buf []byte
}

// Get retrieves a buffer of the appropriate length from the buffer pool or
// allocates a new one. Get may choose to ignore the pool and treat it as empty.
// Callers should not assume any relation between values passed to Put and the
// values returned by Get.
//
// If no suitable buffer exists in the pool, Get creates one.
func (p *BufferPool) Get(length int) []byte {
	if length == 0 {
		return nil
	}
	// Calling this function with a negative length is invalid.
	// make will panic if length is negative, so we don't have to.
	if length > MaxLength || length < 0 {
		return make([]byte, length)
	}
	idx := nextLogBase2(uint32(length))
	if ptr := p.pools[idx].Get(); ptr != nil {
		bp := ptr.(*bufp)
		buf := bp.buf[:uint32(length)]
		bp.buf = nil
		p.ptrs.Put(ptr)
		return buf
	}
	return make([]byte, 1<<idx)[:uint32(length)]
}

// Put adds buf to the pool.
func (p *BufferPool) Put(buf []byte) {
	capacity := cap(buf)
	if capacity == 0 || capacity > MaxLength {
		return // drop it
	}
	idx := prevLogBase2(uint32(capacity))
	var bp *bufp
	if ptr := p.ptrs.Get(); ptr != nil {
		bp = ptr.(*bufp)
	} else {
		bp = new(bufp)
	}
	bp.buf = buf
	p.pools[idx].Put(bp)
}

// Get retrieves a buffer of the appropriate length from the global buffer pool
// (or allocates a new one).
func Get(length int) []byte {
	return GlobalPool.Get(length)
}

// Put returns a buffer to the global buffer pool.
func Put(slice []byte) {
	GlobalPool.Put(slice)
}

// Log of base two, round up (for v > 0).
func nextLogBase2(v uint32) uint32 {
	return uint32(bits.Len32(v - 1))
}

// Log of base two, round down (for v > 0).
func prevLogBase2(num uint32) uint32 {
	next := nextLogBase2(num)
	if num == (1 << uint32(next)) {
		return next
	}
	return next - 1
}

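To make the bucketing in pool.go above concrete, a small hedged sketch (sizes and the surrounding program are illustrative; only the BufferPool API comes from the file above):

package main

import (
	"fmt"

	pool "github.com/libp2p/go-buffer-pool"
)

func main() {
	var p pool.BufferPool

	// A request for 1000 bytes falls in the 1024-byte bucket: the allocation is
	// 1024 bytes, sliced back down to the requested length.
	buf := p.Get(1000)
	fmt.Println(len(buf), cap(buf)) // 1000 1024

	// Put buckets by capacity (rounded *down* to a power of 2), so this buffer
	// goes back into the bucket that serves 513-1024 byte requests.
	p.Put(buf)

	// Get(0) simply returns nil without touching the pool.
	fmt.Println(p.Get(0) == nil) // true
}
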
vendor/github.com/libp2p/go-buffer-pool/version.json generated vendored Normal file

@@ -0,0 +1,3 @@
{
	"version": "v0.1.0"
}

vendor/github.com/libp2p/go-buffer-pool/writer.go generated vendored Normal file

@@ -0,0 +1,119 @@
package pool

import (
	"bufio"
	"io"
	"sync"
)

const WriterBufferSize = 4096

var bufioWriterPool = sync.Pool{
	New: func() interface{} {
		return bufio.NewWriterSize(nil, WriterBufferSize)
	},
}

// Writer is a buffered writer that returns its internal buffer to a pool when
// not in use.
type Writer struct {
	W    io.Writer
	bufw *bufio.Writer
}

func (w *Writer) ensureBuffer() {
	if w.bufw == nil {
		w.bufw = bufioWriterPool.Get().(*bufio.Writer)
		w.bufw.Reset(w.W)
	}
}

// Write writes the given byte slice to the underlying writer.
//
// Note: Write won't return the write buffer to the pool even if it ends up
// being empty after the write. You must call Flush() to do that.
func (w *Writer) Write(b []byte) (int, error) {
	if w.bufw == nil {
		if len(b) >= WriterBufferSize {
			return w.W.Write(b)
		}
		w.bufw = bufioWriterPool.Get().(*bufio.Writer)
		w.bufw.Reset(w.W)
	}
	return w.bufw.Write(b)
}

// Size returns the size of the underlying buffer.
func (w *Writer) Size() int {
	return WriterBufferSize
}

// Available returns the amount of buffer space available.
func (w *Writer) Available() int {
	if w.bufw != nil {
		return w.bufw.Available()
	}
	return WriterBufferSize
}

// Buffered returns the amount of data buffered.
func (w *Writer) Buffered() int {
	if w.bufw != nil {
		return w.bufw.Buffered()
	}
	return 0
}

// WriteByte writes a single byte.
func (w *Writer) WriteByte(b byte) error {
	w.ensureBuffer()
	return w.bufw.WriteByte(b)
}

// WriteRune writes a single rune, returning the number of bytes written.
func (w *Writer) WriteRune(r rune) (int, error) {
	w.ensureBuffer()
	return w.bufw.WriteRune(r)
}

// WriteString writes a string, returning the number of bytes written.
func (w *Writer) WriteString(s string) (int, error) {
	w.ensureBuffer()
	return w.bufw.WriteString(s)
}

// Flush flushes the write buffer, if any, and returns it to the pool.
func (w *Writer) Flush() error {
	if w.bufw == nil {
		return nil
	}
	if err := w.bufw.Flush(); err != nil {
		return err
	}
	w.bufw.Reset(nil)
	bufioWriterPool.Put(w.bufw)
	w.bufw = nil
	return nil
}

// Close flushes the underlying writer and closes it if it implements the
// io.Closer interface.
//
// Note: Close() closes the writer even if Flush() fails to avoid leaking system
// resources. If you want to make sure Flush() succeeds, call it first.
func (w *Writer) Close() error {
	var (
		ferr, cerr error
	)
	ferr = w.Flush()

	// always close even if flush fails.
	if closer, ok := w.W.(io.Closer); ok {
		cerr = closer.Close()
	}

	if ferr != nil {
		return ferr
	}
	return cerr
}
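
Finally, a hedged usage sketch for pool.Writer above (bytes.Buffer stands in for a real network connection and the program is illustrative; only Writer, WriteString, and Flush come from the vendored file):

package main

import (
	"bytes"
	"fmt"

	pool "github.com/libp2p/go-buffer-pool"
)

func main() {
	var sink bytes.Buffer
	w := &pool.Writer{W: &sink}

	// Small writes land in a pooled bufio.Writer; nothing reaches sink yet.
	w.WriteString("small payload\n")
	fmt.Println(sink.Len()) // 0

	// Flush pushes the buffered bytes to sink and returns the bufio.Writer to
	// the shared pool. Writes of WriterBufferSize bytes or more made while no
	// buffer is held bypass the buffer entirely.
	if err := w.Flush(); err != nil {
		panic(err)
	}
	fmt.Print(sink.String()) // small payload
}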