WIP: Save agent roles integration work before CHORUS rebrand

- Agent roles and coordination features
- Chat API integration testing
- New configuration and workspace management

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
anthonyrawlins
2025-08-01 02:21:11 +10:00
parent 81b473d48f
commit 5978a0b8f5
3713 changed files with 1103925 additions and 59 deletions

# HCFS-Integrated Development Environment
This directory contains Docker configurations for creating HCFS-enabled development environments that provide AI agents with persistent, context-aware workspaces.
## 🎯 Overview
Instead of using temporary directories that are lost when containers stop, this system integrates with HCFS (Hierarchical Context File System) to provide:
- **Persistent Workspaces**: Agent work is stored in HCFS and survives container restarts
- **Context Sharing**: Multiple agents can access and build upon each other's work
- **Intelligent Artifact Collection**: Important files are automatically stored in HCFS
- **Role-Based Access**: Agents can access context relevant to their specialization
- **Feedback Learning**: The RL Context Curator learns from agent success/failure patterns
## 🏗️ Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Bzzz Agents │ │ HCFS-Enabled │ │ HCFS Core │
│ │ │ Containers │ │ │
│ • CLI Agents │◄──►│ │◄──►│ • Context API │
│ • Ollama Models │ │ • Python Dev │ │ • RL Curator │
│ • Reasoning │ │ • Node.js Dev │ │ • Storage │
│ • Code Review │ │ • Go Dev │ │ • Search │
└─────────────────┘ │ • Generic Base │ └─────────────────┘
└─────────────────┘
```
## 🐳 Available Images
### Base Image: `bzzz-hcfs-base`
- Ubuntu 22.04 with HCFS integration
- Standard development tools (git, make, curl, etc.)
- HCFS workspace management scripts
- Agent user with proper permissions
- FUSE support for HCFS mounting
### Language-Specific Images:
#### `bzzz-hcfs-python`
- Python 3.10 with comprehensive ML/AI packages
- Jupyter Lab/Notebook support
- Popular frameworks: Flask, FastAPI, Django
- Data science stack: NumPy, Pandas, scikit-learn
- Deep learning: PyTorch, Transformers
- **Ports**: 8888 (Jupyter), 8000, 5000, 8080
#### `bzzz-hcfs-nodejs`
- Node.js 20 with modern JavaScript/TypeScript tools
- Package managers: npm, yarn
- Build tools: Webpack, Vite, Rollup
- Testing: Jest, Mocha, Cypress
- **Ports**: 3000, 8080, 8000, 9229 (debugger)
#### `bzzz-hcfs-go`
- Go 1.21 with standard development tools
- Popular frameworks: Gin, Echo, Fiber
- Development tools: Delve debugger, Air live reload
- **Ports**: 8080, 8000, 9000, 2345 (debugger)
## 🚀 Quick Start
### 1. Build the Images
```bash
cd /home/tony/AI/projects/Bzzz/docker
./build-hcfs-images.sh build
```
### 2. Start the HCFS Ecosystem
```bash
docker-compose -f docker-compose.hcfs.yml up -d
```
### 3. Access Development Environments
**Python Development:**
```bash
# Interactive shell
docker exec -it agent-python-dev bash
# Jupyter Lab
open http://localhost:8888
```
**Node.js Development:**
```bash
# Interactive shell
docker exec -it agent-nodejs-dev bash
# Start development server
docker exec -it agent-nodejs-dev npm run dev
```
**Go Development:**
```bash
# Interactive shell
docker exec -it agent-go-dev bash
# Build and run
docker exec -it agent-go-dev make build run
```
## 🔧 Configuration
### Environment Variables
**Required for HCFS Integration:**
- `AGENT_ID`: Unique identifier for the agent
- `TASK_ID`: Task identifier for workspace context
- `HCFS_API_URL`: HCFS API endpoint (default: http://host.docker.internal:8000)
- `HCFS_ENABLED`: Enable/disable HCFS integration (default: true)
**Optional:**
- `GIT_USER_NAME`: Git configuration
- `GIT_USER_EMAIL`: Git configuration
- `SETUP_PYTHON_VENV`: Create Python virtual environment
- `NODE_ENV`: Node.js environment mode
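
At startup the entrypoint combines these variables into a workspace context that it posts to the HCFS API. A minimal Python sketch of that payload (field names mirror the `POST /contexts` call in `entrypoint.sh`; using `requests` directly is an assumption, chosen because the base image installs it alongside the HCFS SDK):

```python
import os
import time
from datetime import datetime, timezone

def build_workspace_context(agent_id: str, task_id: str) -> dict:
    """Build the workspace-context payload posted to HCFS /contexts."""
    workspace_path = f"/agents/{agent_id}/workspaces/{int(time.time())}"
    return {
        "path": workspace_path,
        "content": f"Agent workspace for container {os.uname().nodename}",
        "summary": f"Agent {agent_id} workspace - Task {task_id}",
        "metadata": {
            "agent_id": agent_id,
            "task_id": task_id,
            "container_id": os.uname().nodename,
            "created_at": datetime.now(timezone.utc).isoformat(),
            "workspace_type": "agent_container",
        },
    }

# Posting it uses the standard environment variables:
# import requests
# api = os.environ.get("HCFS_API_URL", "http://host.docker.internal:8000")
# requests.post(f"{api}/contexts", json=build_workspace_context(
#     os.environ["AGENT_ID"], os.environ["TASK_ID"]), timeout=30)
```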
### HCFS Configuration
Each container includes `/etc/hcfs/hcfs-agent.yaml` with:
- API endpoints and timeouts
- Workspace settings
- Artifact collection patterns
- Security configurations
- Logging preferences
## 💾 Workspace Management
### Automatic Features
1. **Workspace Initialization**: Creates HCFS context for agent workspace
2. **Continuous Sync**: Background daemon syncs workspace state every 30 seconds
3. **Artifact Collection**: Automatically stores important files:
- Log files (*.log)
- Documentation (*.md, README*)
- Configuration (*.json, *.yaml)
- Build outputs (build/*, output/*)
- Results (results/*)
4. **Graceful Shutdown**: Collects final artifacts when container stops
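
The collection rules above can be sketched as a small matcher. This is illustrative only: the patterns come from the list above, and plain `fnmatch` semantics are an assumption about how the sync daemon interprets them:

```python
from fnmatch import fnmatch

# Patterns from the list above (illustrative subset)
ARTIFACT_PATTERNS = [
    "*.log",                 # log files
    "*.md", "README*",       # documentation
    "*.json", "*.yaml",      # configuration
    "build/*", "output/*",   # build outputs
    "results/*",             # results
]

def is_artifact(relative_path: str) -> bool:
    """True if a workspace-relative path matches any collection pattern."""
    name = relative_path.rsplit("/", 1)[-1]
    return any(
        fnmatch(relative_path, pattern) or fnmatch(name, pattern)
        for pattern in ARTIFACT_PATTERNS
    )
```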
### Manual Commands
```bash
# Sync current workspace state
/opt/hcfs/hcfs-workspace.sh sync
# Collect and store artifacts
/opt/hcfs/hcfs-workspace.sh collect
# Finalize workspace (run on completion)
/opt/hcfs/hcfs-workspace.sh finalize
# Check workspace status
/opt/hcfs/hcfs-workspace.sh status
```
## 🔄 Integration with Bzzz Agents
### Updated Sandbox Creation
The Bzzz sandbox system now supports HCFS workspaces:
```go
// Create HCFS-enabled sandbox
sandbox, err := CreateSandboxWithHCFS(ctx, taskImage, agentConfig, agentID, taskID)
// Check if using HCFS
if sandbox.IsUsingHCFS() {
    workspace := sandbox.GetHCFSWorkspace()
    fmt.Printf("Using HCFS workspace: %s\n", workspace.HCFSPath)
}
```
### Configuration in Bzzz
Add HCFS configuration to your Bzzz agent config:
```yaml
hcfs:
enabled: true
api_url: "http://localhost:8000"
mount_path: "/tmp/hcfs-workspaces"
store_artifacts: true
idle_cleanup_interval: "15m"
max_idle_time: "1h"
```
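
The duration values (`"15m"`, `"1h"`) use Go-style duration strings, which fits Bzzz being a Go codebase. For external tooling that reads the same config, a minimal Python equivalent (a sketch supporting only single-unit values):

```python
_UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_duration(value: str) -> int:
    """Convert a simple Go-style duration ("30s", "15m", "1h") to seconds."""
    unit = value[-1]
    if unit not in _UNITS:
        raise ValueError(f"unsupported duration: {value!r}")
    return int(value[:-1]) * _UNITS[unit]
```

Go's own `time.ParseDuration` also accepts compound values like `"1h30m"`, which this sketch deliberately does not handle.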
## 📊 Monitoring and Debugging
### Service Health Checks
```bash
# Check HCFS API
curl http://localhost:8000/health
# Check RL Tuner
curl http://localhost:8001/health
# View container logs
docker-compose -f docker-compose.hcfs.yml logs -f hcfs-api
```
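
The containers' entrypoint waits for these endpoints with a bounded retry loop (30 attempts, 2 seconds apart). The same logic as a reusable Python helper (the injectable `check` callable is a convenience for testing, not part of any HCFS API):

```python
import time
from typing import Callable

def wait_for_api(check: Callable[[], bool], max_attempts: int = 30,
                 delay: float = 2.0) -> bool:
    """Poll `check` until it returns True or the attempts run out."""
    for attempt in range(1, max_attempts + 1):
        if check():
            return True
        if attempt < max_attempts:
            time.sleep(delay)
    return False

# Against a live stack this would wrap the health endpoint:
# import requests
# ok = wait_for_api(lambda: requests.get(
#     "http://localhost:8000/health", timeout=5).ok)
```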
### Workspace Status
```bash
# View workspace metadata
cat /home/agent/work/.hcfs-workspace
# Check sync daemon status
ps aux | grep hcfs-workspace
# View HCFS logs
tail -f /var/log/hcfs/workspace.log
```
## 🛠️ Development Workflows
### Python ML Development
```bash
# Start Python environment
docker exec -it agent-python-dev bash
# Create new project
cd /home/agent/work
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# Start Jupyter for data exploration
jupyter lab --ip=0.0.0.0 --port=8888
# Artifacts automatically collected:
# - *.ipynb notebooks
# - model files in models/
# - results in output/
```
### Node.js Web Development
```bash
# Start Node.js environment
docker exec -it agent-nodejs-dev bash
# Initialize project
cd /home/agent/work
cp package.json.template package.json
npm install
# Start development server
npm run dev
# Artifacts automatically collected:
# - package*.json
# - build output in dist/
# - logs in logs/
```
### Go Microservices
```bash
# Start Go environment
docker exec -it agent-go-dev bash
# Initialize project
cd /home/agent/work
cp go.mod.template go.mod
cp main.go.template main.go
go mod tidy
# Build and run
make build
make run
# Artifacts automatically collected:
# - go.mod, go.sum
# - binary in bin/
# - test results
```
## 🔒 Security Considerations
### Container Security
- Agents run as non-root `agent` user
- Limited sudo access only for FUSE mounts
- Network restrictions block sensitive ports
- Read-only access to system directories
### HCFS Security
- Context access controlled by agent roles
- Workspace isolation between agents
- Artifact encryption (optional)
- Audit logging of all operations
## 🔄 Backup and Recovery
### Workspace Persistence
Agent workspaces are stored in named Docker volumes:
- `python-workspace`: Python development files
- `nodejs-workspace`: Node.js development files
- `go-workspace`: Go development files
### HCFS Data
Core HCFS data is stored in:
- `hcfs-data`: Main context database
- `hcfs-rl-data`: RL Context Curator data
### Backup Script
```bash
# Backup all workspace data
docker run --rm -v python-workspace:/data -v /backup:/backup alpine \
tar czf /backup/python-workspace-$(date +%Y%m%d).tar.gz -C /data .
```
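
The same backup can be scripted across all three workspace volumes. A sketch that only builds the `docker run` command line shown above (volume names taken from the list in this section; running the commands is left commented out):

```python
import datetime
import subprocess  # used by the commented-out runner below

WORKSPACE_VOLUMES = ["python-workspace", "nodejs-workspace", "go-workspace"]

def backup_volume(volume: str, backup_dir: str = "/backup") -> list:
    """Build the `docker run` command that archives one named volume."""
    stamp = datetime.date.today().strftime("%Y%m%d")
    return [
        "docker", "run", "--rm",
        "-v", f"{volume}:/data",
        "-v", f"{backup_dir}:/backup",
        "alpine",
        "tar", "czf", f"/backup/{volume}-{stamp}.tar.gz", "-C", "/data", ".",
    ]

# for vol in WORKSPACE_VOLUMES:
#     subprocess.run(backup_volume(vol), check=True)
```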
## 🐛 Troubleshooting
### Common Issues
**HCFS API Not Available:**
```bash
# Check if HCFS container is running
docker ps | grep hcfs-api
# Check network connectivity
docker exec agent-python-dev curl -f http://hcfs-api:8000/health
```
**FUSE Mount Failures:**
```bash
# Check FUSE support
docker exec agent-python-dev ls -la /dev/fuse
# Check mount permissions
docker exec agent-python-dev mount | grep fuse
```
**Workspace Sync Issues:**
```bash
# Restart sync daemon
docker exec agent-python-dev pkill -f hcfs-workspace
docker exec agent-python-dev /opt/hcfs/hcfs-workspace.sh daemon &
# Manual sync
docker exec agent-python-dev /opt/hcfs/hcfs-workspace.sh sync
```
### Log Locations
- HCFS API: `docker logs hcfs-api`
- Agent containers: `docker logs agent-python-dev`
- Workspace sync: `/var/log/hcfs/workspace.log` (inside container)
## 📚 Additional Resources
- [HCFS Documentation](../HCFS/README.md)
- [Bzzz Agent Configuration](../README.md)
- [RL Context Curator Guide](../HCFS/integration_tests/README.md)
- [Docker Compose Reference](https://docs.docker.com/compose/)
## 🎯 Next Steps
1. **Deploy to Production**: Use Docker Swarm or Kubernetes
2. **Scale Horizontally**: Add more agent instances
3. **Custom Images**: Create domain-specific development environments
4. **Monitoring**: Add Prometheus/Grafana for metrics
5. **CI/CD Integration**: Automate testing and deployment

docker/build-hcfs-images.sh Executable file
#!/bin/bash
set -euo pipefail
# HCFS Docker Images Build Script
# Builds all HCFS-enabled development environment containers
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
REGISTRY="${DOCKER_REGISTRY:-registry.home.deepblack.cloud}"
NAMESPACE="${DOCKER_NAMESPACE:-tony}"
VERSION="${VERSION:-latest}"
BUILD_PARALLEL="${BUILD_PARALLEL:-false}"
# Logging functions
log() {
echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}
success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
# Function to build a single image
build_image() {
local image_name="$1"
local dockerfile_dir="$2"
local build_args="$3"
log "Building image: $image_name"
local full_image_name="$REGISTRY/$NAMESPACE/$image_name:$VERSION"
local build_cmd="docker build"
# Add build arguments if provided
if [ -n "$build_args" ]; then
build_cmd="$build_cmd $build_args"
fi
# Add tags
build_cmd="$build_cmd -t $image_name:$VERSION -t $image_name:latest"
build_cmd="$build_cmd -t $full_image_name"
# Add dockerfile directory
build_cmd="$build_cmd $dockerfile_dir"
if eval $build_cmd; then
success "Built image: $image_name"
return 0
else
error "Failed to build image: $image_name"
return 1
fi
}
# Function to prepare HCFS SDK files
prepare_hcfs_sdks() {
log "Preparing HCFS SDK files..."
local sdk_dir="$SCRIPT_DIR/sdks"
mkdir -p "$sdk_dir"
# Copy Python SDK
if [ -d "$PROJECT_ROOT/../HCFS/hcfs-python" ]; then
cp -r "$PROJECT_ROOT/../HCFS/hcfs-python" "$sdk_dir/hcfs-python-sdk"
success "Copied Python HCFS SDK"
else
warning "Python HCFS SDK not found, creating minimal version"
mkdir -p "$sdk_dir/hcfs-python-sdk"
cat > "$sdk_dir/hcfs-python-sdk/setup.py" << 'EOF'
from setuptools import setup, find_packages

setup(
    name="hcfs-sdk",
    version="1.0.0",
    packages=find_packages(),
    install_requires=["httpx", "pydantic"],
)
EOF
mkdir -p "$sdk_dir/hcfs-python-sdk/hcfs"
echo "# HCFS Python SDK Placeholder" > "$sdk_dir/hcfs-python-sdk/hcfs/__init__.py"
fi
# Create Node.js SDK
mkdir -p "$sdk_dir/hcfs-nodejs-sdk"
cat > "$sdk_dir/hcfs-nodejs-sdk/package.json" << 'EOF'
{
  "name": "@hcfs/sdk",
  "version": "1.0.0",
  "description": "HCFS Node.js SDK",
  "main": "index.js",
  "dependencies": {
    "axios": "^1.0.0"
  }
}
EOF
echo "module.exports = { HCFSClient: class HCFSClient {} };" > "$sdk_dir/hcfs-nodejs-sdk/index.js"
# Create Go SDK
mkdir -p "$sdk_dir/hcfs-go-sdk"
cat > "$sdk_dir/hcfs-go-sdk/go.mod" << 'EOF'
module github.com/hcfs/go-sdk

go 1.21

require (
    github.com/go-resty/resty/v2 v2.7.0
)
EOF
cat > "$sdk_dir/hcfs-go-sdk/client.go" << 'EOF'
package client

import "github.com/go-resty/resty/v2"

type HCFSClient struct {
    client  *resty.Client
    baseURL string
}

func NewHCFSClient(baseURL string) (*HCFSClient, error) {
    return &HCFSClient{
        client:  resty.New(),
        baseURL: baseURL,
    }, nil
}
EOF
success "HCFS SDKs prepared"
}
# Function to copy scripts
prepare_scripts() {
log "Preparing build scripts..."
# Copy scripts to each image directory
for image_dir in "$SCRIPT_DIR"/hcfs-*; do
if [ -d "$image_dir" ]; then
mkdir -p "$image_dir/scripts"
mkdir -p "$image_dir/config"
mkdir -p "$image_dir/hcfs-client"
# Copy common scripts
cp "$SCRIPT_DIR/hcfs-base/scripts/"* "$image_dir/scripts/" 2>/dev/null || true
cp "$SCRIPT_DIR/hcfs-base/config/"* "$image_dir/config/" 2>/dev/null || true
# Copy HCFS client
cp -r "$SCRIPT_DIR/sdks/hcfs-python-sdk/"* "$image_dir/hcfs-client/" 2>/dev/null || true
fi
done
success "Scripts prepared"
}
# Function to validate prerequisites
validate_prerequisites() {
log "Validating prerequisites..."
# Check if Docker is available
if ! command -v docker &> /dev/null; then
error "Docker is not installed or not in PATH"
exit 1
fi
# Check if Docker daemon is running
if ! docker info &> /dev/null; then
error "Docker daemon is not running"
exit 1
fi
# Check if required directories exist
if [ ! -d "$SCRIPT_DIR/hcfs-base" ]; then
error "Base image directory not found: $SCRIPT_DIR/hcfs-base"
exit 1
fi
success "Prerequisites validated"
}
# Function to build all images
build_all_images() {
log "Building HCFS development environment images..."
local images=(
"bzzz-hcfs-base:$SCRIPT_DIR/hcfs-base:"
"bzzz-hcfs-python:$SCRIPT_DIR/hcfs-python:"
"bzzz-hcfs-nodejs:$SCRIPT_DIR/hcfs-nodejs:"
"bzzz-hcfs-go:$SCRIPT_DIR/hcfs-go:"
)
local failed_builds=()
if [ "$BUILD_PARALLEL" = "true" ]; then
log "Building images in parallel..."
local pids=()
for image_spec in "${images[@]}"; do
IFS=':' read -r image_name dockerfile_dir build_args <<< "$image_spec"
(build_image "$image_name" "$dockerfile_dir" "$build_args") &
pids+=($!)
done
# Wait for all builds to complete
for pid in "${pids[@]}"; do
if ! wait $pid; then
failed_builds+=("PID:$pid")
fi
done
else
log "Building images sequentially..."
for image_spec in "${images[@]}"; do
IFS=':' read -r image_name dockerfile_dir build_args <<< "$image_spec"
if ! build_image "$image_name" "$dockerfile_dir" "$build_args"; then
failed_builds+=("$image_name")
fi
done
fi
# Report results
if [ ${#failed_builds[@]} -eq 0 ]; then
success "All images built successfully!"
else
error "Failed to build images: ${failed_builds[*]}"
return 1
fi
}
# Function to push images to registry
push_images() {
log "Pushing images to registry: $REGISTRY"
local images=(
"bzzz-hcfs-base"
"bzzz-hcfs-python"
"bzzz-hcfs-nodejs"
"bzzz-hcfs-go"
)
for image in "${images[@]}"; do
local full_name="$REGISTRY/$NAMESPACE/$image:$VERSION"
log "Pushing $full_name..."
if docker push "$full_name"; then
success "Pushed $full_name"
else
warning "Failed to push $full_name"
fi
done
}
# Function to run tests
test_images() {
log "Testing built images..."
local images=(
"bzzz-hcfs-base"
"bzzz-hcfs-python"
"bzzz-hcfs-nodejs"
"bzzz-hcfs-go"
)
for image in "${images[@]}"; do
log "Testing $image..."
# Basic smoke test
if docker run --rm "$image:$VERSION" /bin/echo "Image $image test successful"; then
success "Test passed: $image"
else
warning "Test failed: $image"
fi
done
}
# Function to clean up
cleanup() {
log "Cleaning up temporary files..."
# Remove copied SDK files
rm -rf "$SCRIPT_DIR/sdks"
# Clean up dangling images
docker image prune -f &> /dev/null || true
success "Cleanup completed"
}
# Main execution
main() {
local command="${1:-build}"
case $command in
"build")
validate_prerequisites
prepare_hcfs_sdks
prepare_scripts
build_all_images
;;
"push")
push_images
;;
"test")
test_images
;;
"all")
validate_prerequisites
prepare_hcfs_sdks
prepare_scripts
build_all_images
test_images
push_images
;;
"clean")
cleanup
;;
"help"|*)
echo "HCFS Docker Images Build Script"
echo ""
echo "Usage: $0 {build|push|test|all|clean|help}"
echo ""
echo "Commands:"
echo " build - Build all HCFS development images"
echo " push - Push images to registry"
echo " test - Run smoke tests on built images"
echo " all - Build, test, and push images"
echo " clean - Clean up temporary files"
echo " help - Show this help message"
echo ""
echo "Environment Variables:"
echo " DOCKER_REGISTRY - Docker registry URL (default: registry.home.deepblack.cloud)"
echo " DOCKER_NAMESPACE - Docker namespace (default: tony)"
echo " VERSION - Image version tag (default: latest)"
echo " BUILD_PARALLEL - Build images in parallel (default: false)"
exit 0
;;
esac
}
# Set up signal handlers for cleanup
trap cleanup EXIT INT TERM
# Execute main function
main "$@"

docker/docker-compose.hcfs.yml Normal file
# HCFS Development Ecosystem
# Complete Docker Compose setup for HCFS-enabled agent development
version: '3.8'

services:
  # HCFS Core API Service
  hcfs-api:
    image: hcfs:latest
    container_name: hcfs-api
    ports:
      - "8000:8000"
    environment:
      - HCFS_DATABASE_URL=sqlite:///data/hcfs.db
      - HCFS_HOST=0.0.0.0
      - HCFS_PORT=8000
      - HCFS_LOG_LEVEL=info
    volumes:
      - hcfs-data:/data
      - hcfs-logs:/logs
    networks:
      - hcfs-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # HCFS RL Context Curator
  hcfs-rl-tuner:
    image: hcfs:latest
    container_name: hcfs-rl-tuner
    ports:
      - "8001:8001"
    environment:
      - HCFS_API_URL=http://hcfs-api:8000
      - RL_TUNER_HOST=0.0.0.0
      - RL_TUNER_PORT=8001
    volumes:
      - hcfs-rl-data:/data
    networks:
      - hcfs-network
    depends_on:
      - hcfs-api
    restart: unless-stopped
    command: ["python", "-m", "hcfs.rl_curator.rl_tuner_service"]

  # Python Development Agent
  agent-python:
    build:
      context: ./hcfs-python
      dockerfile: Dockerfile
    container_name: agent-python-dev
    ports:
      - "8888:8888" # Jupyter
      - "8080:8080" # Development server
    environment:
      - AGENT_ID=python-dev-agent
      - TASK_ID=development-task
      - HCFS_API_URL=http://hcfs-api:8000
      - HCFS_ENABLED=true
      - GIT_USER_NAME=HCFS Agent
      - GIT_USER_EMAIL=agent@hcfs.local
    volumes:
      - python-workspace:/home/agent/work
      - python-cache:/home/agent/.cache
    networks:
      - hcfs-network
    depends_on:
      - hcfs-api
    stdin_open: true
    tty: true
    restart: unless-stopped

  # Node.js Development Agent
  agent-nodejs:
    build:
      context: ./hcfs-nodejs
      dockerfile: Dockerfile
    container_name: agent-nodejs-dev
    ports:
      - "3000:3000" # Node.js app
      - "9229:9229" # Node.js debugger
    environment:
      - AGENT_ID=nodejs-dev-agent
      - TASK_ID=development-task
      - HCFS_API_URL=http://hcfs-api:8000
      - HCFS_ENABLED=true
      - NODE_ENV=development
    volumes:
      - nodejs-workspace:/home/agent/work
      - nodejs-cache:/home/agent/.npm
    networks:
      - hcfs-network
    depends_on:
      - hcfs-api
    stdin_open: true
    tty: true
    restart: unless-stopped

  # Go Development Agent
  agent-go:
    build:
      context: ./hcfs-go
      dockerfile: Dockerfile
    container_name: agent-go-dev
    ports:
      - "8090:8080" # Go web server
      - "2345:2345" # Delve debugger
    environment:
      - AGENT_ID=go-dev-agent
      - TASK_ID=development-task
      - HCFS_API_URL=http://hcfs-api:8000
      - HCFS_ENABLED=true
      - CGO_ENABLED=1
    volumes:
      - go-workspace:/home/agent/work
      - go-cache:/home/agent/.cache
    networks:
      - hcfs-network
    depends_on:
      - hcfs-api
    stdin_open: true
    tty: true
    restart: unless-stopped

  # Generic Development Agent (base image)
  agent-generic:
    build:
      context: ./hcfs-base
      dockerfile: Dockerfile
    container_name: agent-generic-dev
    ports:
      - "8050:8080"
    environment:
      - AGENT_ID=generic-dev-agent
      - TASK_ID=development-task
      - HCFS_API_URL=http://hcfs-api:8000
      - HCFS_ENABLED=true
    volumes:
      - generic-workspace:/home/agent/work
    networks:
      - hcfs-network
    depends_on:
      - hcfs-api
    stdin_open: true
    tty: true
    restart: unless-stopped

  # HCFS Management Dashboard (optional)
  hcfs-dashboard:
    image: nginx:alpine
    container_name: hcfs-dashboard
    ports:
      - "8081:80" # host port 8080 is already used by agent-python
    volumes:
      - ./dashboard:/usr/share/nginx/html:ro
    networks:
      - hcfs-network
    depends_on:
      - hcfs-api
    restart: unless-stopped

  # Development Database (PostgreSQL for advanced features)
  postgres:
    image: postgres:15-alpine
    container_name: hcfs-postgres
    environment:
      - POSTGRES_DB=hcfs
      - POSTGRES_USER=hcfs
      - POSTGRES_PASSWORD=hcfs_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - hcfs-network
    restart: unless-stopped

  # Redis for caching and session management
  redis:
    image: redis:7-alpine
    container_name: hcfs-redis
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - hcfs-network
    restart: unless-stopped

  # MinIO for object storage (artifact storage)
  minio:
    image: minio/minio:latest
    container_name: hcfs-minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=minioadmin123
    volumes:
      - minio-data:/data
    networks:
      - hcfs-network
    command: server /data --console-address ":9001"
    restart: unless-stopped

networks:
  hcfs-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16

volumes:
  # HCFS Core Data
  hcfs-data:
    driver: local
  hcfs-logs:
    driver: local
  hcfs-rl-data:
    driver: local
  # Agent Workspaces (persistent across container restarts)
  python-workspace:
    driver: local
  python-cache:
    driver: local
  nodejs-workspace:
    driver: local
  nodejs-cache:
    driver: local
  go-workspace:
    driver: local
  go-cache:
    driver: local
  generic-workspace:
    driver: local
  # Infrastructure Data
  postgres-data:
    driver: local
  redis-data:
    driver: local
  minio-data:
    driver: local

docker/hcfs-base/Dockerfile Normal file
# HCFS Base Image - Production-ready environment with HCFS integration
FROM ubuntu:22.04
LABEL maintainer="anthony@deepblack.cloud"
LABEL description="HCFS-integrated base image for AI agent development environments"
LABEL version="1.0.0"
# Prevent interactive prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive
ENV TERM=xterm-256color
# Set up standard environment
ENV HCFS_WORKSPACE_ROOT=/workspace
ENV HCFS_MOUNT_POINT=/mnt/hcfs
ENV HCFS_API_URL=http://host.docker.internal:8000
ENV HCFS_ENABLED=true
ENV PYTHONPATH=/usr/local/lib/python3.10/site-packages:$PYTHONPATH
# Create agent user for sandboxed execution
RUN groupadd -r agent && useradd -r -g agent -d /home/agent -s /bin/bash agent
# Install system dependencies
RUN apt-get update && apt-get install -y \
# Core system tools
curl \
wget \
git \
make \
build-essential \
software-properties-common \
gnupg2 \
lsb-release \
ca-certificates \
apt-transport-https \
# Development essentials
vim \
nano \
tree \
jq \
zip \
unzip \
rsync \
tmux \
screen \
htop \
# Network tools
net-tools \
iputils-ping \
dnsutils \
# Python and pip
python3 \
python3-pip \
python3-dev \
python3-venv \
# FUSE for HCFS mounting
fuse3 \
libfuse3-dev \
# Additional utilities
sqlite3 \
openssh-client \
# sudo is required: the sudoers rule below and the entrypoint's `sudo -u agent` depend on it
sudo \
&& rm -rf /var/lib/apt/lists/*
# Set up Python symlinks
RUN ln -sf /usr/bin/python3 /usr/bin/python && \
ln -sf /usr/bin/pip3 /usr/bin/pip
# Install HCFS Python SDK and dependencies
RUN pip install --no-cache-dir \
httpx \
websockets \
fastapi \
uvicorn \
pydantic \
python-multipart \
aiofiles \
sentence-transformers \
numpy \
scipy \
scikit-learn \
requests \
pyyaml \
toml \
click
# Create directory structure
RUN mkdir -p \
/workspace \
/mnt/hcfs \
/home/agent \
/home/agent/work \
/home/agent/.local \
/home/agent/.cache \
/opt/hcfs \
/etc/hcfs \
/var/log/hcfs
# Set up HCFS integration scripts
COPY scripts/hcfs-init.sh /opt/hcfs/
COPY scripts/hcfs-mount.sh /opt/hcfs/
COPY scripts/hcfs-workspace.sh /opt/hcfs/
COPY scripts/entrypoint.sh /opt/hcfs/
COPY config/hcfs-agent.yaml /etc/hcfs/
# Make scripts executable
RUN chmod +x /opt/hcfs/*.sh
# Install HCFS client library
COPY hcfs-client /opt/hcfs/client
RUN cd /opt/hcfs/client && pip install -e .
# Set up agent workspace
RUN chown -R agent:agent /home/agent /workspace /mnt/hcfs
RUN chmod 755 /home/agent /workspace
# Configure sudo for agent user (needed for FUSE mounts)
RUN echo "agent ALL=(ALL) NOPASSWD: /bin/mount, /bin/umount, /usr/bin/fusermount3" >> /etc/sudoers
# Set default working directory
WORKDIR /home/agent/work
# Environment for development
ENV HOME=/home/agent
ENV USER=agent
ENV SHELL=/bin/bash
# Expose standard ports for development services
EXPOSE 8080 8000 3000 5000
# Set up entrypoint that initializes HCFS workspace
ENTRYPOINT ["/opt/hcfs/entrypoint.sh"]
CMD ["/bin/bash"]

docker/hcfs-base/config/hcfs-agent.yaml Normal file
# HCFS Agent Configuration
# This configuration is used by agents running in HCFS-enabled containers

hcfs:
  # HCFS API Configuration
  api:
    url: "http://host.docker.internal:8000"
    timeout: 30s
    retry_count: 3

  # Workspace Configuration
  workspace:
    root: "/home/agent/work"
    mount_point: "/mnt/hcfs"
    auto_sync: true
    sync_interval: 30s

  # Artifact Collection
  artifacts:
    enabled: true
    patterns:
      - "*.log"
      - "*.md"
      - "*.txt"
      - "*.json"
      - "*.yaml"
      - "output/*"
      - "build/*.json"
      - "results/*"
    max_size: "10MB"
    compress: false

  # Cleanup Configuration
  cleanup:
    idle_timeout: "1h"
    auto_cleanup: true
    preserve_artifacts: true

# Agent Capabilities
agent:
  capabilities:
    - "file_operations"
    - "command_execution"
    - "artifact_collection"
    - "context_sharing"
    - "workspace_management"

  # Resource Limits
  limits:
    max_memory: "2GB"
    max_cpu: "2.0"
    max_disk: "10GB"
    max_files: 10000

# Development Tools
tools:
  python:
    enabled: true
    version: "3.10"
    venv: true
    packages:
      - "requests"
      - "pyyaml"
      - "click"
      - "rich"
  git:
    enabled: true
    auto_config: true
  make:
    enabled: true
  docker:
    enabled: false # Disabled by default for security

# Security Configuration
security:
  user: "agent"
  home: "/home/agent"
  shell: "/bin/bash"

  # Network restrictions
  network:
    allow_outbound: true
    blocked_ports:
      - 22   # SSH
      - 3389 # RDP
      - 5432 # PostgreSQL
      - 3306 # MySQL

  # File system restrictions
  filesystem:
    read_only_paths:
      - "/etc"
      - "/usr"
      - "/boot"
    writable_paths:
      - "/home/agent"
      - "/tmp"
      - "/workspace"
      - "/mnt/hcfs"

# Logging Configuration
logging:
  level: "info"
  format: "json"
  destinations:
    - "/var/log/hcfs/agent.log"
    - "stdout"

  # Log categories
  categories:
    workspace: "debug"
    artifacts: "info"
    hcfs_api: "info"
    security: "warn"

# Environment Variables
environment:
  PYTHONPATH: "/usr/local/lib/python3.10/site-packages"
  PATH: "/home/agent/.local/bin:/usr/local/bin:/usr/bin:/bin"
  TERM: "xterm-256color"
  EDITOR: "vim"

# Container Metadata
metadata:
  version: "1.0.0"
  created_by: "bzzz-hcfs-integration"
  description: "HCFS-enabled agent container for distributed AI development"

  # Tags for categorization
  tags:
    - "ai-agent"
    - "hcfs-enabled"
    - "development"
    - "sandboxed"

docker/hcfs-base/scripts/entrypoint.sh Normal file
#!/bin/bash
set -euo pipefail
# HCFS Agent Container Entrypoint
echo "🚀 Starting HCFS-enabled agent container..."
# Environment validation
AGENT_ID="${AGENT_ID:-agent-$(hostname)}"
TASK_ID="${TASK_ID:-task-$(date +%s)}"
HCFS_API_URL="${HCFS_API_URL:-http://host.docker.internal:8000}"
HCFS_ENABLED="${HCFS_ENABLED:-true}"
echo "📋 Container Configuration:"
echo " Agent ID: $AGENT_ID"
echo " Task ID: $TASK_ID"
echo " HCFS API: $HCFS_API_URL"
echo " HCFS Enabled: $HCFS_ENABLED"
# Function to wait for HCFS API
wait_for_hcfs() {
local max_attempts=30
local attempt=0
echo "⏳ Waiting for HCFS API to be available..."
while [ $attempt -lt $max_attempts ]; do
if curl -sf "$HCFS_API_URL/health" > /dev/null 2>&1; then
echo "✅ HCFS API is available"
return 0
fi
echo " Attempt $((attempt + 1))/$max_attempts - HCFS API not ready"
sleep 2
attempt=$((attempt + 1))
done
echo "❌ HCFS API failed to become available after $max_attempts attempts"
return 1
}
# Function to initialize HCFS workspace
init_hcfs_workspace() {
echo "🔧 Initializing HCFS workspace..."
# Create workspace context in HCFS
local workspace_path="/agents/$AGENT_ID/workspaces/$(date +%s)"
local context_data=$(cat <<EOF
{
  "path": "$workspace_path",
  "content": "Agent workspace for container $(hostname)",
  "summary": "Agent $AGENT_ID workspace - Task $TASK_ID",
  "metadata": {
    "agent_id": "$AGENT_ID",
    "task_id": "$TASK_ID",
    "container_id": "$(hostname)",
    "created_at": "$(date -Iseconds)",
    "workspace_type": "agent_container"
  }
}
EOF
)
# Create context via HCFS API
# -f makes HTTP-level errors return an empty response, so the fallback path below actually triggers
local response=$(curl -sf -X POST \
-H "Content-Type: application/json" \
-d "$context_data" \
"$HCFS_API_URL/contexts" || echo "")
if [ -n "$response" ]; then
echo "✅ HCFS workspace context created: $workspace_path"
echo "$workspace_path" > /tmp/hcfs-workspace-path
return 0
else
echo "⚠️ Failed to create HCFS workspace context, using local storage"
return 1
fi
}
# Function to mount HCFS
mount_hcfs() {
local workspace_path="$1"
echo "🔗 Mounting HCFS workspace: $workspace_path"
# For now, create a symbolic structure since we don't have full FUSE implementation
# In production, this would be: fusermount3 -o allow_other "$workspace_path" /mnt/hcfs
mkdir -p /mnt/hcfs
mkdir -p /home/agent/work/{src,build,output,logs}
# Create workspace metadata
cat > /home/agent/work/.hcfs-workspace << EOF
HCFS_WORKSPACE_PATH=$workspace_path
HCFS_API_URL=$HCFS_API_URL
AGENT_ID=$AGENT_ID
TASK_ID=$TASK_ID
CREATED_AT=$(date -Iseconds)
EOF
# Set ownership
chown -R agent:agent /home/agent/work /mnt/hcfs
echo "✅ HCFS workspace mounted and configured"
}
# Function to setup development environment
setup_dev_environment() {
echo "🛠️ Setting up development environment..."
# Create standard development directories
sudo -u agent mkdir -p /home/agent/{.local/bin,.config,.cache,work/{src,tests,docs,scripts}}
# Set up git configuration if provided
if [ -n "${GIT_USER_NAME:-}" ] && [ -n "${GIT_USER_EMAIL:-}" ]; then
sudo -u agent git config --global user.name "$GIT_USER_NAME"
sudo -u agent git config --global user.email "$GIT_USER_EMAIL"
echo "✅ Git configuration set: $GIT_USER_NAME <$GIT_USER_EMAIL>"
fi
# Set up Python virtual environment
if [ "${SETUP_PYTHON_VENV:-true}" = "true" ]; then
sudo -u agent python3 -m venv /home/agent/.venv
echo "✅ Python virtual environment created"
fi
echo "✅ Development environment ready"
}
# Function to start background services
start_background_services() {
echo "🔄 Starting background services..."
# Start HCFS workspace sync daemon (if needed)
if [ "$HCFS_ENABLED" = "true" ] && [ -f /tmp/hcfs-workspace-path ]; then
/opt/hcfs/hcfs-workspace.sh daemon &
echo "✅ HCFS workspace sync daemon started"
fi
}
# Function to cleanup on exit
cleanup() {
echo "🧹 Container cleanup initiated..."
if [ "$HCFS_ENABLED" = "true" ] && [ -f /tmp/hcfs-workspace-path ]; then
echo "💾 Storing final workspace state to HCFS..."
/opt/hcfs/hcfs-workspace.sh finalize
fi
echo "✅ Cleanup completed"
}
# Set up signal handlers for graceful shutdown
trap cleanup EXIT INT TERM
# Main initialization sequence
main() {
echo "🏁 Starting HCFS Agent Container initialization..."
# Wait for HCFS if enabled
if [ "$HCFS_ENABLED" = "true" ]; then
if wait_for_hcfs; then
if init_hcfs_workspace; then
local workspace_path=$(cat /tmp/hcfs-workspace-path)
mount_hcfs "$workspace_path"
else
echo "⚠️ HCFS workspace initialization failed, continuing with local storage"
fi
else
echo "⚠️ HCFS API unavailable, continuing with local storage"
fi
else
echo " HCFS disabled, using local storage only"
fi
# Set up development environment
setup_dev_environment
# Start background services
start_background_services
echo "🎉 HCFS Agent Container initialization complete!"
echo "📁 Workspace: /home/agent/work"
echo "🔧 Agent: $AGENT_ID"
echo "📋 Task: $TASK_ID"
# Execute the provided command or start interactive shell
if [ $# -eq 0 ]; then
echo "🔧 Starting interactive shell..."
exec sudo -u agent -i /bin/bash
else
echo "🚀 Executing command: $*"
exec sudo -u agent "$@"
fi
}
# Execute main function
main "$@"

docker/hcfs-base/scripts/hcfs-workspace.sh Normal file
#!/bin/bash
set -euo pipefail
# HCFS Workspace Management Script
# Handles workspace synchronization and artifact collection
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
WORKSPACE_DIR="/home/agent/work"
HCFS_CONFIG="/home/agent/work/.hcfs-workspace"
# Load workspace configuration
if [ -f "$HCFS_CONFIG" ]; then
source "$HCFS_CONFIG"
else
echo "⚠️ No HCFS workspace configuration found"
exit 1
fi
# Logging function
log() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a /var/log/hcfs/workspace.log
}
# Function to store artifact in HCFS
store_artifact() {
local artifact_path="$1"
local artifact_name="$2"
local content="$3"
local hcfs_artifact_path="${HCFS_WORKSPACE_PATH}/artifacts/${artifact_name}"
local artifact_data=$(cat <<EOF
{
"path": "$hcfs_artifact_path",
"content": "$content",
"summary": "Artifact: $artifact_name",
"metadata": {
"agent_id": "$AGENT_ID",
"task_id": "$TASK_ID",
"artifact_name": "$artifact_name",
"artifact_type": "workspace_output",
"file_path": "$artifact_path",
"created_at": "$(date -Iseconds)"
}
}
EOF
)
local response=$(curl -sf -X POST \
-H "Content-Type: application/json" \
-d "$artifact_data" \
"$HCFS_API_URL/contexts" || echo "")
if [ -n "$response" ]; then
log "✅ Stored artifact: $artifact_name -> $hcfs_artifact_path"
return 0
else
log "❌ Failed to store artifact: $artifact_name"
return 1
fi
}
# Function to collect and store workspace artifacts
collect_artifacts() {
log "📦 Collecting workspace artifacts..."
local artifact_count=0
# Common artifact patterns (patterns containing "/" are matched against the full path)
local artifact_patterns=(
"*.log"
"*.md"
"*.txt"
"*.json"
"*.yaml"
"*.yml"
"output/*"
"build/*.json"
"build/*.xml"
"results/*"
"README*"
"CHANGELOG*"
"requirements*.txt"
"package*.json"
"Cargo.toml"
"go.mod"
"pom.xml"
)
local match=()
for pattern in "${artifact_patterns[@]}"; do
# find -name never matches a pattern containing "/", so use -path for those
if [[ "$pattern" == */* ]]; then
match=(-path "*/${pattern}")
else
match=(-name "$pattern")
fi
while IFS= read -r -d '' file; do
if [ -f "$file" ] && [ -s "$file" ]; then
local relative_path="${file#$WORKSPACE_DIR/}"
local content=$(base64 -w 0 "$file" 2>/dev/null || echo "")
if [ -n "$content" ] && [ ${#content} -lt 1000000 ]; then # limit encoded content to ~1MB
if store_artifact "$relative_path" "$relative_path" "$content"; then
artifact_count=$((artifact_count + 1))
fi
fi
fi
done < <(find "$WORKSPACE_DIR" "${match[@]}" -type f -print0 2>/dev/null || true)
done
log "✅ Collected $artifact_count artifacts"
}
# Function to update workspace status in HCFS
update_workspace_status() {
local status="$1"
local message="$2"
local status_data=$(cat <<EOF
{
"path": "${HCFS_WORKSPACE_PATH}/status",
"content": "$message",
"summary": "Workspace status: $status",
"metadata": {
"agent_id": "$AGENT_ID",
"task_id": "$TASK_ID",
"status": "$status",
"timestamp": "$(date -Iseconds)",
"hostname": "$(hostname)",
"workspace_dir": "$WORKSPACE_DIR"
}
}
EOF
)
curl -s -X POST \
-H "Content-Type: application/json" \
-d "$status_data" \
"$HCFS_API_URL/contexts" > /dev/null || true
log "📊 Updated workspace status: $status"
}
# Function to sync workspace changes
sync_workspace() {
log "🔄 Syncing workspace changes..."
# Create workspace summary
local file_count=$(find "$WORKSPACE_DIR" -type f 2>/dev/null | wc -l)
local dir_count=$(find "$WORKSPACE_DIR" -type d 2>/dev/null | wc -l)
local total_size=$(du -sb "$WORKSPACE_DIR" 2>/dev/null | cut -f1 || echo "0")
local summary=$(cat <<EOF
Workspace Summary ($(date -Iseconds)):
- Files: $file_count
- Directories: $dir_count
- Total Size: $total_size bytes
- Agent: $AGENT_ID
- Task: $TASK_ID
- Container: $(hostname)
Recent Activity:
$(ls -la "$WORKSPACE_DIR" 2>/dev/null | head -10 || echo "No files")
EOF
)
update_workspace_status "active" "$summary"
}
# Function to finalize workspace
finalize_workspace() {
log "🏁 Finalizing workspace..."
# Collect all artifacts
collect_artifacts
# Create final summary
local completion_summary=$(cat <<EOF
Workspace Completion Summary:
- Agent ID: $AGENT_ID
- Task ID: $TASK_ID
- Container: $(hostname)
- Started: $CREATED_AT
- Completed: $(date -Iseconds)
- Duration: $(($(date +%s) - $(date -d "$CREATED_AT" +%s 2>/dev/null || date +%s))) seconds
Final Workspace Contents:
$(find "$WORKSPACE_DIR" -type f 2>/dev/null | head -20 || echo "No files")
Artifacts Collected:
$(ls "$WORKSPACE_DIR"/{output,build,logs,results}/* 2>/dev/null | head -10 || echo "No artifacts")
EOF
)
update_workspace_status "completed" "$completion_summary"
log "✅ Workspace finalized"
}
# Daemon mode for continuous sync
daemon_mode() {
log "🔄 Starting HCFS workspace sync daemon..."
local sync_interval=30 # seconds
local last_sync=0
while true; do
local current_time=$(date +%s)
if [ $((current_time - last_sync)) -ge $sync_interval ]; then
sync_workspace
last_sync=$current_time
fi
sleep 5
done
}
# Main command dispatcher
case "${1:-help}" in
"sync")
sync_workspace
;;
"collect")
collect_artifacts
;;
"finalize")
finalize_workspace
;;
"daemon")
daemon_mode
;;
"status")
update_workspace_status "active" "Status check at $(date -Iseconds)"
;;
"help"|*)
echo "HCFS Workspace Management Script"
echo ""
echo "Usage: $0 {sync|collect|finalize|daemon|status|help}"
echo ""
echo "Commands:"
echo " sync - Sync current workspace state to HCFS"
echo " collect - Collect and store artifacts in HCFS"
echo " finalize - Finalize workspace and store all artifacts"
echo " daemon - Run continuous sync daemon"
echo " status - Update workspace status in HCFS"
echo " help - Show this help message"
;;
esac
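Note that `store_artifact` interpolates its arguments straight into a heredoc, which is safe for base64 payloads but would break on raw quotes or newlines. A hedged alternative sketch delegates the escaping to python3 (the `json_payload` helper and example paths are illustrative, not part of the script above):

```shell
#!/usr/bin/env bash
# Build an HCFS /contexts payload with guaranteed-valid JSON escaping
# instead of pasting raw strings into a heredoc template.
json_payload() {
    local path="$1" content="$2" summary="$3"
    python3 - "$path" "$content" "$summary" << 'PY'
import json, sys
path, content, summary = sys.argv[1:4]
print(json.dumps({"path": path, "content": content, "summary": summary}))
PY
}

payload=$(json_payload "/agents/demo/artifacts/notes.txt" $'has "quotes"\nand newlines' "demo artifact")
echo "$payload"
```

The resulting string can be passed to `curl -d` exactly as the script does today, with no risk of malformed JSON.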

docker/hcfs-go/Dockerfile

@@ -0,0 +1,141 @@
# HCFS Go Development Environment
FROM bzzz-hcfs-base:latest
LABEL maintainer="anthony@deepblack.cloud"
LABEL description="HCFS Go development environment with modern Go tools"
LABEL language="go"
LABEL version="1.0.0"
# Install Go
ENV GO_VERSION=1.21.3
RUN wget -O go.tar.gz "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" && \
tar -C /usr/local -xzf go.tar.gz && \
rm go.tar.gz
# Set up Go environment
ENV GOROOT=/usr/local/go
ENV GOPATH=/home/agent/go
ENV GOCACHE=/home/agent/.cache/go-build
ENV GOMODCACHE=/home/agent/.cache/go-mod
ENV PATH=$GOROOT/bin:$GOPATH/bin:$PATH
# Create Go workspace (use bash explicitly: brace expansion is not available in dash, the default /bin/sh)
RUN sudo -u agent bash -c 'mkdir -p /home/agent/go/{bin,src,pkg}' && \
sudo -u agent bash -c 'mkdir -p /home/agent/work/{cmd,internal,pkg,api,web,scripts,docs,tests}'
# Install Go development tools
RUN sudo -u agent bash -c 'go install golang.org/x/tools/gopls@latest' && \
sudo -u agent bash -c 'go install golang.org/x/tools/cmd/goimports@latest' && \
sudo -u agent bash -c 'go install honnef.co/go/tools/cmd/staticcheck@latest' && \
sudo -u agent bash -c 'go install github.com/goreleaser/goreleaser@latest' && \
sudo -u agent bash -c 'go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest' && \
sudo -u agent bash -c 'go install github.com/go-delve/delve/cmd/dlv@latest' && \
sudo -u agent bash -c 'go install github.com/swaggo/swag/cmd/swag@latest' && \
sudo -u agent bash -c 'go install github.com/air-verse/air@latest'
# Install popular Go frameworks and libraries
RUN sudo -u agent bash -c 'cd /tmp && go mod init temp && \
go get github.com/gin-gonic/gin@latest && \
go get github.com/gorilla/mux@latest && \
go get github.com/labstack/echo/v4@latest && \
go get github.com/gofiber/fiber/v2@latest && \
go get gorm.io/gorm@latest && \
go get github.com/stretchr/testify@latest && \
go get github.com/spf13/cobra@latest && \
go get github.com/spf13/viper@latest'
# Install HCFS Go SDK
COPY hcfs-go-sdk /opt/hcfs/go-sdk
RUN cd /opt/hcfs/go-sdk && sudo -u agent go mod tidy
# Create Go project template
RUN sudo -u agent bash -c 'cat > /home/agent/work/go.mod.template << EOF
module hcfs-agent-project
go 1.21
require (
github.com/hcfs/go-sdk v0.1.0
github.com/gin-gonic/gin v1.9.1
github.com/spf13/cobra v1.7.0
github.com/spf13/viper v1.16.0
)
replace github.com/hcfs/go-sdk => /opt/hcfs/go-sdk
EOF'
RUN sudo -u agent bash -c 'cat > /home/agent/work/main.go.template << EOF
package main

import (
	"fmt"
	"log"

	"github.com/hcfs/go-sdk/client"
)

func main() {
	// Initialize HCFS client
	hcfsClient, err := client.NewHCFSClient("http://host.docker.internal:8000")
	if err != nil {
		log.Fatal("Failed to create HCFS client:", err)
	}
	_ = hcfsClient // keep the client referenced until agent logic uses it

	fmt.Println("HCFS Go agent starting...")
	// Your agent code here
}
EOF'
# Create Makefile template (quoted "EOF" keeps $(...) literal; recipe lines must start with a tab)
RUN sudo -u agent bash -c 'cat > /home/agent/work/Makefile.template << "EOF"
.PHONY: build run test test-coverage clean lint fmt deps

BINARY_NAME=agent
MAIN_PATH=./cmd/main.go

build:
	go build -o bin/$(BINARY_NAME) $(MAIN_PATH)

run:
	go run $(MAIN_PATH)

test:
	go test -v ./...

test-coverage:
	go test -v -coverprofile=coverage.out ./...
	go tool cover -html=coverage.out

clean:
	go clean
	rm -f bin/$(BINARY_NAME)
	rm -f coverage.out

lint:
	golangci-lint run

fmt:
	go fmt ./...
	goimports -w .

deps:
	go mod tidy
	go mod download

.DEFAULT_GOAL := build
EOF'
# Go-specific HCFS integration script
COPY scripts/go-hcfs-init.go /opt/hcfs/scripts/
RUN chmod +x /opt/hcfs/scripts/go-hcfs-init.go
# Expose common Go development ports
EXPOSE 8080 8000 9000 2345
# Add Go-specific entrypoint
COPY scripts/go-entrypoint.sh /opt/hcfs/
RUN chmod +x /opt/hcfs/go-entrypoint.sh
ENTRYPOINT ["/opt/hcfs/go-entrypoint.sh"]
CMD ["go", "version"]
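The template heredocs in this Dockerfile are sensitive to delimiter quoting: with an unquoted `EOF` the shell expands `$VAR` and `$(...)` inside the body, which would mangle make syntax such as `$(BINARY_NAME)`. A quick illustration:

```shell
#!/usr/bin/env bash
# Unquoted vs quoted heredoc delimiters: only the quoted form keeps
# template text such as $BINARY_NAME literal.
BINARY_NAME=expanded

unquoted=$(cat << EOF
name: $BINARY_NAME
EOF
)

quoted=$(cat << "EOF"
name: $BINARY_NAME
EOF
)

echo "$unquoted"   # → name: expanded
echo "$quoted"     # → name: $BINARY_NAME
```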


@@ -0,0 +1,112 @@
# HCFS Node.js Development Environment
FROM bzzz-hcfs-base:latest
LABEL maintainer="anthony@deepblack.cloud"
LABEL description="HCFS Node.js development environment with modern JS/TS tools"
LABEL language="javascript"
LABEL version="1.0.0"
# Install Node.js and npm
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
apt-get install -y nodejs
# Install Yarn package manager
RUN npm install -g yarn
# Install global development tools
RUN npm install -g \
# TypeScript ecosystem
typescript \
ts-node \
@types/node \
# Build tools
webpack \
webpack-cli \
rollup \
vite \
# Testing frameworks
jest \
mocha \
cypress \
# Code quality
eslint \
prettier \
@typescript-eslint/parser \
@typescript-eslint/eslint-plugin \
# Development servers
nodemon \
concurrently \
# Package management
npm-check-updates \
# Documentation
jsdoc \
typedoc \
# CLI tools
commander \
inquirer \
chalk \
# Process management
pm2 \
forever
# Create Node.js workspace structure
RUN sudo -u agent bash -c 'mkdir -p /home/agent/work/{src,tests,docs,public,build,dist}'
# Set up Node.js environment
ENV NODE_ENV=development
ENV NPM_CONFIG_PREFIX=/home/agent/.npm-global
ENV PATH=/home/agent/.npm-global/bin:$PATH
# Create npm configuration
RUN sudo -u agent mkdir -p /home/agent/.npm-global && \
sudo -u agent npm config set prefix '/home/agent/.npm-global'
# Install HCFS Node.js SDK
COPY hcfs-nodejs-sdk /opt/hcfs/nodejs-sdk
RUN cd /opt/hcfs/nodejs-sdk && npm install && npm link
# Create package.json template for new projects
RUN sudo -u agent bash -c 'cat > /home/agent/work/package.json.template << EOF
{
"name": "hcfs-agent-project",
"version": "1.0.0",
"description": "HCFS-enabled Node.js project",
"main": "src/index.js",
"scripts": {
"start": "node src/index.js",
"dev": "nodemon src/index.js",
"test": "jest",
"build": "webpack --mode production",
"lint": "eslint src/",
"format": "prettier --write src/"
},
"dependencies": {
"@hcfs/sdk": "file:/opt/hcfs/nodejs-sdk",
"express": "^4.18.0",
"axios": "^1.0.0"
},
"devDependencies": {
"nodemon": "^3.0.0",
"jest": "^29.0.0",
"eslint": "^8.0.0",
"prettier": "^3.0.0"
},
"engines": {
"node": ">=18.0.0"
}
}
EOF'
# Node.js-specific HCFS integration script
COPY scripts/nodejs-hcfs-init.js /opt/hcfs/scripts/
RUN chmod +x /opt/hcfs/scripts/nodejs-hcfs-init.js
# Expose common Node.js development ports
EXPOSE 3000 8080 8000 9229
# Add Node.js-specific entrypoint
COPY scripts/nodejs-entrypoint.sh /opt/hcfs/
RUN chmod +x /opt/hcfs/nodejs-entrypoint.sh
ENTRYPOINT ["/opt/hcfs/nodejs-entrypoint.sh"]
CMD ["node"]
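The `NPM_CONFIG_PREFIX` and `PATH` settings above route global npm installs into the agent's home directory and make those binaries win the PATH lookup. The mechanism can be sketched in isolation (the directory and tool name are illustrative):

```shell
#!/usr/bin/env bash
# Simulate a per-user global prefix: binaries installed under
# $prefix/bin shadow system-wide commands once the directory is
# placed first on PATH.
prefix=$(mktemp -d)
mkdir -p "$prefix/bin"
printf '#!/bin/sh\necho agent-local\n' > "$prefix/bin/demo-tool"
chmod +x "$prefix/bin/demo-tool"

PATH="$prefix/bin:$PATH"
demo-tool    # → agent-local
```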


@@ -0,0 +1,139 @@
# HCFS Python Development Environment
FROM bzzz-hcfs-base:latest
LABEL maintainer="anthony@deepblack.cloud"
LABEL description="HCFS Python development environment with ML/AI tools"
LABEL language="python"
LABEL version="1.0.0"
# Install Python development tools
RUN apt-get update && apt-get install -y \
# Python build dependencies
python3-dev \
python3-wheel \
python3-setuptools \
# Data science libraries dependencies
libhdf5-dev \
libnetcdf-dev \
libopenblas-dev \
liblapack-dev \
gfortran \
# ML/AI library dependencies
libgraphviz-dev \
graphviz \
# Image processing
libjpeg-dev \
libpng-dev \
libtiff-dev \
&& rm -rf /var/lib/apt/lists/*
# Install comprehensive Python package ecosystem
RUN pip install --no-cache-dir \
# Core development
ipython \
jupyter \
jupyterlab \
notebook \
# Web frameworks
flask \
fastapi \
django \
starlette \
# Data science and ML
numpy \
pandas \
scipy \
scikit-learn \
matplotlib \
seaborn \
plotly \
# Deep learning
torch \
torchvision \
transformers \
# NLP
nltk \
spacy \
sentence-transformers \
# API and HTTP
requests \
httpx \
aiohttp \
# Database
sqlalchemy \
psycopg2-binary \
# Configuration and serialization
pyyaml \
toml \
# CLI tools
click \
typer \
rich \
# Testing
pytest \
pytest-asyncio \
pytest-cov \
# Code quality
black \
flake8 \
mypy \
pylint \
# Documentation
sphinx \
mkdocs \
# Async programming
aiofiles \
# Development utilities
python-dotenv \
tqdm \
loguru
# Install HCFS Python SDK
COPY hcfs-python-sdk /opt/hcfs/python-sdk
RUN cd /opt/hcfs/python-sdk && pip install -e .
# Create development workspace structure
RUN sudo -u agent bash -c 'mkdir -p /home/agent/work/{src,tests,docs,notebooks,data,models,scripts}'
# Set up Python-specific environment
ENV PYTHONPATH=/home/agent/work/src:/opt/hcfs/python-sdk:$PYTHONPATH
ENV JUPYTER_CONFIG_DIR=/home/agent/.jupyter
ENV JUPYTER_DATA_DIR=/home/agent/.local/share/jupyter
# Create Jupyter configuration
RUN sudo -u agent mkdir -p /home/agent/.jupyter && \
sudo -u agent bash -c 'cat > /home/agent/.jupyter/jupyter_notebook_config.py << EOF
c.NotebookApp.ip = "0.0.0.0"
c.NotebookApp.port = 8888
c.NotebookApp.open_browser = False
c.NotebookApp.token = ""
c.NotebookApp.password = ""
c.NotebookApp.notebook_dir = "/home/agent/work"
c.NotebookApp.allow_root = False
EOF'
# Python-specific HCFS integration script
COPY scripts/python-hcfs-init.py /opt/hcfs/scripts/
RUN chmod +x /opt/hcfs/scripts/python-hcfs-init.py
# Expose common Python development ports
EXPOSE 8888 8000 5000 8080
# Set Python as the default environment
ENV SHELL=/bin/bash
ENV PYTHON_ENV=development
# Add Python-specific entrypoint
COPY scripts/python-entrypoint.sh /opt/hcfs/
RUN chmod +x /opt/hcfs/python-entrypoint.sh
ENTRYPOINT ["/opt/hcfs/python-entrypoint.sh"]
CMD ["python"]
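Note that `sqlite3`, `asyncio`, and `configparser` are part of CPython's standard library: `pip install sqlite3` fails outright, and the PyPI `asyncio` shim can shadow the stdlib module on Python 3, so none of them belong in a pip list. A quick sanity check inside the image:

```shell
# Verify the standard-library modules import without any
# corresponding pip packages.
python3 -c 'import sqlite3, asyncio, configparser; print("stdlib ok")'   # → stdlib ok
```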