Deployment Guide

This guide covers deploying ChronDB in production environments, from single-node Docker setups to Kubernetes clusters.

Docker

Single Container

docker run -d \
  --name chrondb \
  -p 3000:3000 \
  -p 6379:6379 \
  -p 5432:5432 \
  -v chrondb-data:/app/data \
  ghcr.io/avelino/chrondb:latest

Docker Compose

A docker-compose.yml is provided at the repository root for local development and simple deployments:

docker compose up -d

This starts ChronDB with:

  • Persistent data volume (chrondb-data)

  • All three protocols exposed (REST 3000, Redis 6379, PostgreSQL 5432)

  • Health check on /healthz every 10 seconds

  • Automatic restart on failure

  • 1 GB memory limit
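
The properties above correspond roughly to a Compose file like the following sketch; the actual docker-compose.yml at the repository root is authoritative, and details such as the healthcheck command are assumptions here:

```yaml
# Illustrative sketch only -- see the repository's docker-compose.yml.
services:
  chrondb:
    image: ghcr.io/avelino/chrondb:latest
    ports:
      - "3000:3000"   # REST
      - "6379:6379"   # Redis protocol
      - "5432:5432"   # PostgreSQL protocol
    volumes:
      - chrondb-data:/app/data
    healthcheck:
      # assumes curl is available in the image
      test: ["CMD", "curl", "-f", "http://localhost:3000/healthz"]
      interval: 10s
    restart: on-failure
    mem_limit: 1g
volumes:
  chrondb-data:
```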

Verify the service is running:
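
A minimal check against the health endpoint exposed on the REST port (assuming the default port mapping above):

```shell
# Exits non-zero if the health check fails
curl -f http://localhost:3000/healthz
```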

Custom Configuration

Mount a custom config.edn to override defaults:
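
For example, with a bind mount (the in-container path /app/config.edn is an assumption based on the image's /app/data layout):

```shell
docker run -d \
  --name chrondb \
  -p 3000:3000 -p 6379:6379 -p 5432:5432 \
  -v chrondb-data:/app/data \
  -v "$(pwd)/config.edn:/app/config.edn:ro" \
  ghcr.io/avelino/chrondb:latest
```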

See Configuration for all available options.

Docker Image Details

  • Base: Debian 12 slim (runtime stage)

  • Build: GraalVM native-image (multi-stage)

  • User: Non-root chrondb user

  • Ports: 3000 (REST), 6379 (Redis), 5432 (PostgreSQL)

  • Entry point: /usr/local/bin/chrondb


Kubernetes

StatefulSet

ChronDB stores data on disk (Git repository + Lucene index), so it should be deployed as a StatefulSet with persistent storage.
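
A minimal StatefulSet sketch, with assumed resource sizes and mount paths (adjust storage and memory to your dataset using the guidance below):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: chrondb
spec:
  serviceName: chrondb
  replicas: 1
  selector:
    matchLabels:
      app: chrondb
  template:
    metadata:
      labels:
        app: chrondb
    spec:
      containers:
        - name: chrondb
          image: ghcr.io/avelino/chrondb:latest
          ports:
            - { containerPort: 3000, name: rest }
            - { containerPort: 6379, name: redis }
            - { containerPort: 5432, name: postgres }
          volumeMounts:
            - name: data
              mountPath: /app/data
          resources:
            requests: { memory: 1Gi }
            limits: { memory: 2Gi }
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```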

Service

Expose ChronDB to other pods in the cluster:
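
A headless Service sketch covering all three protocols (headless so the StatefulSet gets stable pod DNS; a regular ClusterIP Service also works for in-cluster access):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: chrondb
spec:
  clusterIP: None   # headless; pair with the StatefulSet's serviceName
  selector:
    app: chrondb
  ports:
    - { name: rest, port: 3000 }
    - { name: redis, port: 6379 }
    - { name: postgres, port: 5432 }
```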

For external access, create an additional LoadBalancer or Ingress resource targeting the service.

ConfigMap

Store your config.edn in a ConfigMap:
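
For example (the EDN content here is a placeholder; see the Configuration page for real keys):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: chrondb-config
data:
  config.edn: |
    ;; your ChronDB configuration here
```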

Mount it in the StatefulSet:
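
A fragment for the StatefulSet's pod template (spec.template.spec); the /app/config.edn mount path is an assumption matching the Docker image layout:

```yaml
# Merge into the StatefulSet's pod spec
containers:
  - name: chrondb
    # ...image and ports as in your StatefulSet...
    volumeMounts:
      - name: config
        mountPath: /app/config.edn
        subPath: config.edn
volumes:
  - name: config
    configMap:
      name: chrondb-config
```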

Prometheus ServiceMonitor

If you use Prometheus Operator, create a ServiceMonitor to scrape ChronDB metrics:
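
A sketch assuming metrics are served at /metrics on the REST port, and that the Service labels its REST port `rest`; verify both against your deployment:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: chrondb
spec:
  selector:
    matchLabels:
      app: chrondb
  endpoints:
    - port: rest        # named port on the Service
      path: /metrics    # assumed metrics path
      interval: 30s
```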


Resource Guidance

Memory

Documents              Recommended Memory
< 10,000               512 MB
10,000 - 100,000       1 - 2 GB
100,000 - 1,000,000    2 - 4 GB
> 1,000,000            4+ GB

Memory usage is driven by:

  • Lucene index segment buffers (in-memory during writes)

  • Git object cache (recently accessed documents)

  • JVM heap for query processing (sorts, aggregations, joins)

Disk

Disk usage grows with both document count and history depth. Every write creates a Git commit, so a document updated 100 times occupies roughly 100x the space of its current version. Run git gc periodically (via the compact command) to reclaim storage.

Rule of thumb: allocate 3-5x the expected current dataset size for history and Git overhead.

CPU

ChronDB is mostly I/O bound. A single core handles typical workloads. Allocate additional cores for:

  • Concurrent query processing

  • Lucene index merges (background)

  • Git operations (push/pull to remotes)


Production Checklist

Data Persistence

Monitoring

Security

Backups

Network
