Docker Compose Deployment
This guide walks through deploying Octokraft using Docker Compose. This is the recommended approach for small teams (under 50 developers) or evaluation environments.
Prerequisites
- Linux server (Ubuntu 22.04+ recommended)
- Docker 24.x+ with Docker Compose v2
- A registered GitHub App (see GitHub Integration)
- A Clerk account for authentication
- Access to an OpenAI-compatible AI model API
Quick Start
Get deployment files
Download the Octokraft deployment package, which includes docker-compose.yml, .env.example, and supporting configuration files.
Configure environment
Copy the example environment file and fill in your values:
Edit .env and set all required variables. See the environment variables section below for the full list.
Start services
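Putting the configure and start steps together, a sketch that assumes you run it from the unpacked package directory:

```shell
# Create your environment file from the packaged template
cp .env.example .env

# Generate a value for SECRET_KEY (32 random bytes, hex-encoded)
openssl rand -hex 32

# After editing .env, start the full stack in the background
docker compose up -d

# Every service should eventually report "running" or "healthy"
docker compose ps
```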
If you use an external PostgreSQL or Redis, remove those services from docker-compose.yml and update the connection strings in your .env file.
Environment Variables
These are the key variables you must configure. For the complete list, see the Configuration Reference.
Required
| Variable | Description |
|---|---|
| DATABASE_URL | PostgreSQL connection string (e.g., postgres://user:pass@host:5432/octokraft?sslmode=disable) |
| REDIS_URL | Redis connection string (e.g., redis://host:6379) |
| TEMPORAL_ADDRESS | Temporal server address (e.g., temporal:7233) |
| FALKORDB_HOST | FalkorDB host address (e.g., falkordb:6379) |
| SECRET_KEY | Encryption key for API tokens. Generate with openssl rand -hex 32. |
| CLERK_SECRET_KEY | Your Clerk API secret key |
| CLERK_JWT_ISSUER | Your Clerk JWT issuer URL |
| GITHUB_APP_ID | Your GitHub App ID |
| GITHUB_PRIVATE_KEY_PATH | Path to your GitHub App private key .pem file |
| GITHUB_WEBHOOK_SECRET | Secret used to verify GitHub webhook payloads |
| CORS_ORIGINS | Allowed CORS origins (e.g., https://octokraft.yourcompany.com) |
| FRONTEND_URL | Public URL of the frontend (e.g., https://octokraft.yourcompany.com) |
AI Model Configuration
At minimum, configure the OpenAI large model slot. See Configuration Reference for all 4 slots.
| Variable | Description |
|---|---|
| LLM_OPENAI_LARGE_PROVIDER | Provider name (e.g., openai, azure, openrouter) |
| LLM_OPENAI_LARGE_MODEL | Model identifier (e.g., gpt-4o) |
| LLM_OPENAI_LARGE_API_KEY | API key for the provider |
| LLM_OPENAI_LARGE_BASE_URL | API endpoint URL |
| LLM_OPENAI_SMALL_PROVIDER | Provider for the small model slot |
| LLM_OPENAI_SMALL_MODEL | Model identifier (e.g., gpt-4o-mini) |
| LLM_OPENAI_SMALL_API_KEY | API key |
| LLM_OPENAI_SMALL_BASE_URL | API endpoint URL |
Docker Compose File
Below is a reference docker-compose.yml with all services. Adjust as needed; if you use managed PostgreSQL or Redis, remove those services and update connection strings.
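The actual file ships in the deployment package; the sketch below only illustrates the overall shape. Image names, tags, ports, and the worker command are placeholders, not the real values:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: octokraft
      POSTGRES_USER: octokraft
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
  falkordb:
    image: falkordb/falkordb:latest
  temporal:
    image: temporalio/auto-setup:latest
  api:
    image: octokraft/api:latest      # placeholder image name
    env_file: .env
    ports:
      - "8000:8000"                  # assumed API port
    depends_on: [postgres, redis, falkordb, temporal]
  worker:
    image: octokraft/api:latest      # same image as api
    command: worker                  # placeholder entrypoint
    env_file: .env
    depends_on: [postgres, redis, falkordb, temporal]
  frontend:
    image: octokraft/frontend:latest # placeholder image name
    ports:
      - "3000:3000"                  # assumed frontend port
volumes:
  pgdata:
```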
The worker service uses the same image as the api service with a different entrypoint. Workers process background tasks including code analysis, health assessments, and PR analysis.
Scaling
Adding Workers
The analysis workers handle the compute-intensive tasks. To process more repositories or PRs concurrently, add more worker replicas with docker compose up -d --scale worker=N.
Resource Allocation
For a team of 20-30 developers with 10-20 repositories:
| Service | CPU | Memory |
|---|---|---|
| api | 2 cores | 2 GB |
| worker (each) | 2 cores | 2 GB |
| frontend | 0.5 cores | 256 MB |
| postgres | 2 cores | 2 GB |
| redis | 0.5 cores | 512 MB |
| falkordb | 1 core | 1 GB |
| temporal | 1 core | 1 GB |
TLS Configuration
For production deployments, terminate TLS in front of Octokraft using a reverse proxy such as Nginx, Caddy, or Traefik. Whichever proxy you use, set FRONTEND_URL and BACKEND_URL to the public HTTPS URLs, and update CORS_ORIGINS accordingly.
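As an illustration, a minimal Caddyfile; the hostname and the internal service ports are assumptions, not Octokraft's documented values:

```
octokraft.yourcompany.com {
    # Caddy obtains and renews the TLS certificate automatically
    reverse_proxy /api/* api:8000
    reverse_proxy frontend:3000
}
```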
Operations
Health Checks
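A quick way to verify the stack is Docker Compose's own status view plus an HTTP probe; the /health path and port here are assumptions:

```shell
# Container-level status for every service
docker compose ps

# Probe the API directly (path and port are assumptions)
curl -fsS http://localhost:8000/health
```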
Logs
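Logs for any service are available through Docker Compose; for example:

```shell
# Follow the API and worker logs together
docker compose logs -f api worker

# Only the last ten minutes, with timestamps
docker compose logs --since 10m -t api
```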
Upgrades
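A typical upgrade pulls newer images and recreates only the containers whose image changed; exact behavior depends on how your compose file pins image tags:

```shell
# Fetch newer images for all services
docker compose pull

# Recreate containers running outdated images; others are left untouched
docker compose up -d
```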
Backups
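A pg_dump through the bundled container is one straightforward approach; the service, user, and database names here are assumptions:

```shell
# Write a dated SQL dump of the primary database
# (-T disables the TTY so the redirect stays clean)
docker compose exec -T postgres pg_dump -U octokraft octokraft > octokraft-$(date +%F).sql
```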
Back up the PostgreSQL database regularly. The other services (Redis, FalkorDB) contain derived data that can be rebuilt from a fresh analysis.
Troubleshooting
API server fails to start
Check the logs for missing environment variables. The API server will not start if any required variable is missing; the error message indicates which variable is unset.
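For example:

```shell
# Startup output from the API container, including any fatal config errors
docker compose logs api

# Rendering the effective configuration also surfaces unset variables
docker compose config
```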
Cannot connect to database
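A first check is whether the database answers at all; a sketch assuming the bundled postgres service and default user:

```shell
# pg_isready exits 0 when the server is accepting connections
docker compose exec postgres pg_isready -U octokraft
```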
Verify that PostgreSQL is running and the connection string is correct. If using an external database, confirm that the DATABASE_URL host is reachable from within the Docker network.
Workers not processing tasks
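A quick status check, assuming the compose service is named worker:

```shell
docker compose ps worker
docker compose logs --since 10m worker
```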
Verify the worker is running and connected to Temporal. Check the Temporal UI at http://localhost:8233 to see whether workflows are queued, running, or failing.
GitHub webhooks not arriving
Verify that your server is reachable from the internet on port 443 (or whichever port you expose); GitHub must be able to reach your webhook endpoint. Check webhook delivery status in your GitHub App settings under Advanced > Recent Deliveries.
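A reachability check from a machine outside your network; the hostname is a placeholder:

```shell
# Any 2xx/3xx status code means GitHub can likely reach you too
curl -s -o /dev/null -w '%{http_code}\n' https://octokraft.yourcompany.com
```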
Analysis running slowly
Scale the worker service to add more processing capacity. Also confirm that the AI model API is responsive; slow model responses are the most common cause of slow analysis.
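For example, to run four worker replicas (this requires that the worker service does not set a fixed container_name):

```shell
docker compose up -d --scale worker=4
```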