# Configuration Reference

All Octokraft configuration is done through environment variables; there are no configuration files. Required variables have no defaults — the application will not start if any are missing.
## Core

| Variable | Description | Required | Default |
|---|---|---|---|
| PORT | API server listen port | No | 8080 |
| APP_ENV | Environment mode (development or production) | No | development |
| LOG_LEVEL | Logging level (debug, info, warn, error) | No | info |
| SECRET_KEY | Encryption key for API tokens. Generate with openssl rand -hex 32. | Yes | — |
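SECRET_KEY should be unique per deployment. One way to generate and export a suitable key from a shell, assuming openssl is installed:

```shell
# Generate a 32-byte (64 hex character) key and export it for the current session
export SECRET_KEY=$(openssl rand -hex 32)
```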
## Database

| Variable | Description | Required | Default |
|---|---|---|---|
| DATABASE_URL | PostgreSQL connection string (e.g., postgres://user:pass@host:5432/octokraft?sslmode=require) | Yes | — |
## Redis

| Variable | Description | Required | Default |
|---|---|---|---|
| REDIS_URL | Redis connection string (e.g., redis://host:6379) | Yes | — |
## Temporal

| Variable | Description | Required | Default |
|---|---|---|---|
| TEMPORAL_ADDRESS | Temporal server address in host:port format (e.g., temporal:7233) | Yes | — |
| TEMPORAL_NAMESPACE | Temporal namespace | No | default |
## FalkorDB

| Variable | Description | Required | Default |
|---|---|---|---|
| FALKORDB_HOST | FalkorDB host address (e.g., falkordb:6379) | Yes | — |
| FALKORDB_PASSWORD | FalkorDB password, if authentication is enabled | No | — |
## Authentication (Clerk)

| Variable | Description | Required | Default |
|---|---|---|---|
| CLERK_SECRET_KEY | Clerk API secret key | Yes | — |
| CLERK_JWT_ISSUER | Clerk JWT issuer URL (e.g., https://your-instance.clerk.accounts.dev) | Yes | — |
## GitHub

| Variable | Description | Required | Default |
|---|---|---|---|
| GITHUB_APP_ID | GitHub App ID | Yes | — |
| GITHUB_PRIVATE_KEY_PATH | Filesystem path to the GitHub App private key .pem file | Yes | — |
| GITHUB_WEBHOOK_SECRET | Secret used to verify incoming webhook payloads | Yes | — |
| GITHUB_CLIENT_ID | GitHub App OAuth client ID | No | — |
| GITHUB_CLIENT_SECRET | GitHub App OAuth client secret | No | — |
## Frontend and CORS

| Variable | Description | Required | Default |
|---|---|---|---|
| FRONTEND_URL | Public URL of the Octokraft frontend | No | http://localhost:5173 |
| BACKEND_URL | Public URL of the Octokraft API | No | http://localhost:8080 |
| CORS_ORIGINS | Comma-separated list of allowed CORS origins | No | http://localhost:3000,http://localhost:5173 |
For production deployments, set CORS_ORIGINS to your actual frontend domain (e.g., https://octokraft.yourcompany.com). The defaults are intended for local development only.
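A production setup might look like the following (the domain names are placeholders — substitute your own):

```shell
FRONTEND_URL=https://octokraft.yourcompany.com
BACKEND_URL=https://api.octokraft.yourcompany.com
CORS_ORIGINS=https://octokraft.yourcompany.com
```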
## AI Models

Octokraft uses four model slots for different analysis tasks. Each slot can be configured independently, allowing you to use different providers or models for different workloads.
### Slot Overview

| Slot | Typical Use |
|---|---|
| OpenAI Large | Code review, architecture review, health assessments |
| OpenAI Small | Convention analysis, drift detection, summarization |
| Anthropic Large | Optional override for tasks that benefit from a different model |
| Anthropic Small | Optional override for lighter tasks |
At minimum, configure the OpenAI Large and OpenAI Small slots. The Anthropic slots are optional overrides.
### Environment Variables per Slot

Each slot uses the same set of variables with a different prefix:

| Prefix | Slot |
|---|---|
| LLM_OPENAI_LARGE_ | OpenAI Large |
| LLM_OPENAI_SMALL_ | OpenAI Small |
| LLM_ANTHROPIC_LARGE_ | Anthropic Large (optional) |
| LLM_ANTHROPIC_SMALL_ | Anthropic Small (optional) |
For each prefix, the following variables are available:
| Suffix | Description |
|---|---|
| PROVIDER | Provider name (e.g., openai, azure, openrouter, ollama) |
| MODEL | Model identifier (e.g., gpt-4o, gpt-4o-mini, claude-sonnet-4-20250514) |
| API_KEY | API key for the provider |
| BASE_URL | API endpoint URL (e.g., https://api.openai.com/v1) |
| PROXY_MODE | Proxy routing mode: empty for direct, openrouter, or ollama |
### Example Configuration

```shell
# Primary models (required)
LLM_OPENAI_LARGE_PROVIDER=openai
LLM_OPENAI_LARGE_MODEL=gpt-4o
LLM_OPENAI_LARGE_API_KEY=sk-...
LLM_OPENAI_LARGE_BASE_URL=https://api.openai.com/v1

LLM_OPENAI_SMALL_PROVIDER=openai
LLM_OPENAI_SMALL_MODEL=gpt-4o-mini
LLM_OPENAI_SMALL_API_KEY=sk-...
LLM_OPENAI_SMALL_BASE_URL=https://api.openai.com/v1

# Optional: use Anthropic for architecture reviews
LLM_ANTHROPIC_LARGE_PROVIDER=anthropic
LLM_ANTHROPIC_LARGE_MODEL=claude-sonnet-4-20250514
LLM_ANTHROPIC_LARGE_API_KEY=sk-ant-...
LLM_ANTHROPIC_LARGE_BASE_URL=https://api.anthropic.com
```
### Task-to-Model Routing

By default, tasks are routed to model slots automatically. You can override this with the TASK_MODEL_MAPPING variable:

| Variable | Description | Default |
|---|---|---|
| TASK_MODEL_MAPPING | Custom task-to-slot mapping in task:slot format, comma-separated | See below |
Default routing:
| Task | Default Slot |
|---|---|
| code_review | openai_large |
| architecture_review | openai_large |
| code_health | openai_large |
| convention_analysis | openai_small |
| drift_detection | openai_small |
| summarization | openai_small |
Override example:

```shell
TASK_MODEL_MAPPING=code_review:anthropic_large,architecture_review:anthropic_large
```
## Analyzers

| Variable | Description | Default |
|---|---|---|
| ANALYZER_IMAGE_PREFIX | Container image prefix for static analyzer images (e.g., ghcr.io/octokraft/) | Empty (local images) |
| AGENT_EXECUTION_MODE | Agent execution mode: local (CLI subprocess) or k8s (Kubernetes Job) | Auto-detected |
## Kubernetes Agent (k8s mode only)

These variables are only used when AGENT_EXECUTION_MODE is set to k8s:

| Variable | Description | Default |
|---|---|---|
| K8S_AGENT_IMAGE | Container image for the analysis agent | ghcr.io/ciprian-cgr/octokraft-agent:latest |
| K8S_AGENT_NAMESPACE | Kubernetes namespace for agent jobs | octokraft |
| K8S_WORKSPACE_PVC | Persistent volume claim for analysis workspace | analysis-workspace-pvc |
| K8S_IMAGE_PULL_SECRET | Image pull secret name | ghcr-secret |
| K8S_WORKSPACE_ROOT | Workspace mount path inside agent containers | /mnt/analysis-workspace |
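A typical k8s-mode configuration might look like this (the values shown are the defaults from the table above; only AGENT_EXECUTION_MODE must be set explicitly if auto-detection does not apply):

```shell
AGENT_EXECUTION_MODE=k8s
K8S_AGENT_IMAGE=ghcr.io/ciprian-cgr/octokraft-agent:latest
K8S_AGENT_NAMESPACE=octokraft
K8S_WORKSPACE_PVC=analysis-workspace-pvc
K8S_IMAGE_PULL_SECRET=ghcr-secret
```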
## Worker Concurrency

| Variable | Description | Default |
|---|---|---|
| WORKER_MAX_CONCURRENT_ACTIVITIES | Maximum concurrent Temporal activities per worker | 8 |
| WORKER_MAX_CONCURRENT_WORKFLOWS | Maximum concurrent Temporal workflows per worker | 5 |
Increase these values on machines with more CPU and memory to improve throughput. Reduce them if workers are running out of memory.
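For example, on a worker node with more headroom you might double the defaults (the numbers here are illustrative, not tuned recommendations):

```shell
WORKER_MAX_CONCURRENT_ACTIVITIES=16
WORKER_MAX_CONCURRENT_WORKFLOWS=10
```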
## OpenTelemetry

| Variable | Description | Default |
|---|---|---|
| OTEL_ENABLED | Enable OpenTelemetry tracing | false |
| OTEL_ENDPOINT | OTLP collector endpoint (e.g., otel-collector:4317) | — |
| OTEL_INSECURE | Use insecure (non-TLS) connection to collector | true |
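To enable tracing against a collector reachable inside your cluster (the hostname below is a placeholder for your own collector service):

```shell
OTEL_ENABLED=true
OTEL_ENDPOINT=otel-collector:4317
OTEL_INSECURE=true
```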
## Health Endpoints

The API server exposes the following operational endpoints:

| Endpoint | Description |
|---|---|
| /healthz | Simple health check. Returns 200 OK if the server is running. Suitable for liveness probes. |
| /health/detailed | Detailed health check. Returns the status of each infrastructure dependency (PostgreSQL, Redis, FalkorDB, Temporal) including circuit breaker state. Suitable for readiness probes. |
| /metrics | Prometheus-format metrics. Includes HTTP request counts, latencies, active workflow counts, and system resource usage. |
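As a quick smoke test from a shell, the endpoints can be probed with curl; this sketch assumes the API is already running locally on the default port 8080:

```shell
curl -fsS http://localhost:8080/healthz            # liveness: 200 OK when the server is up
curl -fsS http://localhost:8080/health/detailed    # readiness: per-dependency status
curl -fsS http://localhost:8080/metrics | head     # first lines of Prometheus metrics
```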
## Monitoring

### Prometheus
Scrape the /metrics endpoint on port 8080. If deploying on Kubernetes with the Helm chart, enable the ServiceMonitor to configure automatic discovery.
### OpenTelemetry
Set the OTEL_* variables above to send distributed traces to your OpenTelemetry collector. Traces cover HTTP requests, Temporal workflow execution, database queries, and AI model calls.
### Structured Logging

All components emit structured JSON logs to stdout. Key fields:

| Field | Description |
|---|---|
| level | Log level (debug, info, warn, error) |
| msg | Log message |
| time | Timestamp |
| error | Error details (when applicable) |
| request_id | HTTP request correlation ID |
| project_id | Project context (when applicable) |
Set LOG_LEVEL to debug for verbose output during troubleshooting. Use info or warn in production.
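Because each log line is a standalone JSON object, logs can be filtered with standard tools. This sketch assumes jq is installed and uses a sample log line (the field values are illustrative):

```shell
# A sample log line in the structured format described above
LINE='{"level":"error","msg":"db timeout","time":"2024-01-01T00:00:00Z","request_id":"abc123"}'

# Keep only error-level entries and print the correlation ID and message
echo "$LINE" | jq -r 'select(.level == "error") | "\(.request_id) \(.msg)"'
```

The same filter works across a whole log stream, e.g. `kubectl logs <pod> | jq -r 'select(.level == "error")'`.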