
# Configuration Reference

All Octokraft configuration is done through environment variables. There are no configuration files. Required variables have no defaults — the application will not start if they are missing.
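As a quick orientation, a minimal configuration covering only the required variables might look like the sketch below. Every value is a placeholder; see the sections that follow for what each variable means.

```shell
# Minimal set of required variables (all values below are placeholders)
SECRET_KEY=<output of: openssl rand -hex 32>
DATABASE_URL=postgres://user:pass@host:5432/octokraft?sslmode=require
REDIS_URL=redis://host:6379
TEMPORAL_ADDRESS=temporal:7233
FALKORDB_HOST=falkordb:6379
CLERK_SECRET_KEY=<clerk secret key>
CLERK_JWT_ISSUER=https://your-instance.clerk.accounts.dev
GITHUB_APP_ID=<github app id>
GITHUB_PRIVATE_KEY_PATH=/secrets/github-app.pem
GITHUB_WEBHOOK_SECRET=<webhook secret>
# Plus the two required AI model slots (see "AI Models" below)
```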

## Core

| Variable | Description | Required | Default |
|---|---|---|---|
| `PORT` | API server listen port | No | `8080` |
| `APP_ENV` | Environment mode (`development` or `production`) | No | `development` |
| `LOG_LEVEL` | Logging level (`debug`, `info`, `warn`, `error`) | No | `info` |
| `SECRET_KEY` | Encryption key for API tokens. Generate with `openssl rand -hex 32`. | Yes | (none) |
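One way to generate a key and sanity-check it, using the `openssl` invocation the table references:

```shell
# Generate 32 random bytes, hex-encoded, suitable for SECRET_KEY
SECRET_KEY=$(openssl rand -hex 32)
# Hex encoding doubles the byte count, so the key is 64 characters long
echo "${#SECRET_KEY}"
```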

## Database

| Variable | Description | Required | Default |
|---|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string (e.g., `postgres://user:pass@host:5432/octokraft?sslmode=require`) | Yes | (none) |

## Redis

| Variable | Description | Required | Default |
|---|---|---|---|
| `REDIS_URL` | Redis connection string (e.g., `redis://host:6379`) | Yes | (none) |

## Temporal

| Variable | Description | Required | Default |
|---|---|---|---|
| `TEMPORAL_ADDRESS` | Temporal server address in `host:port` format (e.g., `temporal:7233`) | Yes | (none) |
| `TEMPORAL_NAMESPACE` | Temporal namespace | No | `default` |

## FalkorDB

| Variable | Description | Required | Default |
|---|---|---|---|
| `FALKORDB_HOST` | FalkorDB host address (e.g., `falkordb:6379`) | Yes | (none) |
| `FALKORDB_PASSWORD` | FalkorDB password, if authentication is enabled | No | (none) |

## Authentication (Clerk)

| Variable | Description | Required | Default |
|---|---|---|---|
| `CLERK_SECRET_KEY` | Clerk API secret key | Yes | (none) |
| `CLERK_JWT_ISSUER` | Clerk JWT issuer URL (e.g., `https://your-instance.clerk.accounts.dev`) | Yes | (none) |

## GitHub

| Variable | Description | Required | Default |
|---|---|---|---|
| `GITHUB_APP_ID` | GitHub App ID | Yes | (none) |
| `GITHUB_PRIVATE_KEY_PATH` | Filesystem path to the GitHub App private key `.pem` file | Yes | (none) |
| `GITHUB_WEBHOOK_SECRET` | Secret used to verify incoming webhook payloads | Yes | (none) |
| `GITHUB_CLIENT_ID` | GitHub App OAuth client ID | No | (none) |
| `GITHUB_CLIENT_SECRET` | GitHub App OAuth client secret | No | (none) |

## Frontend and CORS

| Variable | Description | Required | Default |
|---|---|---|---|
| `FRONTEND_URL` | Public URL of the Octokraft frontend | No | `http://localhost:5173` |
| `BACKEND_URL` | Public URL of the Octokraft API | No | `http://localhost:8080` |
| `CORS_ORIGINS` | Comma-separated list of allowed CORS origins | No | `http://localhost:3000,http://localhost:5173` |

For production deployments, set `CORS_ORIGINS` to your actual frontend domain (e.g., `https://octokraft.yourcompany.com`). The defaults are intended for local development only.
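For example, a production setup serving the frontend and the API on separate hostnames might look like this (the domains are placeholders, not real endpoints):

```shell
# Production example (domains are placeholders)
FRONTEND_URL=https://octokraft.yourcompany.com
BACKEND_URL=https://api.octokraft.yourcompany.com
CORS_ORIGINS=https://octokraft.yourcompany.com
```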

## AI Models

Octokraft uses 4 model slots for different analysis tasks. Each slot can be configured independently, allowing you to use different providers or models for different workloads.

### Slot Overview

| Slot | Typical Use |
|---|---|
| OpenAI Large | Code review, architecture review, health assessments |
| OpenAI Small | Convention analysis, drift detection, summarization |
| Anthropic Large | Optional override for tasks that benefit from a different model |
| Anthropic Small | Optional override for lighter tasks |

At minimum, configure the OpenAI Large and OpenAI Small slots. The Anthropic slots are optional overrides.

### Environment Variables per Slot

Each slot uses the same set of variables with a different prefix:

| Prefix | Slot |
|---|---|
| `LLM_OPENAI_LARGE_` | OpenAI Large |
| `LLM_OPENAI_SMALL_` | OpenAI Small |
| `LLM_ANTHROPIC_LARGE_` | Anthropic Large (optional) |
| `LLM_ANTHROPIC_SMALL_` | Anthropic Small (optional) |

For each prefix, the following variables are available:

| Suffix | Description |
|---|---|
| `PROVIDER` | Provider name (e.g., `openai`, `azure`, `openrouter`, `ollama`) |
| `MODEL` | Model identifier (e.g., `gpt-4o`, `gpt-4o-mini`, `claude-sonnet-4-20250514`) |
| `API_KEY` | API key for the provider |
| `BASE_URL` | API endpoint URL (e.g., `https://api.openai.com/v1`) |
| `PROXY_MODE` | Proxy routing mode: empty for direct, `openrouter`, or `ollama` |

### Example Configuration

```shell
# Primary models (required)
LLM_OPENAI_LARGE_PROVIDER=openai
LLM_OPENAI_LARGE_MODEL=gpt-4o
LLM_OPENAI_LARGE_API_KEY=sk-...
LLM_OPENAI_LARGE_BASE_URL=https://api.openai.com/v1

LLM_OPENAI_SMALL_PROVIDER=openai
LLM_OPENAI_SMALL_MODEL=gpt-4o-mini
LLM_OPENAI_SMALL_API_KEY=sk-...
LLM_OPENAI_SMALL_BASE_URL=https://api.openai.com/v1

# Optional: use Anthropic for architecture reviews
LLM_ANTHROPIC_LARGE_PROVIDER=anthropic
LLM_ANTHROPIC_LARGE_MODEL=claude-sonnet-4-20250514
LLM_ANTHROPIC_LARGE_API_KEY=sk-ant-...
LLM_ANTHROPIC_LARGE_BASE_URL=https://api.anthropic.com
```

### Task-to-Model Routing

By default, tasks are routed to model slots automatically. You can override this with the `TASK_MODEL_MAPPING` variable:

| Variable | Description | Default |
|---|---|---|
| `TASK_MODEL_MAPPING` | Custom task-to-slot mapping in `task:slot` format, comma-separated | See below |

Default routing:

| Task | Default Slot |
|---|---|
| `code_review` | `openai_large` |
| `architecture_review` | `openai_large` |
| `code_health` | `openai_large` |
| `convention_analysis` | `openai_small` |
| `drift_detection` | `openai_small` |
| `summarization` | `openai_small` |

Override example:

```shell
TASK_MODEL_MAPPING=code_review:anthropic_large,architecture_review:anthropic_large
```

## Analyzers

| Variable | Description | Default |
|---|---|---|
| `ANALYZER_IMAGE_PREFIX` | Container image prefix for static analyzer images (e.g., `ghcr.io/octokraft/`) | Empty (local images) |
| `AGENT_EXECUTION_MODE` | Agent execution mode: `local` (CLI subprocess) or `k8s` (Kubernetes Job) | Auto-detected |

## Kubernetes Agent (k8s mode only)

These variables are only used when `AGENT_EXECUTION_MODE` is set to `k8s`:

| Variable | Description | Default |
|---|---|---|
| `K8S_AGENT_IMAGE` | Container image for the analysis agent | `ghcr.io/ciprian-cgr/octokraft-agent:latest` |
| `K8S_AGENT_NAMESPACE` | Kubernetes namespace for agent jobs | `octokraft` |
| `K8S_WORKSPACE_PVC` | Persistent volume claim for analysis workspace | `analysis-workspace-pvc` |
| `K8S_IMAGE_PULL_SECRET` | Image pull secret name | `ghcr-secret` |
| `K8S_WORKSPACE_ROOT` | Workspace mount path inside agent containers | `/mnt/analysis-workspace` |

## Worker Concurrency

| Variable | Description | Default |
|---|---|---|
| `WORKER_MAX_CONCURRENT_ACTIVITIES` | Maximum concurrent Temporal activities per worker | `8` |
| `WORKER_MAX_CONCURRENT_WORKFLOWS` | Maximum concurrent Temporal workflows per worker | `5` |

Increase these values on machines with more CPU and memory to improve throughput. Reduce them if workers are running out of memory.
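For example, a larger worker host might raise both limits; the numbers below are illustrative starting points, not tuned recommendations:

```shell
# Roughly double the defaults for a beefier worker machine (illustrative values)
WORKER_MAX_CONCURRENT_ACTIVITIES=16
WORKER_MAX_CONCURRENT_WORKFLOWS=10
```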

## OpenTelemetry

| Variable | Description | Default |
|---|---|---|
| `OTEL_ENABLED` | Enable OpenTelemetry tracing | `false` |
| `OTEL_ENDPOINT` | OTLP collector endpoint (e.g., `otel-collector:4317`) | (none) |
| `OTEL_INSECURE` | Use an insecure (non-TLS) connection to the collector | `true` |

## Health Endpoints

The API server exposes the following operational endpoints:
| Endpoint | Description |
|---|---|
| `/healthz` | Simple health check. Returns `200 OK` if the server is running. Suitable for liveness probes. |
| `/health/detailed` | Detailed health check. Returns the status of each infrastructure dependency (PostgreSQL, Redis, FalkorDB, Temporal) including circuit breaker state. Suitable for readiness probes. |
| `/metrics` | Prometheus-format metrics. Includes HTTP request counts, latencies, active workflow counts, and system resource usage. |
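Since `/healthz` and `/health/detailed` are described as suitable for liveness and readiness probes, a Kubernetes deployment might wire them up as follows. This is a sketch: the port matches the default `PORT`, and the timing values are assumptions to adjust for your deployment.

```yaml
# Hypothetical probe configuration for the API container
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/detailed
    port: 8080
  periodSeconds: 10
```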

## Monitoring

### Prometheus

Scrape the `/metrics` endpoint on port 8080. If deploying on Kubernetes with the Helm chart, enable the `ServiceMonitor` to configure automatic discovery.

### OpenTelemetry

Set the `OTEL_*` variables above to send distributed traces to your OpenTelemetry collector. Traces cover HTTP requests, Temporal workflow execution, database queries, and AI model calls.

### Structured Logging

All components emit structured JSON logs to stdout. Key fields:
| Field | Description |
|---|---|
| `level` | Log level (`debug`, `info`, `warn`, `error`) |
| `msg` | Log message |
| `time` | Timestamp |
| `error` | Error details (when applicable) |
| `request_id` | HTTP request correlation ID |
| `project_id` | Project context (when applicable) |

Set `LOG_LEVEL` to `debug` for verbose output during troubleshooting. Use `info` or `warn` in production.
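Because the logs are line-delimited JSON on stdout, they can be filtered with standard shell tools. A small sketch; the sample log lines below are fabricated for illustration, not real Octokraft output:

```shell
# Keep only error-level entries from a JSON log stream
# (the two sample lines stand in for actual service output)
printf '%s\n' \
  '{"level":"info","msg":"server started","time":"2024-01-01T00:00:00Z"}' \
  '{"level":"error","msg":"db timeout","time":"2024-01-01T00:00:05Z","request_id":"req-1"}' \
  | grep '"level":"error"'
```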