
config.toml Reference

The project-level config.toml lives at the root of your Action Llama project. All sections and fields are optional — sensible defaults are used for anything you omit. If the file doesn’t exist at all, an empty config is assumed.

Full Annotated Example

```toml
# Scheduler settings (top-level keys must appear before the first [section],
# or TOML will attach them to whatever table precedes them)
maxReruns = 10              # Max consecutive reruns for successful agent runs (default: 10)
maxCallDepth = 3            # Max depth for agent-to-agent call chains (default: 3)
workQueueSize = 100         # Max queued work items (webhooks + calls) per agent (default: 100)

# Default model for all agents (agents can override in their own agent-config.toml)
[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
thinkingLevel = "medium"
authType = "api_key"

# Local Docker container settings
[local]
image = "al-agent:latest"   # Base image name (default: "al-agent:latest")
memory = "4g"               # Memory limit per container (default: "4g")
cpus = 2                    # CPU limit per container (default: 2)
timeout = 900               # Default max container runtime in seconds (default: 900, overridable per-agent)

# Cloud provider config (optional — only needed for `al start -c`)
[cloud]
provider = "cloud-run"      # "cloud-run", "ecs", or "vps"
# ... provider-specific fields (see below)

# Gateway HTTP server settings
[gateway]
port = 8080                 # Gateway port (default: 8080)
lockTimeout = 1800          # Lock TTL in seconds (default: 1800 / 30 minutes)

# Webhook sources — named webhook endpoints with provider type and credential
[webhooks.my-github]
type = "github"
credential = "MyOrg"        # credential instance for HMAC validation
```

Field Reference

Top-level fields

| Field | Type | Default | Description |
|---|---|---|---|
| `maxReruns` | number | `10` | Maximum consecutive reruns when an agent requests a rerun via `al-rerun` before stopping |
| `maxCallDepth` | number | `3` | Maximum depth for agent-to-agent call chains (A calls B calls C = depth 2) |
| `workQueueSize` | number | `100` | Maximum queued work items (webhook events + agent calls) per agent when all runners are busy |
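The depth counting for `maxCallDepth` can be pictured with a small sketch (illustrative only, not the scheduler's actual code): the depth of a new call equals the number of agents already in the chain, and a call is rejected once that would exceed the limit.

```python
MAX_CALL_DEPTH = 3  # default from the table above

def can_call(call_chain: list[str]) -> bool:
    """call_chain lists the agents already in the chain, e.g. ["A", "B"]
    when B is about to call a third agent. The new call's depth equals
    len(call_chain); it is allowed while that stays within the limit."""
    return len(call_chain) <= MAX_CALL_DEPTH

assert can_call(["A"])                 # A -> B is depth 1
assert can_call(["A", "B"])            # A -> B -> C is depth 2
assert can_call(["A", "B", "C"])       # depth 3: at the default limit
assert not can_call(["A", "B", "C", "D"])  # depth 4: rejected
```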

[model] — Default LLM

Default model configuration inherited by all agents that don’t define their own [model] section in agent-config.toml.
| Field | Type | Required | Description |
|---|---|---|---|
| `provider` | string | Yes | LLM provider: `"anthropic"`, `"openai"`, `"groq"`, `"google"`, `"xai"`, `"mistral"`, `"openrouter"`, or `"custom"` |
| `model` | string | Yes | Model ID (e.g. `"claude-sonnet-4-20250514"`, `"gpt-4o"`, `"gemini-2.0-flash-exp"`) |
| `authType` | string | Yes | Auth method: `"api_key"`, `"oauth_token"`, or `"pi_auth"` |
| `thinkingLevel` | string | No | Thinking budget: `"off"`, `"minimal"`, `"low"`, `"medium"`, `"high"`, `"xhigh"`. Only applies to Anthropic models with reasoning support; ignored for other providers. |
See Models for all supported providers, model IDs, auth types, and thinking levels.

[local] — Docker Container Settings

Controls local Docker container isolation. These settings also apply as resource limits for Cloud Run jobs and ECS Fargate tasks.
| Field | Type | Default | Description |
|---|---|---|---|
| `image` | string | `"al-agent:latest"` | Base Docker image name |
| `memory` | string | `"4g"` | Memory limit per container (e.g. `"4g"`, `"8g"`, `"4096"` for ECS in MiB) |
| `cpus` | number | `2` | CPU limit per container |
| `timeout` | number | `900` | Default max container runtime in seconds. Individual agents can override this with `timeout` in their agent-config.toml. See agent timeout docs. |
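For local runs, `image`, `memory`, and `cpus` correspond roughly to the standard `docker run` resource flags. The mapping can be sketched as follows (an illustration of the correspondence, not Action Llama's actual launcher; note that `timeout` has no direct `docker run` flag and would be enforced by the runner itself):

```python
def docker_run_args(local: dict) -> list[str]:
    """Translate a [local] table into docker run flags (illustrative sketch).

    timeout is intentionally absent: max runtime is enforced by the process
    that launches the container, not by a docker flag.
    """
    return [
        "docker", "run", "--rm",
        "--memory", local.get("memory", "4g"),
        "--cpus", str(local.get("cpus", 2)),
        local.get("image", "al-agent:latest"),
    ]

# With an empty [local] section, everything falls back to the defaults:
print(" ".join(docker_run_args({})))
```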

[cloud] — Cloud Provider

Only needed when running agents in the cloud (`al start -c`). Configure manually or use deployment tools.

| Field | Type | Required | Description |
|---|---|---|---|
| `provider` | string | Yes | `"cloud-run"`, `"ecs"`, or `"vps"` (SSH + Docker) |

Provider-specific fields for `"cloud-run"` and `"ecs"` are shown in the Minimal Examples below; VPS fields are documented next.

VPS fields (provider = "vps")

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `host` | string | Yes | | Server IP address or hostname |
| `sshUser` | string | No | `"root"` | SSH username |
| `sshPort` | number | No | `22` | SSH port |
| `sshKeyPath` | string | No | `"~/.ssh/id_rsa"` | Path to SSH private key |
| `vultrInstanceId` | string | No | | Vultr instance ID (set automatically if provisioned) |
| `vultrRegion` | string | No | | Vultr region (set automatically if provisioned) |
See VPS docs for full setup.

[gateway] — HTTP Server

The gateway starts automatically when Docker mode or webhooks are enabled. It handles health checks, webhook reception, credential serving (local Docker only), resource locking, and the shutdown kill switch.
| Field | Type | Default | Description |
|---|---|---|---|
| `port` | number | `8080` | Port for the gateway HTTP server |
| `lockTimeout` | number | `1800` | Default lock TTL in seconds. Locks expire automatically after this duration unless refreshed via heartbeat. |
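The lock lifecycle implied by `lockTimeout` can be sketched like this (an illustrative model, not the gateway's implementation): a lock records its expiry time, and each heartbeat pushes the expiry forward by a full TTL from the moment of the refresh.

```python
import time

class Lock:
    """Minimal TTL lock (illustrative sketch, not the gateway's code)."""
    def __init__(self, ttl: float, now=time.monotonic):
        self.ttl = ttl
        self.now = now                    # injectable clock for testing
        self.expires_at = now() + ttl

    def heartbeat(self) -> None:
        """Refresh the lock, extending expiry by a full TTL from now."""
        self.expires_at = self.now() + self.ttl

    def expired(self) -> bool:
        return self.now() >= self.expires_at
```

A holder that heartbeats more often than every `lockTimeout` seconds keeps the lock indefinitely; one that crashes silently loses it after at most one TTL.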

[webhooks.*] — Webhook Sources

Named webhook sources that agents can reference in their [[webhooks]] triggers. Each source defines a provider type and an optional credential for signature validation.
| Field | Type | Required | Description |
|---|---|---|---|
| `type` | string | Yes | Provider type: `"github"` or `"sentry"` |
| `credential` | string | No | Credential instance name for HMAC signature validation (e.g. `"MyOrg"` maps to `github_webhook_secret:MyOrg`). Omit for unsigned webhooks. |

```toml
[webhooks.my-github]
type = "github"
credential = "MyOrg"              # uses github_webhook_secret:MyOrg for HMAC validation

[webhooks.my-sentry]
type = "sentry"
credential = "SentryProd"         # uses sentry_client_secret:SentryProd

[webhooks.unsigned-github]
type = "github"                   # no credential — accepts unsigned webhooks
```

Agents reference these sources by name in their agent-config.toml:

```toml
[[webhooks]]
source = "my-github"
events = ["issues"]
```
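The HMAC validation that `credential` enables follows GitHub's standard webhook signing scheme: the sender computes an HMAC-SHA-256 of the raw request body with the shared secret and sends it in the `X-Hub-Signature-256` header as `sha256=<hex>`. A self-contained sketch of the receiving side (illustrative; not the gateway's actual code):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, header: str) -> bool:
    """Check an X-Hub-Signature-256 header ("sha256=<hex>") against the body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest does a constant-time comparison to avoid timing attacks
    return hmac.compare_digest(expected, header)
```

The crucial detail is hashing the raw bytes of the body before any JSON parsing; re-serializing the payload would change whitespace and break the signature.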

Minimal Examples

Anthropic with Docker (typical dev setup)

```toml
[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
thinkingLevel = "medium"
authType = "api_key"
```
Everything else uses defaults: Docker enabled, 4 GB memory, 2 CPUs, 15-minute (900 s) timeout, gateway on port 8080.

Cloud Run production

```toml
[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
thinkingLevel = "medium"
authType = "api_key"

[local]
memory = "8g"
cpus = 4
timeout = 7200

[cloud]
provider = "cloud-run"
gcpProject = "my-gcp-project"
region = "us-central1"
artifactRegistry = "us-central1-docker.pkg.dev/my-gcp-project/al-images"

[gateway]
port = 3000
```

ECS Fargate production

```toml
# Top-level keys must come before the first [section]
maxReruns = 5
maxCallDepth = 2

[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
thinkingLevel = "medium"
authType = "api_key"

[cloud]
provider = "ecs"
awsRegion = "us-east-1"
ecsCluster = "al-cluster"
ecrRepository = "123456789012.dkr.ecr.us-east-1.amazonaws.com/al-images"
executionRoleArn = "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"
taskRoleArn = "arn:aws:iam::123456789012:role/al-default-task-role"
subnets = ["subnet-abc123"]
```

VPS production

```toml
[model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
thinkingLevel = "medium"
authType = "api_key"

[cloud]
provider = "vps"
host = "5.6.7.8"
sshUser = "root"
```