config.toml lives at the root of your Action Llama project. All sections and fields are optional — sensible defaults are used for anything you omit. If the file doesn’t exist at all, an empty config is assumed.
Full Annotated Example
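The example below assembles every section from the field reference into a single file. All values are illustrative defaults or sample overrides; the model names, webhook source name, and telemetry endpoint are assumptions, not required values.

```toml
# Top-level runtime limits
maxReruns = 10              # stop after 10 consecutive al-rerun requests
maxCallDepth = 3            # A calls B calls C = depth 2
workQueueSize = 100         # queued work items per agent
scale = 8                   # project-wide cap on concurrent runners
resourceLockTimeout = 1800  # lock TTL in seconds
historyRetentionDays = 14   # prune run history after two weeks

# Named models, referenced by name in agent SKILL.md frontmatter
[models.primary]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
authType = "api_key"
thinkingLevel = "medium"

[models.backup]
provider = "google"
model = "gemini-2.0-flash-exp"
authType = "api_key"

# Local Docker container settings
[local]
image = "al-agent:latest"
memory = "4g"
cpus = 2
timeout = 900

# Gateway HTTP server
[gateway]
port = 8080

# Webhook sources (the source name "github-prod" is arbitrary)
[webhooks.github-prod]
type = "github"
credential = "MyOrg"   # maps to github_webhook_secret:MyOrg

# Observability
[telemetry]
enabled = true
provider = "otel"
endpoint = "http://localhost:4318"
serviceName = "action-llama"
samplingRate = 0.25
```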
Field Reference
Top-level fields
| Field | Type | Default | Description |
|---|---|---|---|
| maxReruns | number | 10 | Maximum consecutive reruns an agent may request via al-rerun before the run stops |
| maxCallDepth | number | 3 | Maximum depth of agent-to-agent call chains (A calls B calls C = depth 2) |
| workQueueSize | number | 100 | Maximum queued work items (webhook events + agent calls) per agent when all runners are busy. Can be overridden per-agent with maxWorkQueueSize in the agent's config.toml. |
| scale | number | (unlimited) | Project-wide cap on total concurrent runners across all agents |
| resourceLockTimeout | number | 1800 | Default lock TTL in seconds. Locks expire automatically after this duration unless refreshed via heartbeat. See Resource Locks. |
| historyRetentionDays | number | 14 | Number of days to retain run history and webhook receipts in the local SQLite stats database. Older entries are pruned automatically. |
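Since every field is optional, a project only sets the values it wants to change. For instance, to allow deeper call chains and cap total concurrency (values here are illustrative):

```toml
maxCallDepth = 4   # allow A -> B -> C -> D
scale = 4          # at most 4 concurrent runners across all agents
```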
[models.<name>] — Named Models
Define models once in config.toml, then reference them by name in each agent’s SKILL.md frontmatter. Agents list model names in priority order — the first is the primary model, and the rest are fallbacks tried automatically when the primary is rate-limited or unavailable.
| Field | Type | Required | Description |
|---|---|---|---|
| provider | string | Yes | LLM provider: "anthropic", "openai", "groq", "google", "xai", "mistral", "openrouter", or "custom" |
| model | string | Yes | Model ID (e.g. "claude-sonnet-4-20250514", "gpt-4o", "gemini-2.0-flash-exp") |
| authType | string | Yes | Auth method: "api_key", "oauth_token", or "pi_auth" |
| thinkingLevel | string | No | Thinking budget: "off", "minimal", "low", "medium", "high", "xhigh". Only applies to Anthropic models with reasoning support. Ignored for other providers. |
config.toml:
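A sketch of two named models (the names "fast" and "backup" are arbitrary). An agent whose SKILL.md frontmatter lists them in that order would use fast as its primary model and fall back to backup when fast is rate-limited or unavailable:

```toml
[models.fast]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
authType = "api_key"
thinkingLevel = "low"

[models.backup]
provider = "google"
model = "gemini-2.0-flash-exp"
authType = "api_key"
```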
[local] — Docker Container Settings
Controls local Docker container isolation. These settings apply only to agents using the default container runtime — they are ignored for agents using the host-user runtime.
| Field | Type | Default | Description |
|---|---|---|---|
| image | string | "al-agent:latest" | Base Docker image name |
| memory | string | "4g" | Memory limit per container (e.g. "4g", "8g") |
| cpus | number | 2 | CPU limit per container |
| timeout | number | 900 | Default max container runtime in seconds. Individual agents can override this with timeout in their config.toml. See agent timeout. |
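For example, a project running heavier agents might raise the container limits above the defaults (values illustrative):

```toml
[local]
image = "al-agent:latest"
memory = "8g"
cpus = 4
timeout = 1800   # 30 minutes per container run
```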
[gateway] — HTTP Server
The gateway starts automatically when Docker mode or webhooks are enabled. It handles health checks, webhook reception, credential serving (local Docker only), resource locking, and the shutdown kill switch.
| Field | Type | Default | Description |
|---|---|---|---|
| port | number | 8080 | Port for the gateway HTTP server |
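If port 8080 is already taken on the host, move the gateway elsewhere:

```toml
[gateway]
port = 9090
```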
[webhooks.*] — Webhook Sources
Named webhook sources that agents can reference in their webhook triggers. Each source defines a provider type and an optional credential for signature validation.
| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Provider type: "github", "sentry", or "linear", or "mintlify" |
config.toml:
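A sketch of two sources (the source names "github-prod" and "sentry-alerts" are arbitrary), one signed and one unsigned:

```toml
[webhooks.github-prod]
type = "github"
credential = "MyOrg"   # validates HMAC signatures against github_webhook_secret:MyOrg

[webhooks.sentry-alerts]
type = "sentry"        # no credential: webhook signatures are not validated
```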
[telemetry] — Observability
Optional OpenTelemetry integration.
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | false | Enable or disable telemetry collection |
| provider | string | "none" | Telemetry provider: "otel" or "none" |
| endpoint | string | — | OpenTelemetry collector endpoint URL (required when provider = "otel") |
| serviceName | string | — | Service name reported to the collector |
| headers | table | — | Additional HTTP headers sent with telemetry requests (e.g. auth tokens) |
| samplingRate | number | — | Sampling rate between 0.0 (none) and 1.0 (all traces) |
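A sketch of a full telemetry section. The endpoint, service name, and Authorization header value are illustrative assumptions:

```toml
[telemetry]
enabled = true
provider = "otel"
endpoint = "http://otel-collector:4318"
serviceName = "action-llama"
samplingRate = 0.1   # trace 10% of runs
headers = { Authorization = "Bearer my-collector-token" }
```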
Minimal Examples
Anthropic with Docker (typical dev setup)
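A minimal sketch of this setup. Only one named model and the default Docker settings are needed; everything omitted falls back to defaults (the model name "default" is arbitrary):

```toml
[models.default]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
authType = "api_key"

[local]
memory = "4g"
timeout = 900
```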
VPS production (environment file)
Server configuration lives in an environment file (~/.action-llama/environments/<name>.toml), not in config.toml. See VPS Deployment for full setup.
Cloud Run Jobs runtime (environment file)
To run agents as Cloud Run Jobs instead of local Docker containers, add a [cloud] section to your environment file. The scheduler still runs wherever you host it; only agent execution is offloaded to GCP.
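A sketch of where the section lives. The [cloud] field names below (project, region) are illustrative assumptions, not documented fields; see the environment file reference for the actual schema:

```toml
# ~/.action-llama/environments/production.toml (illustrative filename)
[cloud]
project = "my-gcp-project"   # assumed field name
region = "us-central1"       # assumed field name
```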