Each agent lives in a directory under `agents/<name>/`. Configuration is split across two files:
| File | Purpose | Portable? |
|---|---|---|
| `SKILL.md` | Portable metadata (name, description, license, compatibility) + markdown instructions | Yes — travels with the skill |
| `config.toml` | Runtime config (credentials, models, schedule, webhooks, hooks, params, scale, timeout) | No — project-local |
A skill is the portable artifact (SKILL.md plus, optionally, a Dockerfile). An agent is a skill instantiated in your project with local runtime config. When you add a skill with `al add`, it becomes an agent. An optional Dockerfile can also live in the agent directory for custom container images; it may be provided by the skill author or customized per-project.
```
agents/<name>/
├── SKILL.md        # Portable metadata + instructions
├── config.toml     # Project-local runtime config
└── Dockerfile      # Optional — custom container image (container runtime only)
```

SKILL.md

The YAML frontmatter contains portable metadata. The markdown body contains the agent’s instructions.
```md
---
name: dev-agent
description: Solves GitHub issues by writing and testing code
license: MIT
compatibility: ">=0.5.0"
---

# Instructions

You are a dev agent. Check for open issues labeled "agent" and fix them.

## Workflow

1. List open issues labeled "agent" in repos from `<agent-config>`
2. For each issue, clone the repo, create a branch, implement the fix
3. Open a PR and link it to the issue
```

SKILL.md Frontmatter Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | No | Human-readable name (defaults to directory name) |
| `description` | string | No | Short description of what the agent does |
| `license` | string | No | License identifier (e.g. `"MIT"`) |
| `compatibility` | string | No | Semver range for Action Llama compatibility |

config.toml

The per-agent `config.toml` contains project-specific runtime configuration. This file is created by `al add`, `al agent new`, or `al config`.
```toml
# Install origin — used by `al update` to pull upstream SKILL.md changes
source = "acme/dev-skills"

# Required: named model references from project config.toml [models.*]
# First in list is primary; rest are fallbacks tried on rate limits
models = ["sonnet", "haiku"]

# Required: credential types the agent needs at runtime
# Use "type" for default instance, "type:instance" for named instance
credentials = ["github_token", "git_ssh", "sentry_token"]

# Optional: cron schedule (standard cron syntax)
# Agent must have at least one of: schedule, webhooks
schedule = "*/5 * * * *"

# Optional: number of concurrent runs allowed (default: 1)
# When scale > 1, use LOCK/UNLOCK in your actions to coordinate
scale = 2

# Optional: max runtime in seconds (default: falls back to [local].timeout, then 900)
timeout = 600

# Optional: max queued work items for this agent (default: global workQueueSize)
# When all runners are busy, incoming events are queued up to this limit.
# Oldest events are dropped to make room for newer ones.
maxWorkQueueSize = 50

# Optional: webhook triggers (instead of or in addition to schedule)
[[webhooks]]
source = "my-github"
repos = ["acme/app"]
events = ["issues"]
actions = ["labeled"]
labels = ["agent"]

[[webhooks]]
source = "my-sentry"
resources = ["error", "event_alert"]

[[webhooks]]
source = "my-linear"
events = ["issues"]
actions = ["create", "update"]
labels = ["bug"]

[[webhooks]]
source = "my-mintlify"
events = ["build"]
actions = ["failed"]

# Optional: hooks — shell commands that run before or after the LLM session
[hooks]
pre = [
  "gh repo clone acme/app /tmp/repo --depth 1",
  "curl -o /tmp/context/flags.json https://api.internal/v1/flags",
  "gh issue list --repo acme/app --label P1 --json number,title,body --limit 20 > /tmp/context/issues.json",
]
post = ["upload-artifacts.sh"]

# Optional: custom parameters injected into the agent prompt
[params]
repos = ["acme/app", "acme/api"]
triggerLabel = "agent"
assignee = "bot-user"
sentryOrg = "acme"
sentryProjects = ["web-app", "api"]

# Optional: runtime mode — "container" (default) or "host-user"
# Host-user runs the agent as a separate OS user via sudo, without Docker.
# Useful when the agent needs to run Docker commands itself.
[runtime]
type = "host-user"
run_as = "al-agent"           # OS user to run as (default: "al-agent")
```

config.toml Field Reference

| Field | Type | Required | Description |
|---|---|---|---|
| `source` | string | No | Git URL or GitHub shorthand for `al update`. Set automatically by `al add`. |
| `models` | string[] | Yes | Named model references from config.toml `[models.*]`. First is primary; rest are fallbacks tried automatically on rate limits. |
| `credentials` | string[] | Yes | Credential refs: `"type"` for default instance, `"type:instance"` for named instance. See Credentials. |
| `schedule` | string | No* | Cron expression for polling |
| `scale` | number | No | Number of concurrent runs allowed (default: 1). Set to 0 to disable the agent. Use lock skills in your actions to coordinate instances. See Resource Locks. |
| `timeout` | number | No | Max runtime in seconds. Falls back to `[local].timeout` in project config, then 900. See Timeout. |
| `maxWorkQueueSize` | number | No | Maximum queued work items when all runners are busy. Falls back to global `workQueueSize` (default: 20). Oldest events are dropped to make room for newer ones. |
| `webhooks` | array | No* | Array of webhook trigger objects. See Webhooks. |
| `hooks` | table | No | Pre/post hooks that run around the LLM session. See Hooks. |
| `params` | table | No | Custom key-value params for the agent prompt |
| `runtime` | table | No | Runtime mode configuration. See Runtime. |

*At least one of `schedule` or `webhooks` is required (unless `scale = 0`).
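
Putting the required pieces together, a minimal schedule-only agent config might look like this sketch. The model and credential names are illustrative and assumed to be defined in the project config:

```toml
models = ["sonnet"]            # assumes [models.sonnet] exists in project config
credentials = ["github_token"]
schedule = "0 * * * *"         # hourly
```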

Scale

The scale field controls how many instances of an agent can run concurrently.
  • Default: 1 (only one instance can run at a time)
  • Minimum: 0 (disables the agent — no runners, cron jobs, or webhook bindings are created)
  • Maximum: No hard limit, but consider system resources and model API rate limits

How it works

  1. Scheduled runs: If a cron trigger fires but all agent instances are busy, the scheduled run is skipped with a warning
  2. Webhook events: If a webhook arrives but all instances are busy, the event is queued (up to workQueueSize limit in global config, default: 100)
  3. Agent calls: If one agent calls another but all target instances are busy, the call is queued in the same work queue

Example use cases

  • Dev agent with scale = 3: Handle multiple GitHub issues simultaneously
  • Review agent with scale = 2: Review multiple PRs in parallel
  • Monitoring agent with scale = 1: Ensure only one instance processes alerts at a time
  • Disabled agent with scale = 0: Keep the config in the project but don’t run it

Resource considerations

Each parallel instance:
  • Uses a separate Docker container (or OS process in host-user mode)
  • Has independent logging streams
  • May consume LLM API quota concurrently
  • Uses system memory and CPU
See Scaling Agents for a guide on scaling with resource locks.

Timeout

The timeout field controls the maximum runtime for an agent invocation. When the timeout expires, the process is terminated with exit code 124.

Resolution order: agent config.toml `timeout` -> project config.toml `[local].timeout` -> 900 (default). This means you can set a project-wide default in `[local].timeout` and override it per-agent.

Examples

```toml
# Fast webhook responder
timeout = 300       # 5 minutes

# Medium-length task
timeout = 900       # 15 minutes

# Long-running agent
timeout = 3600      # 1 hour

# Omit timeout — uses [local].timeout or defaults to 900s
```
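
The project-wide fallback works like this: set `[local].timeout` once in the project config.toml, and any agent that omits `timeout` inherits it. A sketch:

```toml
# Project config.toml
[local]
timeout = 1200   # project-wide default: 20 minutes
```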

Hooks

Hooks run shell commands before and after the LLM session. Pre-hooks (hooks.pre) run after credentials are loaded but before the LLM session starts — use them for cloning repos, fetching data, or staging files. Post-hooks (hooks.post) run after the session completes — use them for cleanup, artifact upload, or reporting. See Dynamic Context for a guide on using hooks effectively.

How it works

  1. Commands run sequentially in the order they appear in config.toml
  2. Commands run inside the agent’s execution environment (container or host-user process) after credential/env setup
  3. Each command runs via /bin/sh -c "..."
  4. If any command exits non-zero, the run aborts with an error
  5. Credential env vars (GITHUB_TOKEN, GH_TOKEN, etc.) are available to hook commands

Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `hooks.pre` | string[] | No | Shell commands to run before the LLM session |
| `hooks.post` | string[] | No | Shell commands to run after the LLM session |

Examples

```toml
[hooks]
pre = [
  "gh repo clone acme/app /tmp/repo --depth 1",
  "gh issue list --repo acme/app --label P1 --json number,title,body --limit 20 > /tmp/context/issues.json",
  "curl -o /tmp/context/flags.json https://api.internal/v1/flags",
]
post = ["upload-artifacts.sh"]
```

Notes

  • Each hook has a 5-minute timeout
  • Hooks are bounded by the container-level timeout
  • Environment variables set inside hook commands do not propagate back to the agent’s process.env
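
To make the post-hook concrete, here is a hypothetical sketch of what an `upload-artifacts.sh` script could do. The artifact directory and archive path are assumptions, not paths defined by the al runtime:

```shell
#!/bin/sh
# upload-artifacts.sh: hypothetical post-hook sketch.
# ARTIFACT_DIR and ARCHIVE are assumptions, not al-defined paths.
set -eu

ARTIFACT_DIR="${ARTIFACT_DIR:-/tmp/context}"
ARCHIVE="${ARCHIVE:-/tmp/artifacts.tar.gz}"

# Ensure the directory exists so tar succeeds even on an empty run
mkdir -p "$ARTIFACT_DIR"
tar -czf "$ARCHIVE" -C "$ARTIFACT_DIR" .
echo "archived $ARTIFACT_DIR to $ARCHIVE"
# A real script would now ship $ARCHIVE somewhere, e.g. via curl or gh.
```

Remember the 5-minute per-hook timeout: keep any upload step well under that bound.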

Runtime

The `[runtime]` table controls how the agent process is launched. By default, agents run in Docker containers. The host-user runtime runs agents as a separate OS user on the host machine via `sudo -u`, without Docker.

Host-user mode is useful when agents need to run Docker commands themselves (Docker-in-Docker is insecure), or when you want lightweight isolation without container overhead.

```toml
[runtime]
type = "host-user"
run_as = "al-agent"
```

Fields

| Field | Type | Default | Description |
|---|---|---|---|
| `type` | string | `"container"` | Runtime mode: `"container"` (Docker) or `"host-user"` (OS user isolation) |
| `run_as` | string | `"al-agent"` | OS username to run the agent as. Only used when `type = "host-user"`. |

How host-user mode works

  1. The scheduler spawns `sudo -u <run_as> al _run-agent <agent> --project <dir>`
  2. Credentials are staged to a temp directory and chowned to the agent user
  3. Each run gets an isolated working directory at `/tmp/al-runs/<instance-id>/`
  4. Logs are written to `/tmp/al-runs/<instance-id>.log` (owned by the scheduler, not the agent)
  5. No Docker images are built for host-user agents

Setup

The agent OS user must exist and sudoers must be configured. Run `al doctor` to validate and auto-configure (Linux only):

```shell
al doctor
```

On Linux, `al doctor` will:
  • Create the OS user if it doesn’t exist (`useradd --system --shell /usr/sbin/nologin <run_as>`)
  • Add a sudoers rule allowing passwordless execution
On macOS, `al doctor` prints manual setup instructions.
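
Where automatic setup is unavailable, the manual steps are roughly the following. This is a sketch: the exact sudoers command list al requires is an assumption, so prefer the instructions `al doctor` prints:

```shell
# Create a locked-down system user for agents (Linux syntax)
sudo useradd --system --shell /usr/sbin/nologin al-agent

# Allow the scheduler's user to run al as that user without a password.
# Hypothetical rule: verify the command path against al doctor's output.
echo "$USER ALL=(al-agent) NOPASSWD: /usr/local/bin/al" | sudo tee /etc/sudoers.d/al-agent
```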

Limitations

  • No custom Dockerfiles — Dockerfile in the agent directory is ignored
  • No container filesystem isolation — the agent runs on the host filesystem
  • The [local] config section (memory, cpus, image) does not apply to host-user agents
  • needsGateway is false — the gateway is not started for host-user-only projects

Webhook Trigger Fields

Each entry in the webhooks array has the following fields:
| Field | Type | Required | Description |
|---|---|---|---|
| `source` | string | Yes | Name of a webhook source from the project’s config.toml (e.g. `"my-github"`) |
All filter fields below are optional. Omit all of them to trigger on everything from that source. See Webhooks for complete filter field tables per provider.

GitHub filter fields

| Field | Type | Description |
|---|---|---|
| `repos` | string[] | Filter to specific repos |
| `orgs` | string[] | Filter to specific organizations |
| `org` | string | Filter to a single organization |
| `events` | string[] | Event types: `issues`, `pull_request`, `push`, etc. |
| `actions` | string[] | Event actions: `opened`, `labeled`, `closed`, etc. |
| `labels` | string[] | Only trigger when issue/PR has these labels |
| `assignee` | string | Only trigger when assigned to this user |
| `author` | string | Only trigger for this author |
| `branches` | string[] | Only trigger for these branches |
| `conclusions` | string[] | Only for `workflow_run` events with these conclusions |
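
For example, the `branches` and `conclusions` filters can scope a trigger to failed CI runs on the default branch. A sketch, assuming a `my-github` source is defined in the project config:

```toml
[[webhooks]]
source = "my-github"
repos = ["acme/app"]
events = ["workflow_run"]
actions = ["completed"]
branches = ["main"]
conclusions = ["failure"]
```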

Sentry filter fields

| Field | Type | Description |
|---|---|---|
| `resources` | string[] | Resource types: `event_alert`, `metric_alert`, `issue`, `error`, `comment` |

Linear filter fields

| Field | Type | Description |
|---|---|---|
| `organizations` | string[] | Filter to specific Linear organizations |
| `events` | string[] | Linear event types: `issues`, `issue_comment`, etc. |
| `actions` | string[] | Event actions: `create`, `update`, `delete`, etc. |
| `labels` | string[] | Only when issue has these labels |
| `assignee` | string | Only when assigned to this user (email) |
| `author` | string | Only for this author (email) |

Mintlify filter fields

| Field | Type | Description |
|---|---|---|
| `projects` | string[] | Filter to specific Mintlify projects |
| `events` | string[] | Mintlify event types: `build`, etc. |
| `actions` | string[] | Event actions: `failed`, `succeeded`, etc. |
| `branches` | string[] | Only for these branches |

Model Configuration

The models field references named models defined in config.toml under [models.<name>]. List one or more model names — the first is the primary model, and subsequent entries are fallbacks tried automatically when the primary is rate-limited or unavailable.
```toml
models = ["sonnet", "haiku", "gpt4o"]
```
See Models for all supported providers, model IDs, auth types, thinking levels, and credential setup.
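
For orientation, a named model definition in the project config.toml might look like the following sketch. The field names are assumptions for illustration, not the documented schema:

```toml
# Hypothetical sketch of a named model in the project config.toml.
# "provider" and "model" are assumed field names; see Models for the real schema.
[models.sonnet]
provider = "anthropic"        # assumption
model = "claude-sonnet-4-5"   # assumption
```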