When `scale` > 1 is set on an agent, multiple instances run concurrently. Without coordination, two instances might pick up the same GitHub issue, review the same PR, or deploy the same service at the same time. Resource locks prevent this.
## Why Locks Exist
Locks let concurrent agent instances claim exclusive ownership of a resource before working on it. If another instance already holds the lock, the agent skips that resource and moves on.

## How It Works
- Before working on a shared resource, the agent runs `rlock "github://acme/app/issues/42"`.
- If the lock is free, the agent gets it and proceeds.
- If another instance already holds the lock, the agent gets back the holder’s name and skips that resource.
- When done, the agent runs `runlock "github://acme/app/issues/42"`.
The commands run directly from the agent’s SKILL.md workflow — no need to think about HTTP endpoints or authentication.
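Put together, the claim–work–release flow above might look like this in a skill script. The issue URI and the work itself are placeholders, and this sketch assumes `rlock` exits non-zero when the lock is held elsewhere (the command table below says it "fails" in that case):

```shell
# Try to claim exclusive ownership of the issue before touching it.
if rlock "github://acme/app/issues/42"; then
  # We hold the lock: do the actual work here.
  echo "claimed issue 42, working on it"
  # ... triage, comment, open a PR, etc. ...
  runlock "github://acme/app/issues/42"   # release when done
else
  # Another instance holds the lock; skip and move on.
  echo "issue 42 already claimed, skipping"
fi
```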
## Commands
| Command | Description |
|---|---|
| `rlock "<uri>"` | Acquire an exclusive lock. Fails if another instance holds it. |
| `runlock "<uri>"` | Release a lock. Only the holder can release. |
| `rlock-heartbeat "<uri>"` | Reset the TTL on a held lock. |
## Resource Key URIs
Lock keys use URI format. Use a scheme that identifies the resource type, and a path that uniquely identifies the specific resource:

| Pattern | Example |
|---|---|
| `github://owner/repo/issues/number` | `rlock "github://acme/app/issues/42"` |
| `github://owner/repo/pr/number` | `rlock "github://acme/app/pr/17"` |
| `deploy://service-name` | `rlock "deploy://api-prod"` |
## TTL and Expiry
Locks expire automatically after 30 minutes by default. This prevents deadlocks if an agent crashes or hangs without releasing its lock. The timeout is configurable via `resourceLockTimeout` in `config.toml` (value in seconds).
For work that takes longer than the timeout, use `rlock-heartbeat` to extend the TTL. Each heartbeat resets the clock to another full TTL period. If the agent forgets to heartbeat and the lock expires, another instance can claim it.
## Heartbeat
During long-running work, periodically run `rlock-heartbeat` to keep the lock alive:
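A minimal sketch of heartbeating in the background during a long task. The service URI and the 10-minute interval are illustrative; any interval comfortably under the TTL works:

```shell
URI="deploy://api-prod"
rlock "$URI" || exit 0   # someone else is deploying; skip

# Heartbeat every 10 minutes in a background subshell while the work runs.
( while sleep 600; do rlock-heartbeat "$URI"; done ) &
HB=$!

# ... long-running deploy steps ...

kill "$HB"          # stop the heartbeat loop
runlock "$URI"      # release the lock
```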
## Multiple Locks and Deadlock Detection
An agent instance can hold multiple locks simultaneously when working across related resources. However, this introduces the possibility of circular waits — agent A holds lock X and waits for lock Y, while agent B holds lock Y and waits for lock X. The gateway detects these cycles automatically. When an `rlock` request would create a circular wait in the wait-for graph, it returns a possible deadlock error with the cycle path instead of blocking forever. The agent can then release its held locks and retry.
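One way to react, and to make circular waits less likely in the first place, is to acquire locks in a stable (sorted) order and back out fully on any failure. This is a sketch, not a prescribed pattern, and it assumes `rlock` signals the deadlock case with a non-zero exit:

```shell
# Acquire a set of locks in sorted order so two instances that need
# the same resources always contend in the same sequence.
LOCKS=$(printf '%s\n' \
  "github://acme/app/pr/17" \
  "github://acme/app/issues/42" | sort)

HELD=""
for uri in $LOCKS; do
  if rlock "$uri"; then
    HELD="$HELD $uri"
  else
    # Possible deadlock or contention: back out everything we hold.
    for h in $HELD; do runlock "$h"; done
    exit 0   # caller can retry later
  fi
done

# ... work across both resources, then release ...
for h in $HELD; do runlock "$h"; done
```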
## Authentication
Each container gets a unique per-run secret (the same one used for the shutdown API). Lock requests are authenticated with this secret, so only the container that acquired a lock can release or heartbeat it. There is no way for one agent instance to release another’s lock — it must wait for the TTL to expire.

## Auto-release on Exit
When a container exits — whether it finishes successfully, hits an error, or times out — all of its locks are released automatically by the scheduler. You don’t need to worry about cleanup in error paths.

## Example in SKILL.md
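A hypothetical SKILL.md fragment showing how the commands slot into agent instructions (the skill name and steps are invented for illustration):

```markdown
## Triage issues

For each open issue:

1. Run `rlock "github://acme/app/issues/<number>"`.
   - If it fails, another instance is already on it. Skip to the next issue.
2. Investigate and leave a triage comment.
3. Run `runlock "github://acme/app/issues/<number>"`.
```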
## Configuration
| Setting | Location | Default | Description |
|---|---|---|---|
| `resourceLockTimeout` | `config.toml` | `1800` (30 min) | Default TTL for locks in seconds |
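For instance, raising the TTL to an hour (the value here is hypothetical) in `config.toml`:

```toml
# config.toml
resourceLockTimeout = 3600  # lock TTL in seconds (default: 1800)
```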
## See Also
- Agent Commands — Locks — full command syntax and response JSON
- Scaling Agents — guide on scaling with locks