Open source and MIT Licensed. Use Claude, OpenAI, or other models to:
- Automate your dev workflow: agents handle issues, PRs, and monitoring
- Automate communications: have an agent summarize the last week of product changes and tweet it out

Documentation Index
Fetch the complete documentation index at: https://docs.actionllama.org/llms.txt
Use this file to discover all available pages before exploring further.
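The index is plain text, so a script or agent can pull it with an ordinary HTTP GET. A minimal sketch using only Python's standard library (no actionllama-specific API is assumed):

```python
from urllib.request import urlopen

# Fetch the machine-readable documentation index (llms.txt) and print it.
with urlopen("https://docs.actionllama.org/llms.txt") as resp:
    print(resp.read().decode("utf-8"))
```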
- First Steps: zero to running agent in minutes
- Guides: deploy, scale, optimize, and orchestrate
- Concepts: understand agents, scheduling, and locks
- Reference: every CLI flag, config field, and command
Key features
- Docker isolation — each run gets its own container with only the credentials it needs
- Git-native — define agents in a git repo, add custom ones, share them across teams
- BYOM — bring your own model (Anthropic, OpenAI, Groq, Google Gemini, xAI, Mistral, OpenRouter, or any custom provider)
- Deploy anywhere — run locally for development or deploy to Cloud Run for production
- Webhook + cron — react to GitHub issues, Sentry alerts, Linear tickets, or poll on a schedule
- Multi-agent — agents can call other agents with `call_agent` and collect results with `check_call` (see the sketch after this list)
- Resource locks — coordinate parallel instances with automatic deadlock detection
- Web dashboard — live agent status and streaming logs in your browser
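To make the multi-agent bullet concrete, here is a sketch of the dispatch-then-collect pattern that `call_agent` and `check_call` enable. Only the two tool names come from the feature list above; in actionllama they are tools the model invokes during a run, not a Python API, so the stubs below are hypothetical stand-ins that let the pattern execute as a self-contained demo.

```python
import itertools
import time

# Hypothetical stand-ins for the model-facing call_agent / check_call tools.
# They exist only so the fan-out / fan-in pattern below runs end to end.
_counter = itertools.count(1)
_pending: dict[str, str] = {}

def call_agent(agent: str, prompt: str) -> str:
    """Start a sub-agent run and return a call ID (assumed behavior)."""
    call_id = f"call-{next(_counter)}"
    _pending[call_id] = f"[{agent}] finished: {prompt!r}"  # fake result
    return call_id

def check_call(call_id: str) -> str | None:
    """Return the sub-agent's result once finished, else None (assumed)."""
    return _pending.get(call_id)

# Fan out: start two sub-agents in parallel.
ids = [
    call_agent("triage", "Label today's new GitHub issues"),
    call_agent("summarizer", "Summarize this week's merged PRs"),
]

# Fan in: poll until every sub-agent has reported back.
results: dict[str, str] = {}
while len(results) < len(ids):
    for call_id in ids:
        if call_id not in results and (r := check_call(call_id)) is not None:
            results[call_id] = r
    time.sleep(0.1)

for call_id in ids:
    print(call_id, "->", results[call_id])
```

Because `call_agent` returns immediately with a call ID, a parent agent can fan out several sub-agents at once and gather their results later, rather than waiting on each one in sequence.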
Philosophy
- Agents are infrastructure: they should be versioned, repeatable, and deployable. Agents work together, so they should be bundled together.
- Stateless by default: state is easy to add to a stateless system. Dynamic agent context strategies can be added if desired.
- Bring your own models: there are many models out there, each appropriate to a different task. You should be able to use any of them.
- Minimize the harness and lean on the model: models are improving constantly, which is why this framework lets you program with prompts. The harness should focus on orchestration only.