- You choose which agent you want to use (Gemini CLI, OpenCode, GitHub Copilot CLI, and more).
- You keep credentials and model choice inside that agent.
- tally stays a linter first — fast and deterministic — and uses AI only when you explicitly opt in.
How it works
tally treats AI AutoFix as a normal part of its existing fix pipeline:

- A rule detects a violation and attaches a SuggestedFix marked as async.
- tally builds a prompt containing the Dockerfile text and structured rule evidence.
- tally runs your configured agent via ACP over stdio.
- The agent returns a unified diff patch targeting the exact Dockerfile bytes from the prompt.
- tally validates the patch: parses it, re-lints the result, and checks invariants.
- If valid, the patch is applied. If not, tally skips the fix and continues linting.
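The last two steps, validating and then applying or skipping, can be sketched as follows. The toy lint rule and helper names here are invented for illustration; this is not tally's real implementation:

```python
# Illustrative sketch of the accept/reject decision for an AI-proposed fix.

def lint(dockerfile: str) -> list[str]:
    """Toy rule: flag FROM lines pinned to the ':latest' tag."""
    return [
        f"line {n}: avoid ':latest'"
        for n, line in enumerate(dockerfile.splitlines(), start=1)
        if line.startswith("FROM") and ":latest" in line
    ]

def accept_fix(original: str, proposed: str) -> bool:
    """Re-parse and re-lint the patched text; skip the fix on any failure."""
    # Invariant check: the result must still look like a Dockerfile.
    if not proposed.lstrip().startswith("FROM"):
        return False
    # Re-lint: the patch must actually remove a violation and not add new ones.
    return len(lint(proposed)) < len(lint(original))

before = "FROM python:latest\nRUN pip install requests\n"
after = "FROM python:3.12-slim\nRUN pip install requests\n"
print(accept_fix(before, after))      # True: violation fixed, invariants hold
print(accept_fix(before, "RUN x\n"))  # False: invariant check fails
```

The point of the loop is that a rejected patch is never fatal: tally simply skips the fix and keeps linting.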
Quick start
Pick an ACP agent
Choose an ACP-capable CLI agent. Any of these work out of the box:
- Gemini CLI (native ACP)
- OpenCode (native ACP)
- GitHub Copilot CLI (native ACP)
- Cline CLI v2 (native ACP)
- Kiro CLI (native ACP)
- Docker agent (native ACP)
Enable AI in .tally.toml
Create or update your `.tally.toml` to turn AI on and tell tally how to launch your agent. With Gemini CLI, disabling MCP servers keeps latency low: `--allowed-mcp-server-names` is an allowlist, so passing a name you haven't configured (such as `none`) effectively disables all MCP servers. tally doesn't provide any MCP servers to the agent today, so enabling MCP usually just adds startup and latency overhead.

Recommended setup (low latency)
Dockerfiles are a mature domain that most modern models understand well. For AI fixes you usually don't need external tools or context servers; you want fast, predictable transformations. Recommended:

- A fast or smaller model with solid general reasoning.
- Disable agent-side tool integrations (MCP servers) unless you know you need them.
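Putting those recommendations together, a minimal `.tally.toml` might look like the sketch below. The `[ai]` keys match the configuration reference; the exact Gemini CLI flags (`--experimental-acp`, `--allowed-mcp-server-names`) are assumptions that depend on your installed agent version, so check your agent's own docs.

```toml
[ai]
enabled = true
# argv for the ACP agent, spoken over stdio.
# The flags below are illustrative; check your agent's own docs.
command = ["gemini", "--experimental-acp", "--allowed-mcp-server-names", "none"]
timeout = "90s"          # per-fix budget for the ACP round trip
redact-secrets = true    # best-effort secret scrubbing before prompting
```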
Configuration reference
Config file (.tally.toml)
All AI settings live under `[ai]`:

| Setting | Default | Description |
|---|---|---|
| `ai.enabled` | `false` | Master kill-switch for AI features |
| `ai.command` | (empty) | ACP agent argv (stdio); if empty, AI fixes can't run |
| `ai.timeout` | `"90s"` | Per-fix timeout for the ACP interaction |
| `ai.max-input-bytes` | `262144` | Maximum prompt size to send to the agent |
| `ai.redact-secrets` | `true` | Redact obvious secrets in prompts (best-effort) |
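To give a sense of what `ai.redact-secrets = true` implies, a best-effort redaction pass can be sketched in a few lines of Python. The patterns below are assumptions for illustration, not tally's actual rules:

```python
import re

# Hypothetical secret patterns (illustrative, not tally's real list).
PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*=\s*\S+"),
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # shape of a GitHub personal access token
]

def redact(prompt: str) -> str:
    """Replace anything matching a known secret pattern before sending."""
    for pattern in PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("ENV API_KEY=supersecret123"))  # → ENV [REDACTED]
```

Redaction is best-effort by design; keep real secrets out of Dockerfiles regardless of this setting.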
Environment variables
CLI flags
Supported ACP agents
Native ACP agents
| Agent | Link |
|---|---|
| Gemini CLI | agentclientprotocol.com/agents/gemini-cli |
| OpenCode | opencode.ai/docs/acp/ |
| GitHub Copilot CLI | docs.github.com/en/copilot/reference/acp-server |
| Kiro CLI | kiro.dev/docs/cli/acp/ |
| Cline CLI v2 | docs.cline.bot/cline-cli/acp-editor-integrations |
| Docker agent | docker.github.io/docker-agent/features/acp/ |
| QwenCode | qwenlm.github.io/qwen-code-docs |
Zed-maintained adapters
| Agent | Adapter |
|---|---|
| Claude Code | agentclientprotocol.com/agents/claude-code |
| OpenAI Codex CLI | agentclientprotocol.com/agents/codex |
Security and privacy
tally adds multiple guardrails for AI fixes:

- Explicit opt-in: AI is off unless you set `ai.enabled = true`.
- Unsafe gating: AI fixes require `--fix-unsafe` in addition to `--fix`.
- Minimal capabilities: tally advertises no filesystem and no terminal capabilities via ACP.
- Secret redaction: prompts are best-effort redacted before being sent to the agent (controlled by `ai.redact-secrets`).
- Strict output contract: the agent must return a small, targeted diff patch that applies cleanly to the exact Dockerfile bytes tally sent.
- Validation loop: tally re-parses, re-lints, and checks runtime invariants before accepting any proposed change.
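The strict output contract means the agent's entire reply is a small unified diff against the bytes tally sent. A well-formed response might look like this (the contents are illustrative):

```diff
--- Dockerfile
+++ Dockerfile
@@ -1,2 +1,2 @@
-FROM python:latest
+FROM python:3.12-slim
 RUN pip install requests
```

Because the patch must apply cleanly to the exact input bytes, any drift, hallucinated context lines, or extra prose around the diff causes the fix to be rejected rather than applied.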
Troubleshooting: “Skipped N fixes”
Common reasons a fix is skipped:

| Reason | Fix |
|---|---|
| `--fix` not passed | Add `--fix` to your command |
| `--fix-unsafe` not passed | AI fixes always require `--fix-unsafe` |
| `--fix-rule` set, but the rule didn't trigger | The rule had no violations for this Dockerfile |
| `tally/prefer-multi-stage-build` not triggering | This rule only fires for Dockerfiles with exactly one `FROM` |
| Agent timed out | Increase `--ai-timeout` or check stderr for the error message |
| Agent failed | tally prints the reason on stderr and keeps stdout clean for JSON/SARIF output |
Why ACP instead of API keys
Many tools bolt AI onto a linter by asking for an OpenAI or Anthropic API key. That approach comes with trade-offs:

- Provider lock-in: the linter becomes a mini "AI platform" that must track models, pricing, retries, and auth.
- Secret sprawl: API keys end up in dotfiles, CI secrets, and team docs.
- Enterprise friction: organizations often standardize on a specific gateway, proxy, or provider policy.
- Inconsistent experience: your editor agent knows your preferences, but your linter uses a completely different stack.