Progressive Discovery: Making CLI Tools Agent-Ready
The Problem
Command-line tools were designed for humans. When you run a configuration command, you typically get one of two experiences: an interactive wizard that walks you through prompts one at a time, or a strict non-interactive mode that fails with a cryptic error the moment a required option is missing.
Neither works well for AI agents.
Interactive mode traps the agent in a back-and-forth it can't navigate. Non-interactive mode gives it a binary pass/fail with no actionable context — the agent has to guess what went wrong, parse error messages, and retry blindly.
There's a better way.
Progressive Discovery
Progressive Discovery is a response pattern where a CLI tool, upon detecting an AI agent as its caller, returns structured, contextual feedback about what it already has, what it still needs, and how to get it. Instead of prompting or failing, the tool collaborates with the agent.
The key insight: an AI agent operates with partial context. It may already know some configuration values from the user's instructions or from the project it's working in. What it needs from the tool is not a prompt — it needs information to make decisions. The tool's job is to describe what's missing and provide enough context for the agent to either resolve it from what it knows or ask the user an intelligent question.
A Concrete Example: Payment Gateway Configuration
Consider a CLI tool that configures a payment gateway integration — something like setting up a Stripe or Bluefin connection for a merchant application.
The Human Experience
A human runs the setup command and gets a familiar interactive flow:
```
$ paytool configure

Welcome to PayTool configuration.

? Select environment: (Use arrow keys)
❯ sandbox
  production
? Enter your Merchant ID: ___
? Enter your API Key: ___
? Select default currency: (Use arrow keys)
❯ USD
  EUR
  GBP
? Enable webhook notifications? (Y/n): ___
? Webhook endpoint URL: ___

✓ Configuration saved to .paytool/config.json
```

This works for a human sitting at a terminal. But what happens when an AI agent needs to set this up?
The Agent Experience Today
Most tools offer a --no-interactive flag. The agent tries to supply everything upfront:
```
$ paytool configure --no-interactive --merchant-id "MCH_12345"
Error: Missing required option: --api-key
```

The agent retries, guessing at what else might be needed:
```
$ paytool configure --no-interactive --merchant-id "MCH_12345" --api-key "sk_test_xxx"
Error: Missing required option: --environment
```

And again. And again. Each round trip is a wasted call — no context, no guidance, just failure.
The Progressive Discovery Experience
With Progressive Discovery, the same tool detects the agent and shifts its behavior:
````
$ paytool configure --agent --merchant-id "MCH_12345"

# Configuration: incomplete

## Resolved

- **merchant_id**: `MCH_12345` (provided)

## Pending

### environment (required)

Deployment target. Use `sandbox` for testing, `production` for live transactions.
If this is a development or staging setup, use `sandbox`.

```json
{"field": "environment", "type": "enum", "options": ["sandbox", "production"]}
```

### api_key (required)

API authentication key for the selected environment.
Found in the merchant dashboard under **Settings → API Keys**.
Starts with `sk_test_` for sandbox or `sk_live_` for production.

```json
{"field": "api_key", "type": "string", "format": "sk_{environment}_[a-zA-Z0-9]{24}"}
```

### currency (optional, default: USD)

Default transaction currency.

```json
{"field": "currency", "type": "enum", "options": ["USD", "EUR", "GBP", "CAD", "AUD"], "default": "USD"}
```

### webhook_enabled (optional, default: false)

Enable webhook notifications for transaction events.

### webhook_url (optional, requires webhook_enabled: true)

HTTPS endpoint to receive webhook payloads.
Must be a publicly accessible HTTPS URL. For local development, consider using a tunneling service.
````

Now the agent has everything it needs to act intelligently. It reads the markdown and immediately understands the situation: environment and api_key are required, the rest have defaults. The prose explains where to find the API key and what format it takes. The JSON code blocks give the agent structured data it can map directly to command flags — valid enum options, format patterns, defaults. No parsing of a monolithic JSON object, no guessing at field semantics from key names alone.
The agent can now make a decision: Do I already have this information from the user's instructions or the project context? If the user said "set up the sandbox environment," the agent already knows the environment. If there's a .env file with an API key, the agent can use that. For anything truly unknown, the agent can ask the user a precise, informed question — not "what's your API key?" but "I need your sandbox API key. You can find it in the merchant dashboard under Settings → API Keys. It starts with sk_test_."
The agent fills in what it can and calls again:
```
$ paytool configure --agent \
  --merchant-id "MCH_12345" \
  --environment "sandbox" \
  --api-key "sk_test_a1b2c3d4e5f6g7h8i9j0k1l2"

# Configuration: complete

## Resolved
- **merchant_id**: `MCH_12345` (provided)
- **environment**: `sandbox` (provided)
- **api_key**: `sk_test_***` (provided)
- **currency**: `USD` (default)
- **webhook_enabled**: `false` (default)

Configuration saved to `.paytool/config.json`
```

Done in two calls, with full transparency about what was applied and how.
The Pattern
Progressive Discovery follows three principles:
1. Show the full picture, not one step at a time. Unlike interactive prompts that reveal requirements sequentially, Progressive Discovery returns the entire configuration surface in one response. The agent sees all required and optional fields, their types, constraints, defaults, and dependencies. This allows it to batch its decisions and minimize round trips.
2. Provide actionable context, not just validation errors. Each pending field includes its type, valid options, format constraints, human-readable descriptions, and hints about where to find the value. This transforms a missing field from a blocker into a solvable problem. The agent can reason about where to look — environment variables, project files, user instructions — or formulate a specific question for the user.
3. Acknowledge what's already resolved. The response always reflects back what the tool has already accepted. This gives the agent a clear picture of progress and prevents redundant work. It also surfaces the source of each value — whether it was explicitly provided, pulled from a default, or inferred from the environment.
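The three principles above can be sketched as a small data model. Here is a minimal TypeScript illustration, assuming a hypothetical `FieldSpec` descriptor; none of these names come from a real paytool API:

```typescript
// Hypothetical field descriptor covering all three principles:
// the full picture (every field listed), actionable context
// (type, options, hint), and resolution tracking (value + source).
type FieldSpec = {
  name: string
  required: boolean
  type: 'string' | 'enum' | 'boolean'
  options?: string[]                  // valid values for enums
  default?: string | boolean
  hint?: string                       // where to find the value
  dependsOn?: { field: string; equals: string | boolean }
}

type Resolved = { name: string; value: string; source: 'provided' | 'default' }

// Split the whole configuration surface into resolved and pending,
// so one response can show everything at once.
function partition(
  specs: FieldSpec[],
  values: Record<string, string | boolean>,
): { resolved: Resolved[]; pending: FieldSpec[] } {
  const resolved: Resolved[] = []
  const pending: FieldSpec[] = []
  for (const spec of specs) {
    if (values[spec.name] !== undefined) {
      resolved.push({ name: spec.name, value: String(values[spec.name]), source: 'provided' })
    } else if (spec.default !== undefined) {
      resolved.push({ name: spec.name, value: String(spec.default), source: 'default' })
    } else {
      pending.push(spec)
    }
  }
  return { resolved, pending }
}
```

A single `partition` pass gives the tool everything it needs to render both the `## Resolved` and `## Pending` sections of a response.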
Detecting the Agent
For Progressive Discovery to work, the tool needs to know when it's being called by an agent. There are two complementary approaches.
Environment-Based Detection
Most AI coding agents set identifiable environment variables. A tool can check for these:
```typescript
export function detectAgentEnvironment(): null | string {
  // Claude Code (CLI or extension)
  if (process.env.CLAUDECODE === '1') return 'claude-code'

  // Cursor IDE
  if (process.env.CURSOR_TRACE_ID) return 'cursor'

  // GitHub Copilot CLI
  if (process.env.GITHUB_COPILOT_TOKEN
    || process.env.COPILOT_AGENT_ENABLED === '1') return 'github-copilot'

  // Aider AI coding assistant
  if (process.env.AIDER_MODEL
    || process.env.AIDER_CHAT_HISTORY_FILE) return 'aider'

  // OpenCode AI terminal agent
  if (process.env.OPENCODE === '1') return 'opencode'

  return null
}
```

This is useful for automatic detection — the tool can switch to Progressive Discovery mode without the caller needing to do anything special.
The --agent Flag
Environment detection is helpful, but it's inherently fragile. New agents appear regularly, environment variables change, and some agents might not set any identifiable markers at all.
The more robust solution is explicit: offer an --agent flag.
```
$ paytool configure --agent
```

This is the recommended approach for tool authors. It's simple, self-documenting, and puts control in the agent's hands. Any AI agent — current or future — can use it without the tool needing to know about that specific agent. It also serves as an escape hatch for advanced human users who prefer structured output over interactive prompts.
In practice, you should support both: auto-detect when possible, but always accept the flag.
```typescript
export function isAgentMode(flags: ParsedFlags): boolean {
  // Explicit flag takes priority
  if (flags.agent) return true

  // Fall back to environment detection
  return detectAgentEnvironment() !== null
}
```

Implementation Guidelines
If you're building a CLI tool and want to support Progressive Discovery, here's what to keep in mind.
Use markdown as your primary response format, with JSON code blocks for structured datasets. LLMs are language models — they reason about prose and markdown natively. Rigid JSON-only responses waste tokens on syntax noise (brackets, quotes, escaping) that adds no semantic value for the agent. Write the response as readable markdown: describe what's resolved, what's pending, and provide context in natural language. When there's actual structured data the agent might need to iterate over or map to parameters — like a list of valid options, available resources, or enumerated values — wrap it in a JSON code block within the markdown. This gives the agent the best of both worlds: natural language for reasoning and structured data for programmatic use.
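As a sketch of this hybrid format, here is one way a single pending-field section could be rendered. The `PendingField` shape and the helper name are illustrative assumptions, not part of any specific tool:

```typescript
// Minimal sketch of one pending-field section: markdown prose for
// the model to reason about, plus a JSON code block for the
// machine-readable part. The PendingField shape is illustrative.
type PendingField = {
  name: string
  required: boolean
  description: string                 // prose context for the agent
  schema: Record<string, unknown>     // data the agent maps to flags
}

const fence = '`'.repeat(3)           // ``` built without literal backticks

function renderPending(field: PendingField): string {
  return [
    `### ${field.name} (${field.required ? 'required' : 'optional'})`,
    '',
    field.description,
    '',
    fence + 'json',
    JSON.stringify(field.schema),
    fence,
  ].join('\n')
}
```

Rendering each field independently keeps the prose and the structured schema next to each other, which is exactly what the agent needs to map a description to a concrete flag.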
Include field metadata generously. Type, format, description, hint, default, dependencies — the more context you provide, the fewer round trips the agent needs. Think of each pending field as a self-contained brief for the agent.
Distinguish required from optional. Let the agent know what must be provided to proceed versus what has sensible defaults. This allows it to complete configuration without asking the user about every optional setting.
Express dependencies explicitly. If field B only matters when field A has a certain value (like webhook_url requiring webhook_enabled: true), say so in the schema. This prevents the agent from asking the user unnecessary questions.
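For example, the webhook dependency above could be encoded directly in the field's JSON block. The `requires` key here is an illustrative convention, not a fixed schema:

```json
{"field": "webhook_url", "type": "string", "requires": {"webhook_enabled": true}}
```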
Keep the final call idempotent. The agent might call the command multiple times as it gathers values. Each call should accept all previously resolved values plus new ones, and return the updated state without side effects until the configuration is complete.
Support partial progress. Don't require all fields in a single call. Accept what the agent has now and report back what's still missing. The tool should be comfortable with incremental resolution.
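These two guidelines reduce to the same loop: merge newly supplied values with saved state, report what is still missing, and defer side effects until everything required is present. A minimal sketch, with all names assumed for illustration:

```typescript
// Illustrative resolution state: previously accepted values plus the
// list of required field names.
type State = { values: Record<string, string>; required: string[] }

// Merge newly supplied flags into the saved values and report status.
// Nothing is written until every required field is present, so
// repeating a call with the same input is safe.
function step(state: State, incoming: Record<string, string>): {
  state: State
  status: 'complete' | 'incomplete'
  missing: string[]
} {
  const values = { ...state.values, ...incoming }
  const missing = state.required.filter(name => values[name] === undefined)
  return {
    state: { ...state, values },
    status: missing.length === 0 ? 'complete' : 'incomplete',
    missing,
  }
}
```

The two-call paytool session above is exactly this loop run twice: the first `step` reports `incomplete` with the missing fields, the second reports `complete` and only then would the tool write the config file.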
Why This Matters
AI agents are becoming a primary interface for developer tools. They read documentation, they execute commands, they configure environments. But they're working with tools that were designed for a different kind of user — one who can read a prompt, glance at a help page, and type a response.
Progressive Discovery bridges this gap without breaking the human experience. A tool can support all three modes — interactive for humans, non-interactive for scripts, and progressive for agents — with the same underlying configuration logic. The only thing that changes is how requirements are communicated.
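A sketch of how that mode selection might look, using an assumed flag shape and a TTY check; the names here are illustrative:

```typescript
// One configuration core, three presentation modes.
type Mode = 'interactive' | 'non-interactive' | 'progressive'

function selectMode(
  flags: { agent?: boolean; noInteractive?: boolean },
  isTTY: boolean,
): Mode {
  if (flags.agent) return 'progressive'          // explicit opt-in wins
  if (flags.noInteractive || !isTTY) return 'non-interactive'
  return 'interactive'                            // human at a terminal
}
```

In a real CLI, `isTTY` would typically come from `process.stdout.isTTY`, and the environment-based detection shown earlier could also feed into the `progressive` branch.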
The pattern is simple: instead of asking or failing, describe what you need and why. Let the agent — which sits between the tool and the user — do what it's good at: reasoning about context, making decisions, and asking smart questions when it has to.
That's Progressive Discovery. Not a new protocol. Not a framework. Just a better way for tools to talk to agents.