Agent Variants
The standard agent loop (call LLM → execute tools → repeat) can be customized in several ways.
Custom Approval Logic
Control when tools require approval using `mode` and conditional edges:
Context Management
Long-running agents accumulate large contexts. Two techniques help. Compaction summarizes the conversation when it exceeds a threshold:
Oversight and Auditing
Add a secondary agent that reviews the primary agent's actions before execution. The `response_tool` feature creates structured output you can branch on: the tool defines a JSON schema for the expected output format, and a common pattern uses a `choice` (enum) property and a `value` (explanation) property. Response data is available via `nodes.<execute_tools_node>.response_data.<tool_name>`. If the audit fails, use `.value` to get the guidance and inject it for the primary agent to try again.
When to use: High-stakes tasks, compliance requirements, or when a cheaper model should validate an expensive model’s decisions.
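As a sketch, an audit gate along these lines might be wired like this. The node names, the `when` edge field, and the schema layout are assumptions; only the `response_data` access path comes from the text above.

```yaml
# Illustrative audit gate; exact schema may differ.
nodes:
  auditor:
    response_tool:
      name: audit
      schema:
        type: object
        properties:
          choice: { type: string, enum: ['pass', 'fail'] }
          value:  { type: string }   # guidance when the audit fails
edges:
  - from: execute_audit
    to: proceed
    when: nodes.execute_audit.response_data.audit.choice == 'pass'
  - from: execute_audit
    to: primary_agent   # inject .value as guidance and retry
    when: nodes.execute_audit.response_data.audit.choice == 'fail'
```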
Tool Restrictions
Control available tools based on mode using `tool_filter`. Filters can name tag groups (`['tag:default']`), specific tools (`['view', 'grep']`), or exclusions.
When to use: Planning modes (read-only), sandboxed exploration, role-specific tool access.
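A minimal sketch of mode-based restrictions; the mode names and field nesting here are assumptions, while the filter values come from the text above.

```yaml
# Illustrative only: exact layout may differ.
modes:
  plan:
    tool_filter: ['view', 'grep']    # read-only exploration
  execute:
    tool_filter: ['tag:default']     # full default toolset
```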
Structured Output with Response Tools
Response tools force the LLM to produce structured output you can programmatically use for routing, classification, or data extraction. Simple options-based response:
- Execute tools required — you must execute the tool call to access `response_data`
- Access path — `nodes.<execute_node>.response_data.<tool_name>.<field>`
- LLM must call it — the response tool is the only way to complete; the LLM cannot just respond with text
- Use for routing — perfect for decisions that control workflow branching
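A sketch of a simple options-based response tool; the node names and schema layout are assumptions, and only the access path pattern is taken from the points above.

```yaml
# Illustrative sketch; exact schema may differ.
nodes:
  classify:
    response_tool:
      name: pick_route
      schema:
        type: object
        properties:
          choice: { type: string, enum: ['escalate', 'auto_close'] }
  execute_response:
    # Executing the tool call is what populates response_data; branch on
    # nodes.execute_response.response_data.pick_route.choice afterwards.
```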
Pipelines
Sequential multi-node workflows where each node builds on previous results.
Running Steps After Agent Completes
Use edges to route from an agent's completion to the next node:
Chaining Outputs
Reference previous node outputs using `nodes.<node_id>.<field>`. Available fields include `message.text`, `exit_code`, `stdout`, `stderr`, `path`, and `tool_results`.
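For instance, a two-step chain might look like this sketch. The node names and the `{{ … }}` interpolation syntax are assumptions; only the `nodes.<node_id>.<field>` access pattern comes from the text above.

```yaml
# Illustrative: run tests after the agent finishes, then summarize the output.
edges:
  - from: implement_agent
    to: run_tests
nodes:
  run_tests:
    command: pytest
  summarize:
    prompt: |
      The test run exited with code {{ nodes.run_tests.exit_code }}.
      Output:
      {{ nodes.run_tests.stdout }}
```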
Conditional Next Steps
Branch based on results. Conditions can test tool calls (`size(nodes.X.tool_calls) > 0`), loop output conditions, or custom outputs.
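A sketch of such a branch, assuming edges carry a CEL condition (the `when` field name is a guess):

```yaml
# Illustrative: route depending on whether the agent called any tools.
edges:
  - from: agent
    to: apply_changes
    when: size(nodes.agent.tool_calls) > 0
  - from: agent
    to: done
    when: size(nodes.agent.tool_calls) == 0
```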
Verification Loops
Repeat while a condition is true using `loop`. Key fields: `while` (continue condition with iteration limits), `iter.iteration` (current iteration, 0-indexed), and `outputs.*` (current iteration's results in the `while` condition).
Outputs requirement: when using `outputs.*` in the `while` condition, you must declare an `outputs` section in the inline workflow that maps inner node outputs to named fields. This creates a clear contract between the loop body and the `while` condition. Without an `outputs` section, the `while` condition receives raw inner node outputs keyed by node ID (for example, `outputs.call_llm.tool_calls` instead of `outputs.tool_calls`), which is fragile and makes refactoring difficult.
Do-while semantics: the loop always runs at least once. After each iteration, `iter.iteration` increments before the `while` condition is checked.
Iteration counting: in the loop body, `iter.iteration` is 0-indexed (0, 1, 2…). In the `while` check, it reflects completed iterations (1 after the first, 2 after the second). Use `iter.iteration < N` to run exactly N iterations.
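Putting the pieces together, a verification loop might be sketched as follows. The field nesting is illustrative; the `while`, `outputs`, and `iter.iteration` semantics are as described above.

```yaml
# Illustrative: fix-and-test until green, at most 5 iterations.
nodes:
  fix_until_green:
    loop:
      while: outputs.exit_code != 0 && iter.iteration < 5
      workflow:
        nodes:
          fix:
            prompt: Fix the failing tests.
          test:
            command: pytest
        outputs:
          # Contract between the loop body and the while condition.
          exit_code: nodes.test.exit_code
```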
When to use: Test-driven development, retry-while-failing, iterative refinement. For fan-out across a fixed list of items where iterations don’t depend on each other, see Parallel Loops instead.
Multi-Agent Coordination
Multiple agents working together, either in parallel or alternating.
Parallel Execution with Join
Launch multiple agents simultaneously, then wait for all to complete. A `join: all` node waits until all incoming edges complete. Use worktrees for isolated working directories.
When to use: Competitive implementations, exploring multiple approaches, reducing wall-clock time.
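A sketch of the fan-out-and-join shape; the node names and the exact `join` placement are assumptions:

```yaml
# Illustrative: two agents run concurrently, a join waits for both.
nodes:
  approach_a:
    prompt: Implement with strategy A.
  approach_b:
    prompt: Implement with strategy B.
  merge:
    join: all        # proceeds only after both incoming edges complete
edges:
  - { from: approach_a, to: merge }
  - { from: approach_b, to: merge }
```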
Parallel Loops
When you have a collection of items and want to run the same sub-workflow over each one concurrently, use a parallel loop instead of fanning out with hand-written nodes. Set `parallel: true` on a loop node and supply an `items` expression — Reliant launches one iteration per item, each in its own thread, all started simultaneously rather than one after another.
Reach for parallel loops when:
- You want to fan out the same analysis over many files, tickets, or rows.
- Iterations are independent and don’t share conversation state.
- You already have a fixed list (or map) of items to process. Parallel loops require `items` and don't support a `while` condition — they always iterate over a known collection.
- `parallel: true` switches the loop from sequential to concurrent execution.
- `items` is a CEL expression evaluating to a list or a map. It is required when `parallel` is true.
- Inside the loop body, `iter.item` is the current element, `iter.index` is its position (0-indexed), and `iter.key` is the map key when iterating a map.
- `key` is a CEL expression evaluated per iteration to produce the output map key. It defaults to `string(iter.index)`. Set it explicitly when you want stable, meaningful keys (and to disambiguate duplicate items).
- Results are accumulated in `nodes.analyze_each_file._results`, a map keyed by the evaluated `key`. Counts are available as `nodes.analyze_each_file._completed`, `_failed`, and `_iterations`.
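Collecting those fields into one example, a parallel loop might be sketched like this; the surrounding node layout and prompt wording are assumptions:

```yaml
# Illustrative: analyze each file concurrently, one iteration per item.
nodes:
  analyze_each_file:
    loop:
      parallel: true
      items: ['auth.go', 'db.go', 'api.go']   # CEL expression or literal list
      key: iter.item                          # stable result keys per file
      workflow:
        nodes:
          analyze:
            prompt: Analyze the file named in iter.item.
# Afterwards: nodes.analyze_each_file._results (map keyed by file name),
# plus the ._completed, ._failed, and ._iterations counts.
```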
Failure policies
The `on_failure` field controls what happens when an iteration errors out:
- `continue` (the default): the failed iteration is recorded in `_failed`, but the rest keep running. The loop completes with whatever partial results succeeded.
- `fail_fast`: the first failure cancels in-flight iterations and the loop returns immediately with an error.
- `fail_all`: all iterations are allowed to finish, and then the loop reports failure if any iteration failed.
Use `continue` for best-effort batch jobs, `fail_fast` when one bad item invalidates the whole run, and `fail_all` when you want a complete picture of which items failed before reporting.
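The policy sits on the loop itself; in this fragment the `items` source is illustrative:

```yaml
loop:
  parallel: true
  items: nodes.list_items.tool_results   # illustrative source
  on_failure: fail_fast                  # or: continue (default), fail_all
```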
Per-iteration threads
Each parallel iteration runs in its own thread — that's what makes them independent. The default thread mode for parallel loops is `new`, which is the opposite of the usual node default of `inherit`. You can override this with an explicit `thread` block on the loop node, which applies to every iteration:
`mode: fork` is useful when each iteration should start from a shared seed thread (for example, a thread that already contains background context) without each iteration writing back into that seed.
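A sketch of overriding the per-iteration thread mode; whether `fork` needs an explicit source-thread field is not specified here, so only the mode is shown:

```yaml
# Illustrative: every iteration forks from the shared seed thread.
loop:
  parallel: true
  items: ['a', 'b', 'c']
  thread:
    mode: fork    # default for parallel loops would be: new
```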
Restrictions
The validator rejects parallel loop configurations that don't make sense, so it's worth knowing the rules up front:
- `items` is required when `parallel: true`. There's no `while`-style parallel loop.
- `while` is not allowed alongside `parallel: true`. Parallel loops always iterate over a fixed collection.
- `yield` is not allowed alongside `parallel: true`. You can't pause a parallel loop mid-iteration to ask the user for input.
- `key` is only meaningful when `parallel: true`. On a sequential loop it has no effect.
- `on_failure` must be one of `continue`, `fail_fast`, or `fail_all` — any other value is rejected.
Performance: inject attachments
If the loop's `thread.inject` block carries attachments with `path:` or `data:` sources, those attachments are resolved per iteration. With `path:`, that means N file reads for N iterations; with `data:`, the bytes are duplicated into N inject payloads. For large files this adds up quickly.
If the same attachment is needed by every iteration, save it once and reference it by ID instead:
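The exact reference syntax isn't shown here, so this sketch only illustrates the shape of the idea; the `id:` source and the attachment-ID field are assumptions:

```yaml
# Illustrative: avoid N path: reads by saving the attachment once upstream.
nodes:
  fan_out:
    loop:
      parallel: true
      items: ['a', 'b', 'c']
      thread:
        inject:
          attachments:
            - id: nodes.save_report.attachment_id   # hypothetical ID reference
```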
When you don’t need parallelism
If you just want iteration without concurrency — for example, retrying until a check passes or threading the result of one iteration into the next — use a sequential loop instead. Omit `parallel`, supply a `while` condition, and (optionally) an `items` expression. See Verification Loops for the sequential pattern, and the Loop reference for every field in one place.
When to use: Batch analysis over a list of files or records, parallel drafts of the same prompt, fan-out review of pull request changes — anything where iterations are independent and you want wall-clock speedup.
Turn-Taking (Proposer/Critic)
Alternating agents on the same thread see each other's work:
Thread Isolation vs Shared Context
See Threads for complete documentation on thread modes (`new`, `inherit`, `fork`).
The `inject` option adds a message when entering the node:
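For example, a critic node sharing the proposer's thread might inject its instructions on entry; the node name and the `message` field are illustrative:

```yaml
nodes:
  critic:
    thread:
      mode: inherit    # see the proposer's messages on the same thread
      inject:
        message: Review the proposal above and list concrete problems.
```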
Different Models Per Agent
Use `groups` to configure different settings for different roles:
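A sketch, assuming groups are referenced by name from nodes and that the model is one of the configurable settings; the model IDs are placeholders:

```yaml
groups:
  proposer:
    model: large-model      # placeholder model ID
  critic:
    model: small-model      # placeholder; a cheaper validator
nodes:
  propose:
    group: proposer
  review:
    group: critic
```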