Custom Agent Node Guide
Overview
The Custom Agent Node runs a Custom Agent — a saved AI agent you configure outside the workflow under Automation → Custom Agents — on the data flowing into the node. The agent has its own LLM provider, model, system prompt, and toolbelt (MCP server tools plus built-in platform tools); the node simply tells the agent what to do this run via a per-run prompt and forwards the agent's structured output downstream.
Unlike the Scripting Agent Node and Data Transformation Agent Node, which generate Python on every run, the Custom Agent Node runs a live tool-calling agent loop — the model thinks, calls a tool, observes, thinks again, and so on until it writes its final output.
Before you use this node, read the Custom Agents feature guides to set up at least one provider and one agent.
Use Cases
- Reconnaissance with external tools — let an agent run subdomain enumeration, DNS lookups, and HTTP probing via an MCP server, then write a structured asset list.
- Threat triage — feed findings from an upstream Operation Node into an agent that ranks severity and produces a prioritised remediation plan.
- Investigation across third-party APIs — wire up MCP servers for Jira, Datadog, or your in-house tooling and let the agent pull context from each before deciding what to do.
- Adaptive analysis — the agent decides which tool to call based on the input data, rather than following a hardcoded script.
- Structured output for downstream automation — pin the agent to a JSON output schema and feed the result into an Integration Node (e.g. create Jira tickets, post to Slack, write to a SIEM).
- Report generation — pair an agent with the built-in generate_pdf tool to produce branded PDFs as part of a workflow.
How the Custom Agent Node Works
When the node executes, the platform:
- Loads the selected agent by agent_id — provider, model, system prompt, associated MCP servers, selected tools, execution parameters, and the optional output schema.
- Authenticates to the LLM provider using the agent's encrypted API key.
- Connects to the MCP servers the agent uses, binding the tools the agent has been granted access to. For OAuth MCP servers, the calling user's OAuth token is used.
- Adds the built-in internal tools: write_output_file, file_write, think, http_request, generate_pdf.
- Sends two things to the model: the agent's static system prompt, and the per-run Prompt you set on this node.
- Runs the tool-calling loop — capped at the agent's Max steps — until the agent calls write_output_file (which exits the loop) or hits the cap.
- Validates the output against the agent's optional JSON output schema, and writes the result to the node's output path so downstream nodes can consume it.

If the agent finishes the loop without calling write_output_file, the runtime nudges it up to twice with a reminder, then fails the node with a clear error.
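The steps above can be pictured roughly as follows. This is an illustrative sketch only, not the platform's actual runtime code; every name in it (run_agent_loop, call_model, run_tool, the reply fields) is hypothetical:

```python
# Illustrative sketch of the node's tool-calling loop; all names are hypothetical.
def run_agent_loop(agent, per_run_prompt, input_data, max_steps):
    messages = [
        {"role": "system", "content": agent.system_prompt},   # static, from the agent
        {"role": "user", "content": f"{per_run_prompt}\n\nInput:\n{input_data}"},
    ]
    nudges = 0
    step = 0
    while step < max_steps:
        reply = agent.call_model(messages)                    # think
        if reply.tool_name == "write_output_file":            # terminal tool: exit the loop
            return reply.tool_args["content"]
        if reply.tool_name is None:                           # stopped without output
            if nudges == 2:                                   # already nudged twice: fail the node
                raise RuntimeError("no output produced: write_output_file was never called")
            nudges += 1
            messages.append({"role": "user",
                             "content": "Reminder: finish by calling write_output_file."})
            continue
        result = agent.run_tool(reply.tool_name, reply.tool_args)  # call a tool, observe
        messages.append({"role": "tool", "content": result})       # feed the result back
        step += 1
    raise RuntimeError("max steps reached")
```

The key property is that write_output_file is the only clean exit: everything else either consumes a step or burns one of the two nudges.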
Creating a Custom Agent Node
Basic Setup
- Drag a Custom Agent Node from the node palette onto your workflow canvas.
- Connect it to one or more upstream nodes that produce the data the agent should work on (Input, Operation, Integration, Script, etc.).
- (Optional) Connect it to one or more downstream nodes that should consume the agent's output (Integration Node, another Agent Node, Output Node).
- In the node's configuration panel:
- Pick an Agent from the dropdown.
- Write a per-run Prompt describing this run's task.
Configuration Options
Node Properties
| Property | Required | Description |
|---|---|---|
| Name | No | A friendly label shown on the canvas. |
| Prompt | Yes | The user message sent to the agent on this run. Use it to describe the task, not the agent's personality (which lives on the agent's system prompt). |
| Agent | Yes | Which Custom Agent to run. The dropdown lists every agent in your organisation. |
If the dropdown shows No agents available, follow the Manage agents link at the bottom of the panel — you need to create at least one agent before this node can run. The link opens the Custom Agents page in a new tab.
Where the rest of the configuration lives. The provider, model, system prompt, temperature, max tokens, max steps, MCP servers, tool selection, and output schema are all set on the agent itself, not on the node. To change any of those, edit the agent from Automation → Custom Agents → Agents. The change applies to the next workflow run.
Writing Effective Per-Run Prompts
The agent's system prompt defines the role, tone, constraints, and expected output shape. The node's Prompt field defines the specific task for this run.
Keep the two responsibilities clean:
| System prompt (on the agent) | Per-run Prompt (on the node) |
|---|---|
| "You are a vulnerability triage analyst. End each run by calling write_output_file with severity-grouped findings." | "Triage the findings in the input file. Pay extra attention to internet-exposed RCE bugs." |
| "You are a recon specialist. Use only the subfinder and dnsx tools. End with a JSON list of subdomains." | "Enumerate subdomains for acme.example. Include only subdomains resolving to public IPs." |
| "You are a report writer. Generate a PDF using the blue theme." | "Produce an executive summary of this week's vulnerability scan results." |
Prompt Best Practices
- Describe the task, not the agent. Personality, tools, and output format belong on the agent's system prompt — keep them out of the per-run prompt.
- Reference the input. The agent receives the data flowing in from upstream nodes; explicitly mention what's in it (e.g. "the JSON file in the input is the vulnerability scan output").
- State any run-specific constraints. Scope, target lists, thresholds, and exclusions go here — not in the agent's static system prompt.
Example Per-Run Prompts
Triage
Triage the findings in the input. Group by severity and produce a
remediation plan. Flag anything with a public exploit available as
critical, regardless of CVSS.
Reconnaissance
Enumerate subdomains for the domain in the input file. Filter out
subdomains containing 'test', 'dev', or 'staging'. Return only those
that resolve to public IPs.
Multi-tool investigation
Investigate the alert in the input. Pull related Jira tickets via
the Atlassian MCP, check Datadog logs around the alert timestamp,
and write a JSON summary with affected services and a recommended
next action.
Integration with Other Nodes
Upstream Node Compatibility
The Custom Agent Node can consume output from:
- Input Nodes — files uploaded by users
- Operation Nodes — security tool outputs (subfinder, nmap, etc.)
- Integration Nodes — data pulled from external services
- Script / Scripting Agent / Data Transformation Agent Nodes — pre-processed data
- Other Custom Agent Nodes — chained agents, where one agent's output feeds the next
Downstream Node Usage
The agent's output (the JSON written by write_output_file) can be
consumed by:
- Integration Nodes — ship the result to Jira, Slack, OpenCTI, CrowdStrike, etc.
- Other Agent / Script Nodes — chain further processing.
- Output Nodes — save the final artefact.
If the agent's output schema is set, downstream consumers can rely on the JSON shape; otherwise, treat the output as free-form JSON.
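For instance, suppose a triage agent's schema requires four severity buckets (a hypothetical schema; define whatever shape your workflow needs). A downstream consumer can then depend on the required keys being present, sketched here with a minimal required-keys check rather than full JSON Schema validation:

```python
import json

# Hypothetical JSON output schema for a severity-grouped triage agent.
# The real schema is whatever you set on the agent; this is illustrative.
OUTPUT_SCHEMA = {
    "type": "object",
    "required": ["critical", "high", "medium", "low"],
    "properties": {
        sev: {"type": "array", "items": {"type": "object"}}
        for sev in ("critical", "high", "medium", "low")
    },
}

def missing_required(schema, payload):
    """Minimal check: which required keys are absent (not full JSON Schema validation)."""
    return [k for k in schema.get("required", []) if k not in payload]

agent_output = json.loads('{"critical": [], "high": [], "medium": [], "low": []}')
print(missing_required(OUTPUT_SCHEMA, agent_output))  # → []
```

When the schema is enforced on the agent, this check always passes downstream; without a schema, treat every key access as optional.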
Performance Considerations
Run time and cost
- First step (model inference + first tool call) usually dominates wall time.
- Wall time scales with the number of tool calls the agent decides to make. Heavy MCP-tool agents often take 30–120 seconds.
- Each tool call costs additional model tokens, because the result is fed back into the conversation. Keep the agent's tool list focused on what it actually needs.
- Setting the agent's max_steps lower is the simplest worst-case cost cap.
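To see why max_steps caps cost effectively: each step re-sends the whole conversation, which has grown by one tool result, so worst-case prompt tokens grow roughly quadratically with steps. A back-of-envelope sketch (the token figures are made up for illustration):

```python
def worst_case_prompt_tokens(base_tokens, tokens_per_tool_result, max_steps):
    """Rough upper bound on cumulative prompt tokens across a run.

    Each step re-sends the conversation, which has grown by one tool
    result per prior step, so totals grow roughly quadratically.
    Illustrative arithmetic only; real counts depend on the provider.
    """
    total = 0
    for step in range(max_steps):
        total += base_tokens + step * tokens_per_tool_result
    return total

# Halving max_steps from 20 to 10 cuts the worst case by roughly two-thirds:
print(worst_case_prompt_tokens(2000, 500, 20))  # 135000
print(worst_case_prompt_tokens(2000, 500, 10))  # 42500
```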
Streaming
Streaming is enabled by default at the runtime level so high
max_tokens settings work even on providers (notably Anthropic) that
require streaming for long outputs. You don't need to configure
this; it's transparent.
Best Practices
Workflow Design
- Keep system prompt vs. per-run prompt clean. Personality on the agent, task on the node.
- Use an output schema on the agent if the next node consumes the JSON.
- Watch the chain depth. Chaining Custom Agent Nodes into Custom Agent Nodes works, but each layer adds tokens, time, and a place for the agent to mis-format output. Two or three levels deep is normal; more usually means a single agent with better tools would be simpler.
- Pair with the Integration Node for write-paths. Let the agent produce a clean JSON; let the Integration Node do the actual delivery (create Jira issue, post to Slack, etc.). Don't ask the agent to do both.
Agent Selection
- Build small, role-specific agents rather than one giant do-everything agent. A focused triage agent and a focused recon agent will both outperform a single multi-purpose agent at lower cost.
- Pick the cheapest model that works. Run an agent on a small model first; only escalate to a flagship model if the cheap one produces unusable output.
Troubleshooting
| Issue | Resolution |
|---|---|
| Dropdown shows "No agents available" | Create at least one agent in Automation → Custom Agents → Agents. |
| Validation error: agent_id missing | Pick an agent in the node configuration panel. |
| Validation error: prompt missing | Type a per-run prompt; this is required even when the agent has a strong system prompt. |
| Run fails with "max steps reached" | The agent needed more reasoning iterations than allowed. Bump Max steps on the agent. |
| Run fails with "no output produced" | The agent ended without calling write_output_file. Add an explicit closing instruction to the agent's system prompt: "End by calling write_output_file with the result as JSON." |
| Run fails with output schema validation error | The agent produced output that didn't match the agent's JSON schema. Loosen the schema, or strengthen the system prompt to describe the expected shape. |
| MCP tool call fails | Check the MCP server's status on Automation → Custom Agents → MCP Servers. For OAuth MCP, ensure the calling user is connected. |
| Provider error / 401 / 403 | The provider's API key is invalid or revoked. Update the API key on the provider record. |
| Different users get different results | Expected for OAuth MCP servers — each user uses their own token, so authorisation scope can differ. |
Example Configurations
Example 1 — Subdomain Recon
Agent: Recon Agent (Anthropic Claude, system prompt = recon
specialist, MCP server: internal recon, tools: subfinder, dnsx,
httpx).
Per-run Prompt:
Enumerate subdomains for the domain in the input file. Skip any
subdomains containing 'test', 'dev', or 'staging'. Resolve each one
and include only entries that resolve to public IPv4 addresses.
End by calling write_output_file with a JSON object:
{ "subdomains": [ { "name": "...", "ip": "..." } ] }.
Downstream: Integration Node creating a Jira ticket per subdomain group.
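To illustrate the hand-off, a downstream Script Node consuming Example 1's output might group subdomains by network before the Integration Node creates tickets. A hypothetical sketch, using the JSON shape from the prompt above with made-up data:

```python
import json
from collections import defaultdict

# Sample of the JSON shape Example 1's agent writes (data is illustrative).
agent_output = json.loads("""
{ "subdomains": [ { "name": "api.acme.example", "ip": "203.0.113.10" },
                  { "name": "www.acme.example", "ip": "203.0.113.11" },
                  { "name": "vpn.acme.example", "ip": "198.51.100.7" } ] }
""")

# Group by /24 network so the Integration Node can open one ticket per group.
groups = defaultdict(list)
for entry in agent_output["subdomains"]:
    network = ".".join(entry["ip"].split(".")[:3]) + ".0/24"
    groups[network].append(entry["name"])

print(dict(groups))
# {'203.0.113.0/24': ['api.acme.example', 'www.acme.example'],
#  '198.51.100.0/24': ['vpn.acme.example']}
```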
Example 2 — Vulnerability Triage to Slack
Agent: Vulnerability Triager (OpenAI gpt-…, system prompt = triage analyst, MCP server: threat-intel; output schema = severity-grouped remediation plan).
Per-run Prompt:
Triage the findings in the input. Treat anything CVSS ≥ 9.0 with a
public PoC as critical. Group by severity. Include the top 3
remediation steps for each severity bucket.
Downstream: Data Transformation Agent Node → Slack Integration Node, posting an executive alert.
Example 3 — PDF Executive Report
Agent: Report Writer (Anthropic Claude, system prompt = report
writer using generate_pdf with blue theme).
Per-run Prompt:
Produce an executive PDF report from the scan results in the input.
Include sections: Executive Summary, Top 5 Risks, Remediation
Roadmap (next 30 days). Use the blue theme.
End by calling write_output_file with a JSON summary of the report
sections you generated.
Downstream: Output Node persisting the generated PDF as the workflow artefact.
Updated: 2026-05-04