Custom Agents

Overview

Custom Agents let you build your own AI agents inside the Zynap platform — each one tied to an LLM provider you control, given a system prompt that defines its role, and equipped with tools from external MCP (Model Context Protocol) servers plus a small set of built-in platform tools.

Once an agent is saved, it becomes selectable inside the NINA workflow builder via the Custom Agent Node, which sends a per-run prompt to the agent and writes the structured output back to the workflow.

The feature lives under Automation → Custom Agents in the dashboard sidebar and is split into three tabs:

Tab            Purpose
────────────   ──────────────────────────────────────────────────────────────
Providers      Register LLM providers and store their API keys.
MCP Servers    Connect to external MCP servers that expose tools.
Agents         Compose named agents from a provider, model, system prompt,
               and selected MCP tools.

How the pieces fit together

┌──────────────┐      ┌──────────────┐      ┌──────────────┐
│ Provider     │      │ MCP Server   │      │ Internal     │
│ (OpenAI,     │      │ (Atlassian,  │      │ Tools        │
│ Anthropic,   │      │ Datadog,     │      │ (file_write, │
│ Gemini,      │      │ GitHub MCP,  │      │ http_        │
│ OpenRouter,  │      │ any custom   │      │ request,     │
│ DeepSeek)    │      │ MCP server)  │      │ …)           │
└──────┬───────┘      └──────┬───────┘      └──────┬───────┘
       │                     │                     │
       │ provides            │ provides            │ always
       │ model + key         │ external tools      │ available
       │                     │                     │
       └─────────────────────┼─────────────────────┘
                             │
                     ┌───────────────┐
                     │ Agent         │
                     │ ───────────   │
                     │ system prompt │
                     │ provider+model│
                     │ MCP servers   │
                     │ tool subset   │
                     │ output schema │
                     └───────┬───────┘
                             │ referenced by agent_id
                             │
                  ┌─────────────────────┐
                  │ Custom Agent Node   │ ← per-run prompt set on the canvas
                  │ (in workflow)       │
                  └─────────────────────┘

The provider supplies the LLM and its API key. MCP servers supply external tools (e.g. read a Jira issue, trigger a Datadog page, search a codebase). Each agent picks one provider+model and zero or more MCP servers, then optionally narrows down which tools from each server it is allowed to call. The Custom Agent Node in a workflow references the saved agent by ID and adds a per-run task prompt.
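The composition above can be pictured as a small sketch. All field names here are illustrative, not the platform's actual storage schema:

```python
# Hypothetical shapes for the three building blocks described above.
# Field names are illustrative, not Zynap's actual schema.

provider = {
    "name": "openai-prod",
    "type": "openai",      # one of: openai, anthropic, gemini, openrouter, deepseek
    "api_key": "sk-...",   # encrypted at rest, never returned to the browser after save
}

mcp_server = {
    "name": "github-mcp",
    "transport": "streamable_http",           # or "sse"
    "auth": {"mode": "bearer", "token": "..."},
}

agent = {
    "name": "triage-agent",
    "provider": "openai-prod",                # one provider + model per agent
    "model": "gpt-4o",
    "system_prompt": "You triage incoming bug reports.",
    "mcp_servers": [
        # per-agent tool selection: only a subset of the server's tools
        {"server": "github-mcp", "tools": ["search_issues", "get_issue"]},
    ],
    # optional JSON shape enforced on the agent's final output
    "output_schema": {"type": "object", "required": ["summary"]},
}
```

Note how the agent only *references* its provider and MCP servers by name; credentials stay with the provider and server records, never inside the agent itself.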

The pieces have a strict dependency order — you'll want to set them up in this sequence the first time:

  1. Add at least one Provider so the platform has an LLM to call. See Providers.
  2. (Optional) Connect MCP Servers if your agent needs external tools. See MCP Servers.
  3. Create an Agent that picks a provider, a model, a system prompt, and any MCP servers/tools you want it to use. See Agents.
  4. Drop a Custom Agent Node onto a workflow and select your agent. See the Custom Agent Node guide.
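The last step wires a saved agent into a workflow. A minimal sketch of what that node carries (field names and the templating syntax are hypothetical):

```python
# Hypothetical shape of a Custom Agent Node on the workflow canvas.
# The saved agent is referenced by ID; only the task prompt is per-run.
custom_agent_node = {
    "type": "custom_agent",
    "agent_id": "agt_123",   # references the saved agent by ID
    # the per-run task prompt set on the canvas (templating syntax illustrative)
    "prompt": "Summarise the incoming ticket and draft a reply.",
}
```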

Authentication and security

  • Provider API keys are encrypted at rest and never returned to the browser after save. Editing a provider always requires re-entering the key.
  • MCP server credentials (bearer tokens, basic-auth passwords, custom headers) are encrypted at rest and never returned to the browser after save.
  • OAuth MCP servers use the MCP Authorization spec — discovery, Dynamic Client Registration, and PKCE all happen server-side. The user only sees a normal "Connect" button that opens the provider's consent screen in a popup. Tokens are scoped per-user; revoking is a one-click action.
  • Per-user OAuth tokens are not shared between users in the same organisation; each user authorises themselves, and the agent runtime uses the calling user's token at execution time.
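The PKCE portion of that server-side OAuth flow can be illustrated with a short sketch. This is the standard RFC 7636 S256 challenge derivation, nothing Zynap-specific:

```python
import base64
import hashlib
import secrets

# Generate a high-entropy code_verifier (RFC 7636 allows 43-128 characters).
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# Derive the S256 code_challenge that goes into the authorization request.
digest = hashlib.sha256(code_verifier.encode()).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# The authorization request carries code_challenge + code_challenge_method=S256;
# the later token request proves possession by sending the original code_verifier.
```

Because the platform performs this exchange server-side, the code_verifier never reaches the browser; the user only interacts with the provider's consent screen.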

Feature highlights

  • Five LLM providers: OpenAI, Anthropic, Gemini, OpenRouter, DeepSeek.
  • Two MCP transports: SSE (older) and Streamable HTTP (modern).
  • Five MCP authentication modes: None, Bearer token, Basic auth, Custom headers, OAuth MCP.
  • Five built-in internal tools: write_output_file, file_write, think, http_request, generate_pdf — available to every agent with no setup.
  • Per-agent tool selection: pick a subset of an MCP server's tools rather than handing the agent everything.
  • Optional output schema: enforce a JSON shape on the agent's final output, useful for downstream automation.
  • Tunable execution: per-agent temperature, max tokens, and max reasoning steps.
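To illustrate the output-schema idea from the list above, here is a minimal sketch that enforces a declared JSON shape on an agent's final output using only the standard library. The schema format and field names are illustrative; the platform's actual enforcement mechanism may differ:

```python
import json

# Illustrative output schema: required keys and their expected JSON types.
OUTPUT_SCHEMA = {
    "summary": str,
    "severity": str,
    "ticket_ids": list,
}

def validate_output(raw: str) -> dict:
    """Parse the agent's final output and enforce the declared shape."""
    data = json.loads(raw)
    for key, expected_type in OUTPUT_SCHEMA.items():
        if key not in data:
            raise ValueError(f"missing required field: {key}")
        if not isinstance(data[key], expected_type):
            raise ValueError(f"field {key!r} should be {expected_type.__name__}")
    return data

result = validate_output(
    '{"summary": "DB outage", "severity": "high", "ticket_ids": ["OPS-12"]}'
)
```

Enforcing a shape like this is what makes the agent's output safe to feed into downstream workflow nodes without per-run inspection.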

See also

  • Providers — registering LLM providers and managing API keys.
  • MCP Servers — connecting external tool servers, including OAuth.
  • Agents — building and configuring agents.
  • Custom Agent Node — using an agent inside a workflow.

Updated: 2026-05-04