
Sub-Agents

Decompose work across multiple specialized agents with a visible delegation log.

What is this?

Sub-agents are the canonical multi-agent pattern: a top-level supervisor LLM orchestrates one or more specialized sub-agents by exposing each of them as a tool. The supervisor decides what to delegate, the sub-agents do their narrow job, and their results flow back up to the supervisor's next step.

This is fundamentally the same shape as tool-calling, but each "tool" is itself a full-blown agent with its own system prompt and (often) its own tools, memory, and model.
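Framework aside, the shape fits in a few lines. This sketch uses deterministic stand-ins for the LLM-backed sub-agents (the names and handlers are illustrative, not part of any CopilotKit API): each sub-agent is registered under a tool name, and the supervisor "calls" it like any other tool.

```typescript
// Hypothetical sketch: a sub-agent is just a function registered under a
// tool name. Real sub-agents would be full LLM calls with their own
// system prompts; these stubs are deterministic stand-ins.
type SubAgent = (task: string) => string;

const subAgents: Record<string, SubAgent> = {
  research_agent: (task) => `facts about: ${task}`,
  writing_agent: (task) => `draft for: ${task}`,
  critique_agent: (task) => `critiques of: ${task}`,
};

// The supervisor's tool-call handler: route by tool name, run the
// sub-agent, and hand its output back as an ordinary tool result.
function dispatch(toolName: string, task: string): string {
  const agent = subAgents[toolName];
  if (!agent) throw new Error(`Unknown sub-agent: ${toolName}`);
  return agent(task);
}
```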

Live Demo: Built-in Agent (TanStack AI) — subagents. Open full demo →

When should I use this?

Reach for sub-agents when a task has distinct specialized sub-tasks that each benefit from their own focus:

  • Research → Write → Critique pipelines, where each stage needs a different system prompt and temperature.
  • Router + specialists, where one agent classifies the request and dispatches to the right expert.
  • Divide-and-conquer — any problem that fits cleanly into parallel or sequential sub-problems.

The rest of this page uses the Research → Write → Critique shape as its running example.
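That pipeline shape reduces to plain sequential composition. Here it is with deterministic stubs standing in for the real LLM-backed sub-agents (all names here are illustrative): each stage consumes the previous stage's output, exactly as tool results flow back through the supervisor's context.

```typescript
// Deterministic stubs standing in for LLM-backed sub-agents.
const research = (topic: string) => `• key fact about ${topic}`;
const write = (brief: string, facts: string) =>
  `Draft on "${brief}", grounded in: ${facts}`;
const critique = (draft: string) => `Critique: tighten the opening of "${draft.slice(0, 20)}..."`;

// The supervisor's plan as straight-line code: research feeds the writer,
// the writer's draft feeds the critic.
function researchWriteCritique(topic: string) {
  const facts = research(topic);
  const draft = write(topic, facts);
  const notes = critique(draft);
  return { facts, draft, notes };
}
```

In a real agent each stage is a tool call the supervisor chooses to make, so the order is decided by the model rather than hard-coded, but the data flow is the same.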

Setting up sub-agents

Each sub-agent is its own nested chat() call with its own model, its own system prompt, and (optionally) its own tools. Sub-agents don't share memory or tools with the supervisor; the supervisor only ever sees what the sub-agent returns.

backend/agent.ts — three sub-agents
import { z } from "zod";
import { chat, toolDefinition } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

// Each role becomes its own nested chat() with a dedicated system prompt.
// They don't share memory or tools with the supervisor — the supervisor
// only sees the role's return value via the delegate tool below.
//
// Tool names match the LangGraph Python reference agent (`subagents.py`):
//   research_agent, writing_agent, critique_agent
// This alignment is load-bearing: the D5 fixtures are recorded against
// the LGP agent's tool names, and aimock matches on tool name.
const subagentRoles = [
  {
    id: "research_agent",
    systemPrompt:
      "You are a research sub-agent. Given a topic, produce a concise " +
      "bulleted list of 3-5 key facts. No preamble, no closing.",
  },
  {
    id: "writing_agent",
    systemPrompt:
      "You are a writing sub-agent. Given a brief and optional source " +
      "facts, produce a polished 1-paragraph draft. Be clear and " +
      "concrete. No preamble.",
  },
  {
    id: "critique_agent",
    systemPrompt:
      "You are an editorial critique sub-agent. Given a draft, give " +
      "2-3 crisp, actionable critiques. No preamble.",
  },
] as const;

Keep sub-agent system prompts narrow and focused. The point of this pattern is that each one does one thing well. If a sub-agent needs to know the whole user context to do its job, that's a signal the boundary is wrong.

Exposing sub-agents as tools

The supervisor delegates by calling tools. Each tool is a thin wrapper around a nested chat() call that:

  1. Runs the sub-agent to completion on the supplied task string.
  2. Records the delegation into a delegations slot in shared agent state (so the UI can render a live log).
  3. Returns the sub-agent's final text as an ordinary tool result, which the supervisor sees on its next turn.
backend/agent.ts — supervisor tools
// imports and subagentRoles — identical to the previous snippet, omitted here

// Builder takes the parent run's AbortController so subagent `chat()` calls
// abort with the parent. Constructing tools at module-import time leaves them
// with their own fresh AbortController, which means a user cancel never reaches
// the in-flight subagent call — orphan async work, billed tokens, hung
// promises. Each parent run threads its controller through here.
// Each `<role>_agent` tool wraps a nested chat() call with the
// role's system prompt. The supervisor LLM "calls" these tools to
// delegate work; each invocation runs the matching subagent and returns
// its output for the supervisor's next step.
export function buildSubagentTools(parentAbortController: AbortController) {
  return subagentRoles.map((role) =>
    toolDefinition({
      name: role.id,
      description: `Delegate a task to the ${role.id.replace(/_/g, " ")}.`,
      inputSchema: z.object({
        task: z
          .string()
          .describe(`Task description for the ${role.id.replace(/_/g, " ")}`),
      }),
    }).server(async ({ task }) => {
      const text = await chat({
        adapter: openaiText("gpt-4o"),
        messages: [{ role: "user", content: task }],
        systemPrompts: [role.systemPrompt],
        abortController: parentAbortController,
        stream: false,
      });
      return { role: role.id, text };
    }),
  );
}

This is where CopilotKit's shared-state channel earns its keep: the supervisor's tool calls mutate delegations as they happen, and the frontend renders every new entry live.
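The bookkeeping behind those mutations is a pair of simple state transitions: append a "running" entry when the supervisor fires a delegate tool call, flip it to "complete" when the nested chat() resolves. A minimal sketch as pure functions — the entry shape mirrors the fields the log UI reads (id, sub_agent, task, status, result), but the exact field names and status values are assumptions, not a documented schema:

```typescript
// Hypothetical shape of one entry in the `delegations` slot of shared
// agent state. Field names mirror what the log component reads; the
// status values are assumed for this sketch.
interface Delegation {
  id: string;
  sub_agent: string;
  task: string;
  status: "running" | "complete";
  result: string;
}

// Append a new "running" entry when the supervisor fires a delegate tool call.
function startDelegation(
  delegations: Delegation[],
  id: string,
  subAgent: string,
  task: string,
): Delegation[] {
  return [
    ...delegations,
    { id, sub_agent: subAgent, task, status: "running", result: "" },
  ];
}

// Flip the matching entry to "complete" once the nested chat() resolves.
function finishDelegation(
  delegations: Delegation[],
  id: string,
  result: string,
): Delegation[] {
  return delegations.map((d) =>
    d.id === id ? { ...d, status: "complete", result } : d,
  );
}
```

Because both transitions produce a new array rather than mutating in place, every update flows through the shared-state channel as a clean state change the frontend can re-render against.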

Rendering a live delegation log

On the frontend, the delegation log is just a reactive render of the delegations slot. Subscribe with useAgent({ updates: [OnStateChanged, OnRunStatusChanged] }), read agent.state.delegations, and render one card per entry.

frontend/src/app/delegation-log.tsx — live log component
/**
 * Live delegation log — renders the `delegations` slot of agent state.
 *
 * Each entry corresponds to one sub-agent invocation. The list grows in
 * real time as the supervisor fans work out to its children; each
 * delegation is appended through agent state, and the UI re-renders
 * via the standard shared-state subscription.
 */
export function DelegationLog({ delegations, isRunning }: DelegationLogProps) {
  return (
    <div
      data-testid="delegation-log"
      className="w-full h-full flex flex-col bg-white rounded-2xl shadow-sm border border-[#DBDBE5] overflow-hidden"
    >
      <div className="flex items-center justify-between px-6 py-3 border-b border-[#E9E9EF] bg-[#FAFAFC]">
        <div className="flex items-center gap-3">
          <span className="text-lg font-semibold text-[#010507]">
            Sub-agent delegations
          </span>
          {isRunning && (
            <span
              data-testid="supervisor-running"
              className="inline-flex items-center gap-1.5 px-2 py-0.5 rounded-full border border-[#BEC2FF] bg-[#BEC2FF1A] text-[#010507] text-[10px] font-semibold uppercase tracking-[0.12em]"
            >
              <span className="w-1.5 h-1.5 rounded-full bg-[#010507] animate-pulse" />
              Supervisor running
            </span>
          )}
        </div>
        <span
          data-testid="delegation-count"
          className="text-xs font-mono text-[#838389]"
        >
          {delegations.length} calls
        </span>
      </div>

      <div className="flex-1 overflow-y-auto p-4 space-y-3">
        {delegations.length === 0 ? (
          <p className="text-[#838389] italic text-sm">
            Ask the supervisor to complete a task. Every sub-agent it calls will
            appear here.
          </p>
        ) : (
          delegations.map((d, idx) => {
            const style = SUB_AGENT_STYLE[d.sub_agent];
            return (
              <div
                key={d.id}
                data-testid="delegation-entry"
                className="border border-[#E9E9EF] rounded-xl p-3 bg-[#FAFAFC]"
              >
                <div className="flex items-center justify-between mb-2">
                  <div className="flex items-center gap-2">
                    <span className="text-xs font-mono text-[#AFAFB7]">
                      #{idx + 1}
                    </span>
                    <span
                      className={`inline-flex items-center gap-1 px-2 py-0.5 rounded-full text-[10px] font-semibold uppercase tracking-[0.1em] border ${style.color}`}
                    >
                      <span>{style.emoji}</span>
                      <span>{style.label}</span>
                    </span>
                  </div>
                  <span
                    className={`text-[10px] uppercase tracking-[0.12em] font-semibold ${STATUS_BADGE[d.status]}`}
                  >
                    {d.status}
                  </span>
                </div>
                <div className="text-xs text-[#57575B] mb-2">
                  <span className="font-semibold text-[#010507]">Task: </span>
                  {d.task}
                </div>
                <div className="text-sm text-[#010507] whitespace-pre-wrap bg-white rounded-lg p-2.5 border border-[#E9E9EF]">
                  {d.result}
                </div>
              </div>
            );
          })
        )}
      </div>
    </div>
  );
}

The result: as the supervisor fans work out to its sub-agents, the log grows in real time, giving the user visibility into a process that would otherwise be a long opaque spinner.
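The component leans on a few helpers defined elsewhere in the file (DelegationLogProps, SUB_AGENT_STYLE, STATUS_BADGE). Their definitions aren't shown on this page, so the following is a plausible sketch of their shapes; the actual labels and Tailwind classes in the demo may differ:

```typescript
// Hypothetical shapes for the helpers the log component references.
interface DelegationLogProps {
  delegations: {
    id: string;
    sub_agent: "research_agent" | "writing_agent" | "critique_agent";
    task: string;
    status: string;
    result: string;
  }[];
  isRunning: boolean;
}

// One badge style per sub-agent role; keys match the backend tool names.
const SUB_AGENT_STYLE: Record<string, { emoji: string; label: string; color: string }> = {
  research_agent: { emoji: "🔎", label: "Research", color: "border-blue-200 text-blue-700" },
  writing_agent: { emoji: "✍️", label: "Writing", color: "border-green-200 text-green-700" },
  critique_agent: { emoji: "🧐", label: "Critique", color: "border-amber-200 text-amber-700" },
};

// Text styling per delegation status.
const STATUS_BADGE: Record<string, string> = {
  running: "text-[#838389]",
  complete: "text-[#010507]",
};
```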

Related

  • Shared State — the channel that makes the delegation log live.
  • State streaming — stream individual sub-agent outputs token-by-token inside each log entry.
Supported by: Built-in Agent (TanStack AI), LangGraph (Python), LangGraph (TypeScript), LangGraph (FastAPI), Google ADK, Mastra, CrewAI (Crews), PydanticAI, Claude Agent SDK (Python), Claude Agent SDK (TypeScript), Agno, AG2, LlamaIndex, AWS Strands, Langroid, MS Agent Framework (Python), MS Agent Framework (.NET), Spring AI