Human-in-the-Loop
Allow your agent and users to collaborate on complex tasks.
## What is this?
Human-in-the-loop (HITL) lets an agent pause mid-run to collect input, confirmation, or a choice from the user, then resume with that answer folded back into its reasoning. It's what turns an autonomous workflow into a collaborative one — the agent keeps its context, the user keeps the steering wheel.
## When should I use this?
Use HITL when you need:
- Quality control — a human gate at high-stakes decision points
- Edge cases — graceful fallbacks when the agent's confidence is low
- Expert input — lean on the user for domain knowledge the model lacks
- Reliability — a more robust loop for real-world, production traffic
## Two patterns for HITL in CopilotKit
CopilotKit ships two complementary ways to pause an agent turn and ask the human something. They look similar from the outside — the chat pauses, a custom component appears, the user answers, the run resumes — but they're wired differently on the backend and each has its own niche.
| Pattern | Who decides to pause? | Backend surface |
|---|---|---|
| `useHumanInTheLoop` | The LLM, by calling a registered client-side tool | A frontend-only tool description (Zod schema + render) |
| `useInterrupt` | The graph, by calling `interrupt(...)` during a node | A server-side `interrupt()` call in your LangGraph agent |
Pick `useHumanInTheLoop` when the pause is an agent-initiated decision — the model chose to ask the user — and you want the picker UI inlined into the normal tool-call flow. Pick `useInterrupt` when the pause is a graph-enforced checkpoint — the code path deterministically requires a human answer — and you want `langgraph.interrupt()` as the server-side contract.
### Pattern 1 — useHumanInTheLoop (tool-based)

The agent registers a HITL tool on the client with `useHumanInTheLoop`. When the LLM calls that tool, CopilotKit routes the call through your `render` function, which shows a custom component and calls `respond` with the user's answer. The agent sees the answer as the tool result and continues from there.
```tsx
useHumanInTheLoop({
  agentId: "hitl-in-chat",
  name: "book_call",
  description:
    "Ask the user to pick a time slot for a call. The picker UI presents fixed candidate slots; the user's choice is returned to the agent.",
  parameters: z.object({
    topic: z
      .string()
      .describe("What the call is about (e.g. 'Intro with sales')"),
    attendee: z
      .string()
      .describe("Who the call is with (e.g. 'Alice from Sales')"),
  }),
  render: ({ args, status, respond }: any) => (
    <TimePickerCard
      topic={args?.topic ?? "a call"}
      attendee={args?.attendee}
      slots={DEFAULT_SLOTS}
      status={status}
      onSubmit={(result) => respond?.(result)}
    />
  ),
});
```

The picker UI is fed a static list of candidate slots — this is just data the demo page owns, so you can swap in real availability, a calendar API, or anything else:
```tsx
const DEFAULT_SLOTS: TimeSlot[] = [
  { label: "Tomorrow 10:00 AM", iso: "2026-04-19T10:00:00-07:00" },
  { label: "Tomorrow 2:00 PM", iso: "2026-04-19T14:00:00-07:00" },
  { label: "Monday 9:00 AM", iso: "2026-04-21T09:00:00-07:00" },
  { label: "Monday 3:30 PM", iso: "2026-04-21T15:30:00-07:00" },
];
```

### Pattern 2 — useInterrupt (graph-paused)
With LangGraph's `interrupt()` the pause is enforced by the graph itself: a node calls `interrupt({...})`, the run suspends, the client receives the payload, renders a UI, and resumes the run with the user's answer. CopilotKit's `useInterrupt` hook is the render contract.
See the `useInterrupt` deep dive for the full walkthrough, including the backend tool and render-prop wiring.
## Going headless

Both patterns above ship with a render prop — CopilotKit handles the "when to show the picker" logic for you. If you want to drive interrupt resolution from a custom UI that lives anywhere in the tree (not necessarily inside a chat), see the headless interrupts guide — it shows how to compose `useAgent`, `agent.subscribe`, and `copilotkit.runAgent` to build your own `useInterrupt` equivalent.
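The headless composition boils down to an observer pattern: the agent publishes "I'm paused" events, and any component — chat or not — can subscribe and resolve them. The following is an illustrative, framework-free sketch of that shape; the class and method names here are hypothetical stand-ins, not CopilotKit's actual API:

```typescript
// Illustrative sketch of the headless idea (names are hypothetical, not
// CopilotKit's API): an event bus lets any part of the app observe
// interrupts and resolve them, with no render prop involved.

type Interrupt = { id: string; payload: unknown };
type Listener = (i: Interrupt) => void;

class HeadlessAgent {
  private listeners: Listener[] = [];
  private pending = new Map<string, (answer: unknown) => void>();

  // Any component can subscribe; returns an unsubscribe function.
  subscribe(fn: Listener): () => void {
    this.listeners.push(fn);
    return () => {
      this.listeners = this.listeners.filter((l) => l !== fn);
    };
  }

  // The run pauses here until some subscriber resolves the interrupt.
  interrupt(id: string, payload: unknown): Promise<unknown> {
    return new Promise((resolve) => {
      this.pending.set(id, resolve);
      this.listeners.forEach((l) => l({ id, payload }));
    });
  }

  // Called by the custom UI with the user's answer.
  resolveInterrupt(id: string, answer: unknown): void {
    this.pending.get(id)?.(answer);
    this.pending.delete(id);
  }
}

// Usage: a "custom UI" living anywhere answers the interrupt.
async function demo(): Promise<unknown> {
  const agent = new HeadlessAgent();
  agent.subscribe((i) => agent.resolveInterrupt(i.id, "Monday 9:00 AM"));
  return agent.interrupt("book_call", { question: "Pick a slot" });
}
```

In a real app the subscriber would render your picker and call the resolve function from its submit handler; the promise-per-interrupt structure is what lets the agent run stay suspended until the human acts.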