Reasoning

Surface the agent's thinking chain in the chat — default or fully custom.

What is this?

Some models (OpenAI's o1, o3, and o4-mini; Anthropic's thinking variants) emit reasoning tokens — internal chain-of-thought traces that explain how the model is working toward its answer. CopilotKit surfaces these as first-class messages: when a REASONING_MESSAGE_* event arrives from the agent, the chat renders it inline so the user can follow the agent's thinking.

Reasoning isn't plumbed in through a custom renderer — it's a dedicated message type on the chat view. You can either accept the built-in rendering or override the reasoningMessage slot with your own component.

Live Demo: LangGraph (Python) · agentic-chat-reasoning · Open full demo →

When should I use this?

Expose reasoning in the UI when you want to:

  • Give users real-time insight into the agent's thought process
  • Show progress on long or multi-step problems
  • Debug prompt behavior during development
  • Brand the reasoning card to match the rest of your product

Default reasoning rendering (zero-config)

Out of the box, reasoning events render inside CopilotKit's built-in CopilotChatReasoningMessage card:

  • A "Thinking…" label with a pulsing indicator while the model reasons.
  • Auto-expanded content so users can follow the chain of thought live.
  • Collapses to "Thought for X seconds" once reasoning finishes, with a chevron to re-expand.
  • Reasoning text rendered as Markdown.
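
The collapse behavior above implies a simple label rule. A minimal sketch of that rule — a hypothetical helper, not CopilotKit's actual implementation:

```typescript
// Hypothetical helper (not a CopilotKit API): derive the card label from
// reasoning start/end timestamps, mirroring the default card's behavior.
function reasoningLabel(startedAt: number, finishedAt?: number): string {
  // While the model is still reasoning, show the live indicator label.
  if (finishedAt === undefined) return "Thinking…";
  // Once finished, collapse to an elapsed-time summary (at least 1 second).
  const seconds = Math.max(1, Math.round((finishedAt - startedAt) / 1000));
  return `Thought for ${seconds} second${seconds === 1 ? "" : "s"}`;
}
```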

No configuration is needed — if your model emits reasoning tokens, the card appears automatically:

frontend/src/app/page.tsx (lines 17–20) — default reasoning
          <CopilotChat
            agentId="reasoning-default-render"
            className="h-full rounded-2xl"
          />
Live Demo: LangGraph (Python) · reasoning-default-render · Open full demo →

Custom reasoning rendering

For full control over the reasoning card, pass a component to the reasoningMessage slot on messageView. Your component receives the ReasoningMessage object (.content holds the streaming text), the full messages list, and isRunning — enough to decide whether this block is still streaming and whether it's the active trailing message:

frontend/src/app/page.tsx (lines 44–52) — custom reasoning slot
  return (
    <CopilotChat
      agentId="agentic-chat-reasoning"
      className="h-full rounded-2xl"
      messageView={{
        reasoningMessage: ReasoningBlock as typeof CopilotChatReasoningMessage,
      }}
    />
  );

The showcase's ReasoningBlock (imported above) renders the reasoning as an amber-tagged inline banner — intentionally louder than the default card so the thinking chain is the focal UI of the demo. Swap in your own component to match your product's tone.
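
The props described above are enough to derive streaming state in your own component. A minimal sketch, with simplified stand-in types (the real interfaces are in CopilotKit's reference docs):

```typescript
// Simplified stand-ins for illustration — not CopilotKit's actual types.
interface ChatMessage { id: string; role: string }
interface ReasoningMessage extends ChatMessage { role: "reasoning"; content: string }

// A reasoning block is still streaming when the run is active AND it is
// the trailing message in the conversation.
function isStreamingReasoning(
  message: ReasoningMessage,
  messages: ChatMessage[],
  isRunning: boolean,
): boolean {
  const last = messages[messages.length - 1];
  return isRunning && last !== undefined && last.id === message.id;
}
```

A custom component can use this to decide, for example, whether to show a pulsing indicator or the collapsed summary state.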

Info

The messageView.reasoningMessage slot accepts either a full component (as shown) or a sub-slot object like { header, contentView, toggle } if you just want to tweak parts of the default card. See the reference docs for sub-slot props.
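
The sub-slot object form could look like the following sketch. The component names are placeholders, and the actual sub-slot props are in the reference docs:

```typescript
// Placeholder components (hypothetical — substitute your own React components).
const MyReasoningHeader = () => null;
const MyChevronToggle = () => null;

// Sub-slot form: override parts of the default card instead of the whole thing.
// Omitted keys (here, contentView) keep the built-in rendering.
const reasoningMessage = {
  header: MyReasoningHeader, // the "Thinking…" / "Thought for X seconds" row
  toggle: MyChevronToggle,   // the expand/collapse chevron
};
```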

Choose your AI backend

See Integrations for all available frameworks (generative-ui/reasoning).