Reasoning

Surface the agent's thinking chain in the chat — default or fully custom.

What is this?

Some models (OpenAI's o1, o3, and o4-mini; Anthropic's thinking variants) emit reasoning tokens: internal chain-of-thought traces that show how the model is working toward its answer. CopilotKit surfaces these as first-class messages: when a REASONING_MESSAGE_* event arrives from the agent, the chat renders it inline so the user can follow the agent's thinking.

Reasoning isn't a custom-renderer plumb-in; it's a dedicated message type on the chat view. You can either accept the built-in rendering or override the reasoningMessage slot with your own component.
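To make the event flow concrete, here is a sketch of how a client could fold a REASONING_MESSAGE_* stream into per-message state. Only the event-name family comes from the text above; the payload fields (`messageId`, `delta`) are illustrative assumptions, not the published AG-UI types.

```typescript
// Hypothetical payload shapes for the REASONING_MESSAGE_* family.
// Field names are assumptions for illustration only.
type ReasoningEvent =
  | { type: "REASONING_MESSAGE_START"; messageId: string }
  | { type: "REASONING_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "REASONING_MESSAGE_END"; messageId: string };

interface ReasoningMessage {
  id: string;
  content: string;
  done: boolean;
}

// Fold one event into per-message state, the way a chat view
// accumulates streaming reasoning text.
export function reduceReasoning(
  state: Map<string, ReasoningMessage>,
  event: ReasoningEvent,
): Map<string, ReasoningMessage> {
  const next = new Map(state);
  switch (event.type) {
    case "REASONING_MESSAGE_START":
      next.set(event.messageId, { id: event.messageId, content: "", done: false });
      break;
    case "REASONING_MESSAGE_CONTENT": {
      const msg = next.get(event.messageId);
      if (msg) next.set(event.messageId, { ...msg, content: msg.content + event.delta });
      break;
    }
    case "REASONING_MESSAGE_END": {
      const msg = next.get(event.messageId);
      if (msg) next.set(event.messageId, { ...msg, done: true });
      break;
    }
  }
  return next;
}
```

Because each event carries a message id, interleaved reasoning blocks accumulate independently, which is what lets the chat keep earlier reasoning cards collapsed while the trailing one streams.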

Live Demo: Built-in Agent (TanStack AI) — agentic-chat-reasoning · Open full demo →

When should I use this?

Expose reasoning in the UI when you want to:

  • Give users real-time insight into the agent's thought process
  • Show progress on long or multi-step problems
  • Debug prompt behavior during development
  • Brand the reasoning card to match the rest of your product

Default reasoning rendering (zero-config)

Out of the box, reasoning events render inside CopilotKit's built-in CopilotChatReasoningMessage card:

  • A "Thinking…" label with a pulsing indicator while the model reasons.
  • Auto-expanded content so users can follow the chain of thought live.
  • Collapses to "Thought for X seconds" once reasoning finishes, with a chevron to re-expand.
  • Reasoning text rendered as Markdown.

No configuration is needed; if your model emits reasoning tokens, the card appears automatically:

frontend/src/app/page.tsx (lines 23–26) — default reasoning
          <CopilotChat
            agentId="reasoning-default-render"
            className="h-full rounded-2xl"
          />

Here's what the built-in card looks like while the model thinks through a multi-step problem:

Live Demo: Built-in Agent (TanStack AI) — reasoning-default-render · Open full demo →

Custom reasoning rendering

For full control over the reasoning card, pass a component to the reasoningMessage slot on messageView. Your component receives the ReasoningMessage object (.content holds the streaming text), the full messages list, and isRunning; together these are enough to decide whether the block is still streaming and whether it's the active trailing message:

frontend/src/app/page.tsx (lines 14–46) — custom reasoning slot
import React from "react";
import {
  CopilotKit,
  CopilotChat,
  CopilotChatReasoningMessage,
} from "@copilotkit/react-core/v2";
import { ReasoningBlock } from "./reasoning-block";

export default function AgenticChatReasoningDemo() {
  return (
    <CopilotKit
      runtimeUrl="/api/copilotkit-reasoning"
      agent="agentic-chat-reasoning"
    >
      <div className="flex justify-center items-center h-screen w-full">
        <div className="h-full w-full max-w-4xl">
          <Chat />
        </div>
      </div>
    </CopilotKit>
  );
}

function Chat() {
  return (
    <CopilotChat
      agentId="agentic-chat-reasoning"
      className="h-full rounded-2xl"
      messageView={{
        reasoningMessage: ReasoningBlock as typeof CopilotChatReasoningMessage,
      }}
    />
  );
}

The ReasoningBlock (imported above) renders the reasoning as an amber-tagged inline banner, intentionally louder than the default card so the thinking chain is the focal UI of the demo. Swap in your own component to match your product's tone.

Info

The messageView.reasoningMessage slot accepts either a full component (as shown) or a sub-slot object like { header, contentView, toggle } if you just want to tweak parts of the default card. See the reference docs for sub-slot props.
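For instance, swapping only the header while keeping the default content view and toggle could look like this sketch. The header sub-slot name comes from the note above; the props a sub-slot component receives are not shown here, so verify them against the reference docs before relying on them.

```typescript
import React from "react";
import { CopilotChat } from "@copilotkit/react-core/v2";

// Replace only the header of the default reasoning card; the built-in
// content view and toggle remain. The sub-slot's received props are
// omitted here — check the reference docs for their exact shape.
function BrandedReasoningHeader() {
  return (
    <span className="font-semibold text-indigo-600">Agent is reasoning…</span>
  );
}

export function Chat() {
  return (
    <CopilotChat
      agentId="agentic-chat-reasoning"
      className="h-full rounded-2xl"
      messageView={{
        reasoningMessage: { header: BrandedReasoningHeader },
      }}
    />
  );
}
```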

Supported by: Built-in Agent (TanStack AI), LangGraph (TypeScript), LangGraph (FastAPI), Mastra, CrewAI (Crews), PydanticAI, Claude Agent SDK (Python), Claude Agent SDK (TypeScript), Agno, AG2, LlamaIndex, AWS Strands, Langroid, MS Agent Framework (Python), MS Agent Framework (.NET), Spring AI