
State Streaming

Stream partial agent state updates to the UI while a tool call is still running.

What is this?

By default, agent state only updates between LangGraph node transitions, so a long-running tool call (writing a full document, drafting an email) appears to the UI as one big burst at the end. For agent-native apps, that feels broken: users expect to watch the output materialise.

State streaming forwards the value of a specific tool argument straight into an agent state key as the argument is being generated. The UI, subscribed via useAgent, re-renders on every token.

Live Demo: LangGraph (Python) — shared-state-streaming

When should I use this?

Use state streaming whenever a tool's output is long-form text or a growing structured value and you want the user to see it assemble in real time. Common shapes:

  • A collaborative writing agent that emits a document
  • A research agent that accumulates a list of findings
  • A planning agent that builds up a step-by-step plan

Without streaming, the user stares at a spinner. With streaming, they see the answer grow token-by-token.

The backend: one middleware, one StateItem

The canonical pattern for prebuilt agents is StateStreamingMiddleware. It takes one or more StateItem(...) entries, each mapping a tool argument to a state key. When the LLM streams that argument, CopilotKit writes every partial value into shared state, before the tool has even begun executing.

backend/agent.py — StateStreamingMiddleware (lines 15–92)
import uuid

from langchain.agents import AgentState as BaseAgentState, create_agent
from langchain.tools import ToolRuntime, tool
from langchain_core.messages import ToolMessage
from langchain_openai import ChatOpenAI
from langgraph.types import Command

from copilotkit import (
    CopilotKitMiddleware,
    StateItem,
    StateStreamingMiddleware,
)


class AgentState(BaseAgentState):
    """Shared state. `document` is streamed token-by-token."""

    document: str


@tool
def write_document(document: str, runtime: ToolRuntime) -> Command:
    """Write a document for the user.

    Always call this tool when the user asks you to write or draft
    something of any length (an essay, poem, email, summary, etc.).
    The `document` argument is streamed *per token* into shared agent
    state under the `document` key, so the UI can render it as it is
    generated.
    """
    return Command(
        update={
            "document": document,
            "messages": [
                ToolMessage(
                    content="Document written to shared state.",
                    name="write_document",
                    id=str(uuid.uuid4()),
                    tool_call_id=runtime.tool_call_id,
                )
            ],
        }
    )


graph = create_agent(
    model=ChatOpenAI(model="gpt-5.4"),
    tools=[write_document],
    middleware=[
        CopilotKitMiddleware(),
        # Forward every token of write_document's `document` argument
        # straight into state["document"] while the tool call is still
        # streaming. Without this, `document` would only update once
        # the tool call completes.
        #
        # NOTE: the frontend `usePredictStateSubscription` hook indexes
        # the (partial-JSON-parsed) tool args by `state_key`, so the
        # tool's argument name MUST match `state_key` ("document") for
        # per-token deltas to land in `state.document`.
        StateStreamingMiddleware(
            StateItem(
                state_key="document",
                tool="write_document",
                tool_argument="document",
            )
        ),
    ],
    state_schema=AgentState,
    system_prompt=(
        "You are a collaborative writing assistant. Whenever the user asks "
        "you to write, draft, or revise any piece of text, ALWAYS call the "
        "`write_document` tool with the full content as a single string in "
        "the `document` argument. Never paste the document into a chat "
        "message directly — the document belongs in shared state and the "
        "UI renders it live as you type."
    ),
)

A few things to note:

  • The state_key must exist on your AgentState schema (document: str in this demo).
  • The tool and tool_argument name the exact LLM-facing tool and argument to forward.
  • When the tool call completes, its final return value is written to the same key, so the streamed partial eventually becomes the authoritative final value.
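
Since StateStreamingMiddleware accepts one or more StateItem entries, a single agent can stream several tool arguments into separate state keys at once. A hypothetical sketch (the write_report tool and its title/body arguments are invented for illustration):

```python
from copilotkit import StateItem, StateStreamingMiddleware

# Hypothetical: a `write_report` tool whose `title` and `body`
# arguments stream into two separate state keys as they are generated.
middleware = StateStreamingMiddleware(
    StateItem(state_key="title", tool="write_report", tool_argument="title"),
    StateItem(state_key="body", tool="write_report", tool_argument="body"),
)
```

As with the single-key case, each state_key must be declared on the AgentState schema, and each tool_argument must match the LLM-facing argument name exactly.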

The frontend: useAgent + OnStateChanged

The UI side is identical to any other shared-state subscription: useAgent with OnStateChanged gives you a reactive agent.state. Add OnRunStatusChanged if you want a "LIVE" / "done" indicator.

frontend/src/app/page.tsx — useAgent subscription (lines 26–32)
  // Subscribe to BOTH state changes and run-status changes. The former
  // drives the per-token document rerender; the latter toggles the
  // "LIVE" badge when the agent starts / stops.
  const { agent } = useAgent({
    agentId: "shared-state-streaming",
    updates: [UseAgentUpdate.OnStateChanged, UseAgentUpdate.OnRunStatusChanged],
  });

From there, agent.state.document is just a string that grows on every token, and agent.isRunning tells you whether to show a streaming indicator.

Related

  • Shared State (overview) — the bidirectional read + write pattern this extends.
  • Agent read-only context — the inverse: a one-way, UI → agent channel.
Supported by: LangGraph (Python), Google ADK