Disabling state streaming

Granularly control what is streamed to the frontend.

What is this?

By default, CopilotKit streams your agent's messages and tool calls to the frontend. You can disable this selectively by using CopilotKit's custom RunnableConfig.

When should I use this?

Occasionally, you'll want to disable streaming temporarily — for example, the LLM may be doing something the current user should not see, like emitting tool calls or questions pertaining to other employees in an HR system.

Implementation

Disable all streaming

You can disable all message streaming and tool call streaming by passing emit_messages=False and emit_tool_calls=False to the CopilotKit config.

from copilotkit.langgraph import copilotkit_customize_config
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI

async def frontend_actions_node(state: AgentState, config: RunnableConfig):

    # 1) Configure CopilotKit not to emit messages
    modifiedConfig = copilotkit_customize_config(
        config,
        emit_messages=False, # if you want to disable message streaming # [!code highlight]
        emit_tool_calls=False # if you want to disable tool call streaming # [!code highlight]
    )

    # 2) Provide the actions to the LLM
    model = ChatOpenAI(model="gpt-4o").bind_tools([
        *state["copilotkit"]["actions"],
        # ... any tools you want to make available to the model
    ])

    # 3) Call the model with CopilotKit's modified config  # [!code highlight]
    response = await model.ainvoke(state["messages"], modifiedConfig) # [!code highlight]

    # don't return the new response to hide it from the user
    return state
BEWARE!

In LangGraph Python, the config variable in the surrounding scope is implicitly passed into LangChain LLM calls, even when you don't provide it explicitly.

This is why we create a new variable modifiedConfig rather than modifying config directly. If we modified config itself, it would change the default configuration for all subsequent LLM calls in that namespace.

# if we override the config variable name with a new value
config = copilotkit_customize_config(config, ...)

# it will affect every subsequent LangChain LLM call in the same namespace, even when `config` is not explicitly provided
response = await model2.ainvoke(state["messages"]) # implicitly uses the modified config!
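The hazard above can be illustrated in plain Python, without LangGraph at all. The sketch below uses a hypothetical customize_config stand-in (not the real copilotkit_customize_config) to show why binding the customized config to a new name leaves the original config, and therefore any implicit later use of it, untouched:

```python
# Hypothetical stand-in for copilotkit_customize_config: returns a NEW
# config dict with the given flags merged in, never mutating its argument.
def customize_config(config, **flags):
    return {**config, "configurable": {**config.get("configurable", {}), **flags}}

# Config as it might arrive in a LangGraph node.
config = {"configurable": {"emit_messages": True, "emit_tool_calls": True}}

# Safe pattern: bind the customized result to a new name...
modified_config = customize_config(config, emit_messages=False, emit_tool_calls=False)

# ...so the surrounding `config` is unchanged, and any later LLM call that
# picks up `config` implicitly still streams as before.
assert config["configurable"]["emit_messages"] is True
assert modified_config["configurable"]["emit_messages"] is False
```

Had we rebound the result to the name config instead, every subsequent call in that scope would silently inherit the disabled-streaming flags.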