Occasionally, you'll want to disable streaming temporarily — for example, the LLM may be
doing something the current user should not see, like emitting tool calls or questions
pertaining to other employees in an HR system.
You can disable message streaming and tool call streaming by passing `emit_messages=False` and `emit_tool_calls=False` to `copilotkit_customize_config`.
```python
from copilotkit.langgraph import copilotkit_customize_config
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI

async def frontend_actions_node(state: AgentState, config: RunnableConfig):
    # 1) Configure CopilotKit not to emit messages
    modifiedConfig = copilotkit_customize_config(
        config,
        emit_messages=False,    # if you want to disable message streaming # [!code highlight]
        emit_tool_calls=False,  # if you want to disable tool call streaming # [!code highlight]
    )

    # 2) Provide the actions to the LLM
    model = ChatOpenAI(model="gpt-4o").bind_tools([
        *state["copilotkit"]["actions"],
        # ... any tools you want to make available to the model
    ])

    # 3) Call the model with CopilotKit's modified config # [!code highlight]
    response = await model.ainvoke(state["messages"], modifiedConfig) # [!code highlight]

    # don't return the new response to hide it from the user
    return state
```
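For context, here is a minimal sketch of how a node like this might be wired into a LangGraph graph; the graph shape and node name are assumptions for illustration, not part of the CopilotKit API:

```python
from langgraph.graph import END, StateGraph

# Hypothetical wiring (names assumed): the modified config only applies
# inside frontend_actions_node, so any other nodes keep streaming normally.
workflow = StateGraph(AgentState)
workflow.add_node("frontend_actions", frontend_actions_node)
workflow.set_entry_point("frontend_actions")
workflow.add_edge("frontend_actions", END)
graph = workflow.compile()
```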
BEWARE!
In LangGraph Python, the `config` variable in the surrounding namespace is implicitly passed into LangChain LLM calls, even when it is not explicitly provided.

This is why we assign the result to a new variable, `modifiedConfig`, rather than overwriting `config` directly. If we reassigned `config` itself, the change would apply to every subsequent LLM call in that namespace.
```python
# if we overwrite the `config` variable with a new value...
config = copilotkit_customize_config(config, ...)

# ...it will affect every subsequent LangChain LLM call in the same
# namespace, even when `config` is not explicitly provided
response = await model2.ainvoke(state["messages"])  # implicitly uses the modified config!
```
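For contrast, a minimal sketch of the safe pattern used above (assuming the same `model` and `model2` setup): binding the modified config to a new name leaves the implicit `config` untouched.

```python
# bind the modified config to a new name; `config` itself is untouched
modifiedConfig = copilotkit_customize_config(config, emit_messages=False)

# this call is hidden from the user, because it passes modifiedConfig explicitly
response = await model.ainvoke(state["messages"], modifiedConfig)

# this call still streams normally, because it falls back to the original `config`
other_response = await model2.ainvoke(state["messages"])
```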