Tool-based Generative UI

Let your agent render rich React components directly in the chat by calling them as tools.

What is this?

Tool-based Generative UI is the simplest form of Generative UI: you register a React component with useComponent, and CopilotKit exposes it to the agent as a tool. When the agent calls the tool, CopilotKit renders your component inline in the chat, passing the tool's arguments straight through as typed props.

Unlike tool rendering — which wraps a real backend tool in a custom UI — tool-based GenUI is the component. There is no handler, no user interaction, no server-side execution. The agent decides when to show it, populates the data, and CopilotKit paints it.

Live Demo: LangGraph (Python) · gen-ui-tool-based · Open full demo →

When should I use this?

Use useComponent when you want to:

  • Display rich UI (cards, charts, tables, dashboards) inline in the chat
  • Show structured data the agent has derived from its reasoning
  • Render previews, status indicators, or visual summaries
  • Let the agent present information beyond plain text

For components that need user interaction, see Human-in-the-loop. For operational transparency around a real backend tool, see Tool rendering.

How it works in code

useComponent takes a name, a Zod schema for its props, and the component to render. The runtime registers it as a frontend tool so the agent can discover it, and Zod validates the LLM's arguments before they reach your component.

The component itself is ordinary React — it reads only its props and can render progressively as the agent streams in the payload. The showcase cell uses Recharts for the bar chart and a hand-rolled SVG donut for the pie chart; neither knows anything about CopilotKit.
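Putting the pieces together, a registration might look like the following sketch. The import paths and the exact option keys (`parameters`, `component`) are assumptions based on the description above, not the definitive API:

```tsx
import { z } from "zod";
import { useComponent } from "@copilotkit/react-core"; // import path assumed
import { CopilotChat } from "@copilotkit/react-ui";

// Schema for the props the agent must supply.
const weatherSchema = z.object({
  city: z.string(),
  temperatureC: z.number(),
  condition: z.enum(["sunny", "cloudy", "rainy"]),
});

// Ordinary React: reads only its props, knows nothing about CopilotKit.
function WeatherCard({ city, temperatureC, condition }: z.infer<typeof weatherSchema>) {
  return (
    <div className="weather-card">
      <strong>{city}</strong>
      <span>{temperatureC}°C, {condition}</span>
    </div>
  );
}

export function Chat() {
  // Registered as a frontend tool: when the agent calls "show_weather",
  // CopilotKit validates the arguments against the schema and renders
  // WeatherCard inline in the chat with those arguments as props.
  useComponent({
    name: "show_weather",
    parameters: weatherSchema, // option key assumed
    component: WeatherCard,    // option key assumed
  });

  return <CopilotChat />;
}
```

Note there is no handler function anywhere in the sketch: the agent supplies the data, and rendering the component is the entire effect of the tool call.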

Info

The name you pass to useComponent is what the agent sees as the tool name. Make it a verb phrase like render_bar_chart or show_weather so the LLM reliably picks it when the user asks for that visualization.

Choose your AI backend

See Integrations for all available frameworks (generative-ui/tool-based).