# Factory Mode
Use BuiltInAgent's factory mode to bring your own AI SDK, TanStack AI, or custom LLM backend.
BuiltInAgent's factory mode gives you full control over the LLM call. You provide a factory function that talks to any backend — CopilotKit handles converting the stream to AG-UI events, managing lifecycle, and wiring it into the runtime.
## When to Use Simple Mode vs Factory Mode
| | Simple Mode | Factory Mode |
|---|---|---|
| Setup | Minimal — pass a model string | You own the LLM call and stream |
| Model resolution | Built-in ("openai/gpt-4o") | You set up the model yourself |
| Tools, MCP, state tools | Automatically wired | You wire them in your factory |
| Backend support | Vercel AI SDK only | Any backend: AI SDK, TanStack AI, or custom |
| Best for | Quick setup, standard use cases | Full control, non-standard backends |
If simple mode covers your needs, stick with it — it's simpler. Use factory mode when you need control that simple mode doesn't offer.
## Quick Start
You have an existing LLM backend and want to build a CopilotKit copilot on top of it. Pick your backend:
The frontend setup is the same as BuiltInAgent — wrap your app with `<CopilotKit>` and add a chat component.
## How It Works
Factory mode accepts a config with two fields:
- `type` — which backend you're using: `"aisdk"`, `"tanstack"`, or `"custom"`
- `factory` — a function that receives the raw request and returns a backend-native stream
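As a rough sketch of that shape — using simplified local types for illustration, not the real exports from `@copilotkit/runtime/v2` — a factory-mode config looks like:

```typescript
// Simplified local types for illustration only; the real types in
// @copilotkit/runtime/v2 are richer than this sketch.
type BackendType = "aisdk" | "tanstack" | "custom";

interface FactoryModeConfig {
  type: BackendType;
  // Receives the raw request context and returns a backend-native stream
  // (typed as unknown here; the real return type is backend-specific).
  factory: (ctx: { input: unknown; abortSignal: AbortSignal }) => unknown;
}

const config: FactoryModeConfig = {
  type: "custom",
  factory: ({ input, abortSignal }) => {
    // Call your own backend here and return its native stream.
    return { input, abortSignal };
  },
};

console.log(config.type); // "custom"
```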
The factory receives an `AgentFactoryContext` (from `@copilotkit/runtime/v2`):
```ts
interface AgentFactoryContext {
  input: RunAgentInput;             // messages, tools, state, context, threadId, runId, forwardedProps
  abortController: AbortController; // for TanStack AI (requires AbortController)
  abortSignal: AbortSignal;         // preferred for AI SDK, fetch, and custom backends
}
```
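Whichever abort handle you forward, the idea is the same: when the run is cancelled, your backend call should stop. A stand-alone sketch of that wiring (no CopilotKit imports; `fakeBackendCall` is a stand-in for your real streaming call, which would receive the same signal):

```typescript
// Stand-alone sketch: forwarding an AbortSignal into backend work.
function fakeBackendCall(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("done"), 1_000);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("aborted"));
    });
  });
}

async function demo(): Promise<string> {
  const controller = new AbortController();
  const pending = fakeBackendCall(controller.signal);
  controller.abort(); // what happens when the run is cancelled
  return pending.catch((err: Error) => err.message);
}

demo().then((msg) => console.log(msg)); // "aborted"
```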
CopilotKit handles everything else: `RUN_STARTED` and `RUN_FINISHED` lifecycle events, stream-to-AG-UI conversion, error handling, and abort/cancellation. Your factory never needs to emit lifecycle events.
The factory can be async — return a Promise if you need to do setup before streaming:
```ts
factory: async ({ input, abortSignal }) => {
  const apiKey = await getApiKeyFromVault();
  return streamText({ model: openai("gpt-4o", { apiKey }), ... });
}
```
## Examples

### With Tools

### With Reasoning (Thinking Models)

### With System Prompt, Context, and State

### With forwardedProps
Let the frontend override model, temperature, or other settings at runtime:
Forward props from the frontend using the CopilotKit provider's `properties` prop:
```tsx
<CopilotKit properties={{ model: "anthropic/claude-sonnet-4", temperature: 0.3 }}>
  <CopilotChat />
</CopilotKit>
```
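On the backend, those values arrive on `input.forwardedProps`. A stand-alone sketch of merging them with server-side defaults — the `Props` shape and field names here mirror this example's frontend, not a CopilotKit type:

```typescript
// `Props` is whatever your frontend sends via the `properties` prop;
// it is an assumption of this sketch, not a CopilotKit export.
interface Props {
  model?: string;
  temperature?: number;
}

const DEFAULTS = { model: "openai/gpt-4o", temperature: 0.7 };

function resolveSettings(forwardedProps: Props | undefined) {
  // Frontend-supplied values win; otherwise fall back to defaults.
  return { ...DEFAULTS, ...forwardedProps };
}

console.log(resolveSettings({ model: "anthropic/claude-sonnet-4", temperature: 0.3 }));
// → frontend overrides both fields
console.log(resolveSettings(undefined));
// → server-side defaults
```

Your factory would then feed the resolved settings into its model call.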
## Helper Utilities

These utilities are exported from `@copilotkit/runtime/v2` to help convert between CopilotKit's input format and your backend's expected format:
| Utility | Description |
|---|---|
| `convertInputToTanStackAI(input)` | Converts `RunAgentInput` to `{ messages, systemPrompts }` for TanStack AI's `chat()`. Handles system/developer messages, context, and state. |
| `convertMessagesToVercelAISDKMessages(messages)` | Converts AG-UI messages to the Vercel AI SDK's `ModelMessage[]` format. |
| `convertToolsToVercelAITools(tools)` | Converts frontend-defined tools (JSON Schema) to the AI SDK's `ToolSet`. |
| `convertToolDefinitionsToVercelAITools(tools)` | Converts `defineTool()` definitions (Standard Schema) to the AI SDK's `ToolSet`. |
| `resolveModel(spec)` | Resolves `"openai/gpt-4o"` strings to AI SDK `LanguageModel` instances. |
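To give a feel for what these helpers do, here is a deliberately simplified, stand-alone sketch of the message-conversion step. The real `convertMessagesToVercelAISDKMessages` also handles tool calls, tool results, and other message kinds; the types below are local stand-ins, not the real ones:

```typescript
// Toy illustration only — in production, use the real helper from
// @copilotkit/runtime/v2. AG-UI messages carry an `id` (among other
// fields); AI SDK model messages do not, so conversion drops it.
interface AgUiMessage {
  id: string;
  role: "user" | "assistant" | "system";
  content: string;
}

interface ModelMessageLike {
  role: "user" | "assistant" | "system";
  content: string;
}

function toModelMessages(messages: AgUiMessage[]): ModelMessageLike[] {
  return messages.map(({ role, content }) => ({ role, content }));
}

const converted = toModelMessages([
  { id: "m1", role: "user", content: "Hi" },
  { id: "m2", role: "assistant", content: "Hello!" },
]);
console.log(converted.length); // 2
```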