
Factory Mode

Use BuiltInAgent's factory mode to bring your own AI SDK, TanStack AI, or custom LLM backend.

BuiltInAgent's factory mode gives you full control over the LLM call. You provide a factory function that talks to any backend — CopilotKit handles converting the stream to AG-UI events, managing lifecycle, and wiring it into the runtime.

When to use Simple Mode vs Factory Mode

|                          | Simple Mode                     | Factory Mode                                |
| ------------------------ | ------------------------------- | ------------------------------------------- |
| Setup                    | Minimal — pass a model string   | You own the LLM call and stream             |
| Model resolution         | Built-in ("openai/gpt-4o")      | You set up the model yourself               |
| Tools, MCP, state tools  | Automatically wired             | You wire them in your factory               |
| Backend support          | Vercel AI SDK only              | Any backend: AI SDK, TanStack AI, or custom |
| Best for                 | Quick setup, standard use cases | Full control, non-standard backends         |
Info

If simple mode covers your needs, stick with it — it's simpler. Use factory mode when you need control that simple mode doesn't offer.

Quick Start

You have an existing LLM backend and want a CopilotKit copilot on top of it. The example below uses the Vercel AI SDK:

src/copilotkit.ts
typescript
import {
  CopilotRuntime,
  createCopilotEndpoint,
  InMemoryAgentRunner,
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) =>
    streamText({
      model: openai("gpt-4o"),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    }),
});

const runtime = new CopilotRuntime({
  agents: { default: agent },
  runner: new InMemoryAgentRunner(),
});

const copilotEndpoint = createCopilotEndpoint({
  runtime,
  basePath: "/api/copilotkit",
});
export default copilotEndpoint;

The frontend setup is the same as BuiltInAgent — wrap your app with <CopilotKit> and add a chat component.

How It Works

Factory mode accepts a config with two fields:

  • type — which backend you're using: "aisdk", "tanstack", or "custom"
  • factory — a function that receives the raw request and returns a backend-native stream

The factory receives an AgentFactoryContext (from @copilotkit/runtime/v2):

interface AgentFactoryContext {
  input: RunAgentInput;        // messages, tools, state, context, threadId, runId, forwardedProps
  abortController: AbortController;  // for TanStack AI (requires AbortController)
  abortSignal: AbortSignal;          // preferred for AI SDK, fetch, and custom backends
}

CopilotKit handles everything else: RUN_STARTED and RUN_FINISHED lifecycle events, stream-to-AG-UI conversion, error handling, and abort/cancellation. Your factory never needs to emit lifecycle events.
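
To make that boundary concrete, here is a rough, illustrative sketch of the event envelope the runtime produces around your factory's output. The event and field names follow AG-UI conventions, but the shapes below are simplified for illustration and are not the runtime's internal types:

```typescript
// Simplified, illustrative AG-UI-style events: the runtime emits the
// lifecycle events at both ends; your factory's stream only contributes
// the content deltas in between.
type LifecycleEvent =
  | { type: "RUN_STARTED"; threadId: string; runId: string }
  | { type: "TEXT_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "RUN_FINISHED"; threadId: string; runId: string };

function wrapRun(
  threadId: string,
  runId: string,
  deltas: string[],
): LifecycleEvent[] {
  return [
    { type: "RUN_STARTED", threadId, runId },
    ...deltas.map((delta) => ({
      type: "TEXT_MESSAGE_CONTENT" as const,
      messageId: "msg_1",
      delta,
    })),
    { type: "RUN_FINISHED", threadId, runId },
  ];
}
```

If your factory throws or the run is aborted, the runtime likewise closes out the run for you; your code never needs to emit these events directly.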

The factory can be async — return a Promise if you need to do setup before streaming:

factory: async ({ input, abortSignal }) => {
  const apiKey = await getApiKeyFromVault();
  // Provider-level settings like apiKey are configured on the provider
  // (createOpenAI from "@ai-sdk/openai"), not on the model call itself.
  const provider = createOpenAI({ apiKey });
  return streamText({ model: provider("gpt-4o"), ... });
}

Examples

With Tools

src/copilotkit.ts
typescript
import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
  convertToolsToVercelAITools,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const tools = convertToolsToVercelAITools(input.tools);
    return streamText({
      model: openai("gpt-4o"),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      tools,
      abortSignal,
    });
  },
});

convertToolsToVercelAITools converts the frontend-defined tools (from useFrontendTool) into AI SDK's ToolSet format automatically.
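To make the shape of that conversion concrete, here is an illustrative sketch of the kind of mapping it performs: frontend tools arrive as name/description/JSON Schema triples and end up in a map keyed by tool name. This is a simplification for intuition only, not the helper's actual implementation (the real one produces AI SDK tool objects):

```typescript
// Illustrative only: frontend tool definitions in, name-keyed map out.
interface FrontendToolDef {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema for the arguments
}

function toToolMap(tools: FrontendToolDef[]) {
  return Object.fromEntries(
    tools.map((t) => [
      t.name,
      { description: t.description, inputSchema: t.parameters },
    ]),
  );
}
```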

With Reasoning (Thinking Models)

src/copilotkit.ts
typescript
import { BuiltInAgent, convertMessagesToVercelAISDKMessages } from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) =>
    streamText({
      model: anthropic("claude-sonnet-4"),
      // Extended thinking is enabled via the AI SDK's providerOptions,
      // not via model settings.
      providerOptions: {
        anthropic: { thinking: { type: "enabled", budgetTokens: 10000 } },
      },
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    }),
});

Reasoning events (REASONING_START, REASONING_MESSAGE_CONTENT, REASONING_END) are automatically extracted from the AI SDK stream.

With System Prompt, Context, and State

src/copilotkit.ts
typescript
import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const systemParts: string[] = ["You are a helpful assistant."];

    // Add context from the frontend (useAgentContext)
    if (input.context?.length) {
      for (const ctx of input.context) {
        systemParts.push(`${ctx.description}:\n${ctx.value}`);
      }
    }

    // Add shared application state (useCoAgent, etc.)
    if (input.state && Object.keys(input.state).length > 0) {
      systemParts.push(
        `Application State:\n${JSON.stringify(input.state, null, 2)}`,
      );
    }

    const messages = convertMessagesToVercelAISDKMessages(input.messages);
    messages.unshift({ role: "system", content: systemParts.join("\n\n") });

    return streamText({
      model: openai("gpt-4o"),
      messages,
      abortSignal,
    });
  },
});
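
As a quick sanity check of the prompt-assembly pattern above, here it is run against made-up sample values (the real input.context and input.state come from the frontend hooks, not hard-coded literals like these):

```typescript
// Hypothetical sample data standing in for input.context / input.state.
const systemParts: string[] = ["You are a helpful assistant."];

const context = [{ description: "Current page", value: "/settings" }];
for (const ctx of context) {
  systemParts.push(`${ctx.description}:\n${ctx.value}`);
}

const state = { theme: "dark" };
if (Object.keys(state).length > 0) {
  systemParts.push(`Application State:\n${JSON.stringify(state, null, 2)}`);
}

// The assembled system message: instructions, then the context entry,
// then the serialized state, separated by blank lines.
const system = systemParts.join("\n\n");
```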

With forwardedProps

Let the frontend override model, temperature, or other settings at runtime:

src/copilotkit.ts
typescript
import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
  resolveModel,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const props = (input.forwardedProps ?? {}) as Record<string, unknown>;

    const model =
      typeof props.model === "string"
        ? resolveModel(props.model)
        : openai("gpt-4o");

    const temperature =
      typeof props.temperature === "number" ? props.temperature : 0.7;

    return streamText({
      model,
      temperature,
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    });
  },
});

Forward props from the frontend using the CopilotKit provider's properties prop:

app/page.tsx
tsx
<CopilotKit properties={{ model: "anthropic/claude-sonnet-4", temperature: 0.3 }}>
  <CopilotChat />
</CopilotKit>

Helper Utilities

These utilities are exported from @copilotkit/runtime/v2 to help convert between CopilotKit's input format and your backend's expected format:

| Utility                                        | Description                                                                                                                          |
| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| convertInputToTanStackAI(input)                | Converts RunAgentInput to { messages, systemPrompts } for TanStack AI's chat(). Handles system/developer messages, context, and state. |
| convertMessagesToVercelAISDKMessages(messages) | Converts AG-UI messages to the Vercel AI SDK's ModelMessage[] format.                                                                  |
| convertToolsToVercelAITools(tools)             | Converts frontend-defined tools (JSON Schema) to the AI SDK's ToolSet.                                                                 |
| convertToolDefinitionsToVercelAITools(tools)   | Converts defineTool() definitions (Standard Schema) to the AI SDK's ToolSet.                                                           |
| resolveModel(spec)                             | Resolves "openai/gpt-4o" strings to AI SDK LanguageModel instances.                                                                    |
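
For intuition, resolveModel follows the common provider/model-id string convention. The parser below only illustrates that convention; it is not the helper itself, which returns an actual AI SDK LanguageModel:

```typescript
// Illustrative: split a "provider/model-id" spec of the kind resolveModel
// accepts into its two parts.
function parseModelSpec(spec: string): { provider: string; modelId: string } {
  const slash = spec.indexOf("/");
  if (slash === -1) {
    throw new Error(`Invalid model spec (expected "provider/model"): ${spec}`);
  }
  return { provider: spec.slice(0, slash), modelId: spec.slice(slash + 1) };
}
```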