Factory Mode

Use BuiltInAgent's factory mode to bring your own backend: Vercel AI SDK, TanStack AI, or a custom LLM client.

BuiltInAgent's factory mode gives you full control over the LLM call. You provide a factory function that talks to any backend — CopilotKit handles converting the stream to AG-UI events, managing lifecycle, and wiring it into the runtime.

When to use Simple Mode vs Factory Mode

|  | Simple Mode | Factory Mode |
| --- | --- | --- |
| Setup | Minimal — pass a model string | You own the LLM call and stream |
| Model resolution | Built-in (`"openai/gpt-4o"`) | You set up the model yourself |
| Tools, MCP, state tools | Automatically wired | You wire them in your factory |
| Backend support | Vercel AI SDK only | Any backend: AI SDK, TanStack AI, or custom |
| Best for | Quick setup, standard use cases | Full control, non-standard backends |
Info

If simple mode covers your needs, stick with it — it's simpler. Use factory mode when you need control that simple mode doesn't offer.

Quick Start

You have an existing LLM backend and want a CopilotKit copilot on top of it. The example below uses the Vercel AI SDK:

import {
  CopilotRuntime,
  createCopilotEndpoint,
  InMemoryAgentRunner,
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) =>
    streamText({
      model: openai("gpt-4o"),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    }),
});

const runtime = new CopilotRuntime({
  agents: { default: agent },
  runner: new InMemoryAgentRunner(),
});

const copilotEndpoint = createCopilotEndpoint({
  runtime,
  basePath: "/api/copilotkit",
});
export default copilotEndpoint;

The frontend setup is the same as in simple mode — wrap your app with <CopilotKit> and add a chat component.

How It Works

Factory mode accepts a config with two fields:

  • type — which backend you're using: "aisdk", "tanstack", or "custom"
  • factory — a function that receives the raw request and returns a backend-native stream

The factory receives an AgentFactoryContext (from @copilotkit/runtime/v2):

interface AgentFactoryContext {
  input: RunAgentInput;        // messages, tools, state, context, threadId, runId, forwardedProps
  abortController: AbortController;  // for TanStack AI (requires AbortController)
  abortSignal: AbortSignal;          // preferred for AI SDK, fetch, and custom backends
}
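For intuition, abortController and abortSignal are the two halves of the standard AbortController API: the runtime aborts the controller when a run is cancelled, and your backend observes that through the signal. A standalone sketch of that flow (plain Node, no CopilotKit code involved):

```typescript
// Standalone sketch: how the AbortController / AbortSignal pair behaves.
// CopilotKit hands you both so you can pass whichever shape your backend expects.
const controller = new AbortController();
const signal: AbortSignal = controller.signal;

signal.addEventListener("abort", () => {
  // Fires when the run is cancelled (e.g. the user stops generation)
  console.log("stream cancelled:", signal.reason);
});

console.log(signal.aborted); // false: still streaming
controller.abort("user closed the chat");
console.log(signal.aborted); // true
```

Because cancellation flows through this one pair, passing `abortSignal` to `streamText` (or `abortController` to TanStack AI) is all you need for your factory to stop cleanly.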

CopilotKit handles everything else: RUN_STARTED and RUN_FINISHED lifecycle events, stream-to-AG-UI conversion, error handling, and abort/cancellation. Your factory never needs to emit lifecycle events.

The factory can be async — return a Promise if you need to do setup before streaming:

factory: async ({ input, abortSignal }) => {
  const apiKey = await getApiKeyFromVault();
  const openai = createOpenAI({ apiKey }); // from "@ai-sdk/openai"
  return streamText({ model: openai("gpt-4o"), ... });
}

Examples

With Tools

import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
  convertToolsToVercelAITools,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const tools = convertToolsToVercelAITools(input.tools);
    return streamText({
      model: openai("gpt-4o"),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      tools,
      abortSignal,
    });
  },
});

convertToolsToVercelAITools converts the frontend-defined tools (from useCopilotAction) into AI SDK's ToolSet format automatically.
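To build intuition for what that conversion does, here is a simplified standalone sketch. The `FrontendTool` shape and `toToolSetSketch` function are illustrative stand-ins, not CopilotKit's actual types or implementation:

```typescript
// Illustration only: the rough shape of a "named tool list" -> "keyed tool map"
// conversion, which is what an AI SDK ToolSet fundamentally is.
interface FrontendTool {
  name: string;
  description: string;
  parameters: object; // JSON Schema from useCopilotAction
}

function toToolSetSketch(tools: FrontendTool[]) {
  // AI SDK tools are keyed by name, so the list becomes an object
  return Object.fromEntries(
    tools.map((t) => [
      t.name,
      { description: t.description, parameters: t.parameters },
    ]),
  );
}

const toolSet = toToolSetSketch([
  { name: "getWeather", description: "Look up weather", parameters: { type: "object" } },
]);
console.log(Object.keys(toolSet)); // ["getWeather"]
```

The real helper additionally translates each tool's JSON Schema into the schema format `streamText` expects, which is why you should use it rather than hand-rolling this mapping.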

With Reasoning (Thinking Models)

import { BuiltInAgent, convertMessagesToVercelAISDKMessages } from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) =>
    streamText({
      model: anthropic("claude-sonnet-4", {
        thinking: { type: "enabled", budgetTokens: 10000 },
      }),
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    }),
});

Reasoning events (REASONING_START, REASONING_MESSAGE_CONTENT, REASONING_END) are automatically extracted from the AI SDK stream.

With System Prompt, Context, and State

import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const systemParts: string[] = ["You are a helpful assistant."];

    // Add context from the frontend (useCopilotReadable)
    if (input.context?.length) {
      for (const ctx of input.context) {
        systemParts.push(`${ctx.description}:\n${ctx.value}`);
      }
    }

    // Add shared application state (useCoAgent, etc.)
    if (input.state && Object.keys(input.state).length > 0) {
      systemParts.push(
        `Application State:\n${JSON.stringify(input.state, null, 2)}`,
      );
    }

    const messages = convertMessagesToVercelAISDKMessages(input.messages);
    messages.unshift({ role: "system", content: systemParts.join("\n\n") });

    return streamText({
      model: openai("gpt-4o"),
      messages,
      abortSignal,
    });
  },
});

With forwardedProps

Let the frontend override model, temperature, or other settings at runtime:

import {
  BuiltInAgent,
  convertMessagesToVercelAISDKMessages,
  resolveModel,
} from "@copilotkit/runtime/v2";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new BuiltInAgent({
  type: "aisdk",
  factory: ({ input, abortSignal }) => {
    const props = (input.forwardedProps ?? {}) as Record<string, unknown>;

    const model =
      typeof props.model === "string"
        ? resolveModel(props.model)
        : openai("gpt-4o");

    const temperature =
      typeof props.temperature === "number" ? props.temperature : 0.7;

    return streamText({
      model,
      temperature,
      messages: convertMessagesToVercelAISDKMessages(input.messages),
      abortSignal,
    });
  },
});

Forward props from the frontend using the CopilotKit provider's properties prop:

<CopilotKit properties={{ model: "anthropic/claude-sonnet-4", temperature: 0.3 }}>
  <CopilotChat />
</CopilotKit>

Helper Utilities

These utilities are exported from @copilotkit/runtime/v2 to help convert between CopilotKit's input format and your backend's expected format:

| Utility | Description |
| --- | --- |
| `convertInputToTanStackAI(input)` | Converts `RunAgentInput` to `{ messages, systemPrompts }` for TanStack AI's `chat()`. Handles system/developer messages, context, and state. |
| `convertMessagesToVercelAISDKMessages(messages)` | Converts AG-UI messages to the Vercel AI SDK's `ModelMessage[]` format. |
| `convertToolsToVercelAITools(tools)` | Converts frontend-defined tools (JSON Schema) to the AI SDK's `ToolSet`. |
| `convertToolDefinitionsToVercelAITools(tools)` | Converts `defineTool()` definitions (Standard Schema) to the AI SDK's `ToolSet`. |
| `resolveModel(spec)` | Resolves `"openai/gpt-4o"` strings to AI SDK `LanguageModel` instances. |
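For intuition about the `"provider/model"` spec format that `resolveModel` accepts, here is a hypothetical sketch of the parsing step; this is not the library's code, and the real helper goes on to instantiate an AI SDK `LanguageModel` for the named provider:

```typescript
// Hypothetical sketch: splitting a "provider/model" spec string.
// Illustrative only; resolveModel from @copilotkit/runtime/v2 does the
// actual provider lookup and returns a LanguageModel instance.
function parseModelSpec(spec: string): { provider: string; model: string } {
  const slash = spec.indexOf("/");
  if (slash === -1) throw new Error(`invalid model spec: ${spec}`);
  return { provider: spec.slice(0, slash), model: spec.slice(slash + 1) };
}

console.log(parseModelSpec("openai/gpt-4o")); // provider "openai", model "gpt-4o"
```

This spec format is what makes the forwardedProps pattern above work: the frontend can send any `"provider/model"` string and the factory resolves it at request time.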