MCP Servers

What are MCP Servers?

MCP (Model Context Protocol) servers provide additional tools and capabilities to your agent. The Built-in Agent supports connecting to MCP servers via HTTP or SSE transports.

When should I use this?

  • You want to connect your agent to existing MCP-compatible tool servers
  • You need to add capabilities from third-party MCP providers
  • You want to share tools across multiple agents or applications

SSE transport

The SSE (Server-Sent Events) transport is the most common option:

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpServers: [
    {
      type: "sse", // [!code highlight]
      url: "https://my-mcp-server.example.com/sse",
    },
  ],
});

Authentication headers

Pass custom headers for authentication:

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpServers: [
    {
      type: "sse",
      url: "https://my-mcp-server.example.com/sse",
      headers: { // [!code highlight:3]
        Authorization: `Bearer ${process.env.MCP_API_KEY}`,
      },
    },
  ],
});

HTTP transport

The HTTP (Streamable HTTP) transport uses standard HTTP requests:

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpServers: [
    {
      type: "http", // [!code highlight]
      url: "https://my-mcp-server.example.com/mcp",
    },
  ],
});

The HTTP transport also accepts an optional options object for advanced configuration of the underlying StreamableHTTPClientTransport.
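For example, the options object can carry a requestInit for the transport's underlying fetch calls — a minimal sketch, assuming options is forwarded verbatim to the StreamableHTTPClientTransport constructor (requestInit is a field on the SDK's StreamableHTTPClientTransportOptions):

```typescript
const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpServers: [
    {
      type: "http",
      url: "https://my-mcp-server.example.com/mcp",
      options: {
        // requestInit is merged into every HTTP request the transport makes
        requestInit: {
          headers: { Authorization: `Bearer ${process.env.MCP_API_KEY}` },
        },
      },
    },
  ],
});
```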

Multiple servers

Connect multiple MCP servers — tools from all servers are available to the agent:

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpServers: [
    {
      type: "sse",
      url: "https://search-mcp.example.com/sse",
    },
    {
      type: "sse",
      url: "https://db-mcp.example.com/sse",
      headers: { Authorization: `Bearer ${process.env.DB_MCP_KEY}` },
    },
    {
      type: "http",
      url: "https://analytics-mcp.example.com/mcp",
    },
  ],
});

Combining with server tools

MCP servers work alongside defineTool server tools — the agent sees all tools from both sources:

const customTool = defineTool({
  name: "getUser",
  description: "Get the current user",
  parameters: z.object({}),
  execute: async () => {
    return { name: "Jane", role: "Admin" };
  },
});

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  tools: [customTool], // Your custom tools
  mcpServers: [        // Plus tools from MCP servers
    { type: "sse", url: "https://search-mcp.example.com/sse" },
  ],
});

User-managed MCP clients

The mcpServers approach creates a fresh MCP connection on every agent run. For latency-sensitive setups — or when you need persistent connections, dynamic auth, or tool caching — use mcpClients instead. You create and manage the MCP client yourself; the agent just calls .tools() to get tool definitions.

// Create a persistent client at startup
const transport = new StreamableHTTPClientTransport(
  new URL("https://my-mcp-server.example.com/mcp"),
);
const client = await createMCPClient({ transport });

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpClients: [client], // [!code highlight]
});

Unlike mcpServers, the agent never creates or closes these clients — you control the full lifecycle.
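Because you own the lifecycle, you are also responsible for closing the client when your process shuts down — a minimal sketch, reusing the client from the example above:

```typescript
// Close the persistent client on graceful shutdown; the agent will not do this for you
process.on("SIGTERM", async () => {
  await client.close();
  process.exit(0);
});
```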

Tool caching

By default, .tools() fetches from the MCP server on every agent run. To cache tools at startup:

const client = await createMCPClient({ transport });

let cachedTools: ToolSet | null = null;
const warmupPromise = client.tools().then(tools => { cachedTools = tools; });

const provider = {
  async tools() {
    if (cachedTools) return cachedTools;
    await warmupPromise;
    return cachedTools!;
  },
};

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpClients: [provider], // [!code highlight]
});

Tool discovery runs once in the background. The first request waits only if warmup hasn't finished yet; subsequent requests return instantly from cache.

Dynamic authentication

When using a persistent client, auth tokens may expire between requests. Handle this inside your provider:

// isTokenExpired, makeTransport, and getFreshToken are your own auth helpers
let currentClient = await createMCPClient({
  transport: makeTransport(getFreshToken()),
});

const provider = {
  async tools() {
    // Recreate the client only when the token has actually expired
    if (isTokenExpired()) {
      await currentClient.close();
      currentClient = await createMCPClient({
        transport: makeTransport(getFreshToken()),
      });
    }
    return currentClient.tools();
  },
};

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpClients: [provider],
});

You only pay the reconnection cost when the token actually expires, not on every request.

Combining with mcpServers

mcpClients and mcpServers can be used together. Tools from both are merged — on name collision, mcpServers tools take precedence:

const builtInAgent = new BuiltInAgent({
  model: "openai:gpt-5.4",
  mcpClients: [persistentClient],  // User-managed, persistent
  mcpServers: [                     // Agent-managed, per-request
    { type: "sse", url: "https://other-mcp.example.com/sse" },
  ],
});

Transport reference

| Property | SSE | HTTP | Description |
| --- | --- | --- | --- |
| type | "sse" | "http" | Transport type |
| url | string | string | MCP server URL |
| headers | Record<string, string> | — | Custom HTTP headers (SSE only) |
| options | — | StreamableHTTPClientTransportOptions | Transport options (HTTP only) |