  • Use and manage Vercel Sandbox directly from the Vercel CLI

    Vercel Sandboxes can now be used and managed directly from the Vercel CLI, through the vercel sandbox subcommand.

    This eliminates the need to install and maintain a separate command-line tool, and removes the friction of switching contexts. Your entire Sandbox workflow now lives exactly where you already work, keeping your development experience unified and fast.

    Run pnpm i -g vercel@latest to update to the latest Vercel CLI (at least v50.42.0).

  • Summary of CVE-2026-23869

    Summary

    A high-severity vulnerability (CVSS 7.5) in React Server Components can lead to Denial of Service.

    We created new rules to address this vulnerability and deployed them to the Vercel WAF to automatically protect all projects hosted on Vercel at no cost. However, do not rely on the WAF for full protection. Upgrade to a patched version immediately.

    Impact

    A specially crafted HTTP request sent to any App Router Server Function endpoint can, when deserialized, trigger excessive CPU usage. This can result in denial of service in unpatched environments.

    This vulnerability is present in Next.js 13.x, 14.x, 15.x, and 16.x, and in affected packages using the App Router. The issue is tracked upstream as CVE-2026-23869.

    Resolution

    After creating mitigations to address this vulnerability, we deployed them across our globally-distributed platform to protect our customers. We still recommend upgrading to the latest patched version.

    Updated releases of React and affected downstream frameworks include fixes to prevent this issue. All users should upgrade to a patched version as soon as possible.

    Fixed In

    • >= 15.0.0: to be fixed in 15.5.15

    • >= 16.0.0: to be fixed in 16.2.3
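    Based on the fixed-in versions listed above, a quick sketch for checking whether an installed Next.js version already contains the fix. The `isPatched` helper is illustrative only, not part of any Vercel or Next.js tooling, and assumes plain `major.minor.patch` version strings:

```typescript
// Parse a "major.minor.patch" version string into numeric parts.
function parse(v: string): number[] {
  return v.split(".").map(Number);
}

// True when version a >= version b, comparing part by part.
function gte(a: string, b: string): boolean {
  const [x, y] = [parse(a), parse(b)];
  for (let i = 0; i < 3; i++) {
    if ((x[i] ?? 0) !== (y[i] ?? 0)) return (x[i] ?? 0) > (y[i] ?? 0);
  }
  return true;
}

// True when the given Next.js version includes the CVE-2026-23869 fix,
// using the fixed-in releases above (15.5.15 for 15.x, 16.2.3 for 16.x).
function isPatched(version: string): boolean {
  const [major] = parse(version);
  if (major === 15) return gte(version, "15.5.15");
  if (major === 16) return gte(version, "16.2.3");
  return false; // 13.x and 14.x: upgrade to a patched major line
}

console.log(isPatched("15.5.15")); // true
console.log(isPatched("16.1.0")); // false
```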

  • Vercel Sandbox now supports up to 32 vCPU + 64 GB RAM configurations

    Vercel Sandbox now supports creating sandboxes with up to 32 vCPUs and 64 GB of RAM for Enterprise customers. This enables running large, resource-intensive applications that are CPU-bound or require a large amount of memory.

    Get started by setting the resources.vcpus option in the SDK:

    import { Sandbox } from "@vercel/sandbox";

    const sandbox = await Sandbox.create({
      resources: { vcpus: 32 },
    });

    Or using the --vcpus option in the CLI:

    vercel sandbox create --connect --vcpus 32

    Learn more about Sandbox in the docs.

  • Chat SDK adds Liveblocks support

    Chat SDK now supports Liveblocks, enabling bots to read and respond in Liveblocks Comments threads with the new Liveblocks adapter. This is an official vendor adapter built and maintained by the Liveblocks team.

    Teams can build bots that post, edit, and delete comments, react with emojis, and resolve @mentions within Liveblocks rooms.

    Try the Liveblocks adapter today:

    import { Chat } from "chat";
    import { createLiveblocksAdapter } from "@liveblocks/chat-sdk-adapter";

    const bot = new Chat({
      userName: "mybot",
      adapters: {
        liveblocks: createLiveblocksAdapter({
          apiKey: "sk_...",
          webhookSecret: "whsec_...",
          botUserId: "my-bot-user",
          botUserName: "MyBot"
        }),
      },
    });

    bot.onNewMention(async (thread, message) => {
      await thread.post(`You said: ${message.text}`);
    });

    Read the documentation to get started, browse the directory, or build your own adapter.

  • Opus 4.6 Fast Mode available on AI Gateway

    Fast mode support for Claude Opus 4.6 is now available on AI Gateway.

    Fast mode is a premium high-speed option that delivers 2.5x faster output token speeds with the same model intelligence. This is an early, experimental feature.

    Fast mode's increased output token speeds enable new use cases, especially for human-in-the-loop workflows. Run large coding tasks without needing to context switch and get planning results without extended waits.

    To enable fast mode, pass speed: 'fast' in the anthropic provider options in AI SDK:

    import { streamText } from 'ai';

    const result = streamText({
      model: 'anthropic/claude-opus-4.6',
      prompt:
        `Analyze this codebase structure and create a step-by-step plan
        to add user authentication.`,
      providerOptions: {
        anthropic: {
          speed: 'fast',
        },
      },
    });

    You can use fast mode with Claude Code via AI Gateway by setting "fastMode": true in your settings.json.

    {
      "model": "opus[1m]",
      "fastMode": true
    }

    Try fast mode directly in the AI Gateway playground for Opus 4.6.

    Fast mode is priced at 6x standard Opus rates.

    Standard: Input $5 / 1M tokens, Output $25 / 1M tokens

    Fast Mode: Input $30 / 1M tokens, Output $150 / 1M tokens

    All standard pricing multipliers (e.g., prompt caching) apply on top of these rates.
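    As a sanity check on the rates above, a small sketch estimating per-request cost at standard versus fast-mode pricing. It ignores prompt caching and other multipliers; the `estimateCost` helper and the example token counts are illustrative:

```typescript
// Per-million-token rates from the table above, in USD.
const RATES = {
  standard: { input: 5, output: 25 },
  fast: { input: 30, output: 150 },
};

// Estimate the cost of a single request in USD.
function estimateCost(
  mode: "standard" | "fast",
  inputTokens: number,
  outputTokens: number
): number {
  const r = RATES[mode];
  return (inputTokens / 1_000_000) * r.input + (outputTokens / 1_000_000) * r.output;
}

// Example: 100k input tokens, 10k output tokens.
console.log(estimateCost("standard", 100_000, 10_000)); // 0.75
console.log(estimateCost("fast", 100_000, 10_000)); // 4.5 (6x standard)
```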

  • AI Gateway: Track top AI models by usage

    The AI Gateway model leaderboard ranks the most used models over time by total token volume across all traffic through the Gateway. The leaderboard updates regularly.

    View the leaderboard
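    The ranking metric is straightforward: sum token volume per model across all requests, then sort descending. A minimal sketch of that aggregation, with made-up usage records (the leaderboard itself is computed server-side by AI Gateway):

```typescript
// Hypothetical usage records: total tokens observed per request, by model.
const usage = [
  { model: "anthropic/claude-opus-4.6", tokens: 1200 },
  { model: "zai/glm-5.1", tokens: 800 },
  { model: "anthropic/claude-opus-4.6", tokens: 500 },
];

// Aggregate total token volume per model.
const totals = new Map<string, number>();
for (const { model, tokens } of usage) {
  totals.set(model, (totals.get(model) ?? 0) + tokens);
}

// Rank models by total token volume, descending.
const leaderboard = [...totals.entries()].sort((a, b) => b[1] - a[1]);

console.log(leaderboard);
// [["anthropic/claude-opus-4.6", 1700], ["zai/glm-5.1", 800]]
```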

  • GLM 5.1 on AI Gateway

    GLM 5.1 from Z.ai is now available on Vercel AI Gateway.

    Designed for long-horizon autonomous tasks, GLM-5.1 can work continuously on a single task for extended periods, handling planning, execution, testing, and iterative refinement in a closed loop. Rather than one-shot code generation, it runs an autonomous cycle of benchmarking, identifying bottlenecks, and optimizing across many iterations, with particular strength in sustained multi-step engineering workflows.

    Beyond agentic coding, GLM-5.1 improves on general conversation, creative writing, front-end prototyping, and office productivity tasks like generating PowerPoint, Word, and Excel documents.

    To use GLM 5.1, set model to zai/glm-5.1 in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'zai/glm-5.1',
      prompt:
        `Refactor the data ingestion pipeline to support streaming,
        add error recovery, and benchmark throughput against the
        current implementation.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • Query and visualize workflow data in Vercel Observability


    Observability Plus's query builder now lets you create custom queries on workflow runs and steps, visualizing traffic, performance, and other key metrics across Vercel Workflows.

    Queries include breakdowns by run and step status, and can be filtered and grouped by environment, project, workflow, and step.

    The query builder is available to Pro and Enterprise teams using Observability Plus.

    Learn more about the Workflow SDK, Observability, and Observability Plus.

  • Manage Vercel Microfrontends with AI Agents and the CLI

    Vercel Microfrontends now include two new setup and management tools: an AI skill for coding agents and new Vercel CLI commands.

    New Vercel Microfrontends skill: Install the Microfrontends skill to let your AI coding agent guide you through group creation with natural language prompts. It will automatically generate microfrontends.json, wire up framework integrations, and manage projects, all without leaving your editor.

    npx skills add vercel/microfrontends

    Once added, ask your agent to create your first microfrontend group using this prompt.

    Get started with the Microfrontends skill.

    New CLI commands: The Vercel CLI now includes commands for managing microfrontend groups, so you can create, inspect, and manage groups from the terminal without opening the dashboard.

    • vercel microfrontends create-group

    • vercel microfrontends inspect-group

    • vercel microfrontends add-to-group

    • vercel microfrontends remove-from-group

    • vercel microfrontends delete-group

    Learn more in the CLI docs.