Vercel Sandboxes can now be used and managed directly from the Vercel CLI, through the vercel sandbox subcommand.
This eliminates the need to install and maintain a separate command-line tool, and removes the friction of switching contexts. Your entire Sandbox workflow now lives exactly where you already work, keeping your development experience unified and fast.
Run pnpm i -g vercel@latest to update to the latest Vercel CLI (at least v50.42.0).
A high-severity vulnerability (CVSS 7.5) in React Server Components can lead to Denial of Service.
We created new rules to address this vulnerability and deployed them to the Vercel WAF to automatically protect all projects hosted on Vercel at no cost. However, do not rely on the WAF for full protection. Immediate upgrades to a patched version are required.
A specially crafted HTTP request can be sent to any App Router Server Function endpoint that, when deserialized, may trigger excessive CPU usage. This can result in denial of service in unpatched environments.
This vulnerability is present in Next.js 13.x, 14.x, 15.x, and 16.x, and in affected packages using the App Router. The issue is tracked upstream as CVE-2026-23869.
After creating mitigations to address this vulnerability, we deployed them across our globally-distributed platform to protect our customers. We still recommend upgrading to the latest patched version.
Updated releases of React and affected downstream frameworks include fixes to prevent this issue. All users should upgrade to a patched version as soon as possible.
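As one upgrade path (assuming an npm-based project; substitute your package manager as needed), the patched releases can be installed with:

```shell
# Upgrade Next.js and React to the latest patched releases.
# Pin to your current major line (e.g. next@15) if you cannot move across majors yet.
npm install next@latest react@latest react-dom@latest
```

After upgrading, confirm the installed version with `npx next --version` before redeploying.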
Vercel Sandbox now supports creating sandboxes with up to 32 vCPUs and 64 GB of RAM for Enterprise customers. This enables running large, resource-intensive applications that are CPU-bound or require a large amount of memory.
Get started by setting the resources.vcpus option in the SDK:
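A minimal sketch using the @vercel/sandbox SDK; the resource values come from this announcement, and the snippet assumes an environment already authenticated with Vercel:

```typescript
import { Sandbox } from "@vercel/sandbox";

// resources.vcpus requests the new Enterprise ceiling of 32 vCPUs;
// memory scales with the vCPU count, up to 64 GB at this size.
const sandbox = await Sandbox.create({
  resources: { vcpus: 32 },
});
```

Other `Sandbox.create` options (runtime, timeout, and so on) are unchanged; consult the Sandbox SDK reference for the full signature.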
Chat SDK now supports Liveblocks, enabling bots to read and respond in Liveblocks Comments threads with the new Liveblocks adapter. This is an official vendor adapter built and maintained by the Liveblocks team.
Teams can build bots that post, edit, and delete comments, react with emojis, and resolve @mentions within Liveblocks rooms.
Fast mode support for Claude Opus 4.6 is now available on AI Gateway.
Fast mode is a premium high-speed option that delivers 2.5x faster output token speeds with the same model intelligence. This is an early, experimental feature.
Fast mode's increased output token speeds enable new use cases, especially for human-in-the-loop workflows. Run large coding tasks without needing to context switch and get planning results without extended waits.
To enable fast mode, pass speed: 'fast' in the anthropic provider options in AI SDK:
import { streamText } from "ai";

const result = streamText({
  model: "anthropic/claude-opus-4.6",
  prompt: `Analyze this codebase structure and create a step-by-step plan
to add user authentication.`,
  providerOptions: {
    anthropic: {
      speed: "fast",
    },
  },
});

const text = await result.text;
You can use fast mode with Claude Code via AI Gateway by setting "fastMode": true in your settings.json.
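The corresponding Claude Code setting is a top-level key in settings.json, shown here in isolation; merge it into your existing file:

```json
{
  "fastMode": true
}
```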
Designed for long-horizon autonomous tasks, GLM-5.1 can work continuously on a single task for extended periods, handling planning, execution, testing, and iterative refinement in a closed loop. Rather than one-shot code generation, it runs an autonomous cycle of benchmarking, identifying bottlenecks, and optimizing across many iterations, with particular strength in sustained multi-step engineering workflows.
Beyond agentic coding, GLM-5.1 improves on general conversation, creative writing, front-end prototyping, and office productivity tasks like generating PowerPoint, Word, and Excel documents.
To use GLM-5.1, set model to zai/glm-5.1 in the AI SDK.
import { streamText } from 'ai';

const result = streamText({
  model: 'zai/glm-5.1',
  prompt: `Refactor the data ingestion pipeline to support streaming,
add error recovery, and benchmark throughput against the
current implementation.`,
});
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
Observability Plus's query builder now lets you create custom queries on workflow runs and steps, visualizing traffic, performance, and other key metrics across Vercel Workflows.
Queries include breakdowns by run and step status, and can be filtered and grouped by environment, project, workflow, and step.
The query builder is available to Pro and Enterprise teams using Observability Plus.
Vercel Microfrontends now include two new setup and management tools: an AI skill for coding agents and new Vercel CLI commands.
New Vercel Microfrontends skill: Install the Microfrontends skill to let your AI coding agent guide you through group creation with natural language prompts. It will automatically generate microfrontends.json, wire up framework integrations, and manage projects, all without leaving your editor.
npx skills add vercel/microfrontends
Once added, ask your agent to create your first microfrontend group with a natural-language prompt.
New CLI commands: The Vercel CLI now includes commands for managing microfrontend groups, so you can create, inspect, and manage groups from the terminal without opening the dashboard.