• Automatic build fix suggestions with Vercel Agent

    You can now get automatic code-fix suggestions for broken builds from the Vercel Agent, directly in GitHub pull request reviews or in the Vercel Dashboard.

    When the Vercel Agent reviews your pull request, it now scans your deployments for build errors, and when it detects failures it automatically suggests a code fix based on your code and build logs.

    Vercel Agent - Automatic code suggestion on GitHub pull request

    In addition, Vercel Agent can suggest code fixes inside the Vercel dashboard whenever a build error is detected, and can push the suggested change to a GitHub pull request so you can review it before merging.

    Vercel Agent - Build fix suggestions on the Vercel Dashboard

    Get started with Vercel Agent code review in the Agent dashboard, or learn more in the documentation.

  • Automated security audits now available for skills.sh

    Skills on skills.sh now have automated security audits to help developers use them with confidence.

    Working with our partners Gen, Socket, and Snyk, we produce independent security reports that allow us to rapidly scale audits across more than 60,000 skills and counting.

    Skills.sh provides greater ecosystem support with:

    • Transparent results: Security audits appear publicly on each skill's detail page.

    • Leaderboard protection: Skills flagged as malicious are automatically hidden from the leaderboard and search results. If you navigate directly to a flagged skill, a warning note appears before installation.

    • Security validation: As of skills@1.4.0, adding skills clearly displays audit results and risk levels before installation.

    Learn more at skills.sh.

  • Recraft V4 on AI Gateway

    Recraft V4 is now available on AI Gateway.

    A text-to-image model built for professional design and marketing use cases, V4 was developed with input from working designers. The model improves photorealism, with more realistic skin, natural textures, and fewer synthetic artifacts. It also produces images with clean lighting and varied composition. For illustration, it can generate original characters with less predictable color palettes.

    There are two versions:

    • V4: Faster and more cost-efficient, suited for everyday work and iteration

    • V4 Pro: Generates higher-resolution images for print-ready assets and large-scale use

    To use this model, set model to recraft/recraft-v4-pro or recraft/recraft-v4 in the AI SDK:

    import { generateImage } from 'ai';

    const result = await generateImage({
      model: 'recraft/recraft-v4',
      prompt: `Product photo of a ceramic coffee mug on a wooden table,
        morning light, shallow depth of field.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
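    The provider routing mentioned above can also be steered per request. As a sketch of what that configuration can look like — the `gateway.order` option name and the provider slugs here are assumptions to verify against the AI Gateway provider-options documentation:

    ```typescript
    // Config fragment (assumption: the `gateway.order` provider option from
    // the AI Gateway docs). Providers are tried in this order, with automatic
    // fallback to the next entry on failure. The slugs are illustrative.
    const providerOptions = {
      gateway: {
        order: ['anthropic', 'vertex'],
      },
    };
    ```
    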

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

  • Vercel Sandbox snapshots now allow custom retention periods

    Snapshots created with Vercel Sandbox now have configurable expiration, instead of the previous 7 days limit, along with higher defaults.

    import { Sandbox } from '@vercel/sandbox';
    import ms from 'ms';

    const sandbox = await Sandbox.create();
    await sandbox.snapshot({ expiration: ms('1d') });

    The expiration can be configured to anything from 1 day to never expiring. If not provided, the default snapshot expiration is 30 days.
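    The SDK takes the expiration in milliseconds, which is why the example above uses the `ms` package. If you prefer not to add that dependency, a minimal converter might look like this (an illustrative helper, not part of @vercel/sandbox):

    ```typescript
    // Minimal sketch: convert duration strings like '1d' or '14d' into
    // milliseconds, mirroring what the `ms` package does in the example above.
    // This helper is illustrative, not part of @vercel/sandbox.
    function durationMs(input: string): number {
      const match = /^(\d+)(d|h|m|s)$/.exec(input);
      if (!match) throw new Error(`Unsupported duration: ${input}`);
      const units: Record<string, number> = {
        s: 1_000,
        m: 60_000,
        h: 3_600_000,
        d: 86_400_000,
      };
      return Number(match[1]) * units[match[2]];
    }

    console.log(durationMs('1d')); // 86400000
    ```
    
    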

    You can also configure this in the CLI.

    # Create a snapshot of a running sandbox
    sandbox snapshot sb_1234567890 --stop
    # Create a snapshot that expires in 14 days
    sandbox snapshot sb_1234567890 --stop --expiration 14d
    # Create a snapshot that never expires
    sandbox snapshot sb_1234567890 --stop --expiration 0

    Read the documentation to learn more about snapshots.

  • Claude Sonnet 4.6 is live on AI Gateway

    Claude Sonnet 4.6 from Anthropic is now available on AI Gateway with a 1M token context window.

    Sonnet 4.6 approaches Opus-level intelligence with strong improvements in agentic coding, code review, frontend UI quality, and computer use accuracy. The model proactively executes tasks, delegates to subagents, and parallelizes tool calls, with MCP support for scaled tool use. As a hybrid reasoning model, Sonnet 4.6 delivers both near-instant responses and extended thinking within the same model.

    To use this model, set model to anthropic/claude-sonnet-4.6 in the AI SDK. This model supports the effort setting and adaptive thinking:

    import { streamText } from 'ai';

    const result = streamText({
      model: 'anthropic/claude-sonnet-4.6',
      prompt: `Build a dashboard component from this spec with
        responsive layout, dark mode support, and accessibility.`,
      providerOptions: {
        anthropic: {
          effort: 'medium',
          thinking: { type: 'adaptive' },
        },
      },
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

  • Improved streaming runtime logs exports

    With runtime logs, you can view and export your logs. Exports now stream directly to the browser: your download starts immediately, and you can keep using the Vercel dashboard while the export runs in the background. There is no longer a wait for large files to buffer before the download begins.

    Additionally, we've added two new options: you can now export exactly what's on your screen, or all requests matching your current search.

    All plans can export up to 10,000 requests per export, and Observability Plus subscribers can export up to 100,000 requests.

    Exported log data is now indexed by request, keeping exports consistent with the Runtime Logs dashboard: export limits are applied per request, so the exported data matches the filtered requests shown on the dashboard.

    Learn more about runtime logs.

  • Qwen 3.5 Plus is on AI Gateway

    Qwen 3.5 Plus is now available on AI Gateway.

    The model comes with a 1M context window and built-in adaptive tool use. Qwen 3.5 Plus excels at agentic workflows, thinking, searching, and using tools across multimodal contexts, making it well-suited for web development, frontend tasks, and turning instructions into working code. Compared to Qwen 3 VL, it delivers stronger performance in scientific problem solving and visual reasoning tasks.

    To use this model, set model to alibaba/qwen3.5-plus in the AI SDK:

    import { streamText } from 'ai';

    const result = streamText({
      model: 'alibaba/qwen3.5-plus',
      prompt: `Analyze this UI mockup, extract the design system,
        and generate a production-ready React component
        with responsive breakpoints and theme support.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

  • Stale-if-error cache-control directive now supported for all responses

    Vercel CDN now supports the stale-if-error directive with Cache-Control headers, enabling more resilient caching behavior during origin failures.

    You can now use the stale-if-error directive to specify how long, in seconds, a stale cached response may still be served after a request to the origin fails. When the directive is present and the origin returns an error, the CDN may serve a previously cached response instead of passing the error to the client. Stale responses may be served for failures such as 500 Internal Server Error responses, network failures, or DNS errors.

    This allows applications to remain available and respond gracefully when upstream services are temporarily unavailable.
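    As a minimal sketch, the directive is just another token in the Cache-Control value your origin sends. The helper and durations below are illustrative, not recommendations:

    ```typescript
    // Sketch: build a Cache-Control value that lets the CDN keep serving a
    // cached response for up to a day (86400 s) if the origin starts failing.
    // Durations here are illustrative.
    function cacheControl(opts: { sMaxage: number; staleIfError: number }): string {
      return `s-maxage=${opts.sMaxage}, stale-if-error=${opts.staleIfError}`;
    }

    const header = cacheControl({ sMaxage: 60, staleIfError: 86400 });
    console.log(header); // s-maxage=60, stale-if-error=86400

    // In a route handler, this might be attached as:
    // return new Response(body, { headers: { 'Cache-Control': header } });
    ```
    
    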

    Read the stale-if-error documentation to learn more.