
fix(deps): update dependency @mastra/core to ^0.24.0#12

Open
renovate[bot] wants to merge 1 commit into main from renovate/mastra-core-0.x

Conversation

renovate bot commented Aug 9, 2025

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package: @mastra/core (source)
Change: ^0.12.1 → ^0.24.0

Release Notes

mastra-ai/mastra (@​mastra/core)

v0.24.9

Compare Source

v0.24.8

Compare Source

v0.24.7

Compare Source

v0.24.6: 2025-11-27

Compare Source

Highlights

Stream nested execution context from Workflows and Networks to your UI

Agent responses now stream live through workflows and networks, with complete execution metadata flowing to your UI.

In workflows, pipe agent streams directly through steps:

const planActivities = createStep({
  execute: async ({ mastra, writer }) => {
    const agent = mastra?.getAgent('weatherAgent');
    const response = await agent.stream('Plan activities');
    await response.fullStream.pipeTo(writer);
    
    return { activities: await response.text };
  }
});

In networks, each step now tracks properly—unique IDs, iteration counts, task info, and agent handoffs all flow through with correct sequencing. No more duplicated steps or missing metadata.

Both surface text chunks, tool calls, and results as they happen, so users see progress in real time instead of waiting for the full response.

AI-SDK voice models are now supported

CompositeVoice now accepts AI SDK voice models directly—use OpenAI for transcription, ElevenLabs for speech, or any combination you want.

import { CompositeVoice } from "@​mastra/core/voice";
import { openai } from "@​ai-sdk/openai";
import { elevenlabs } from "@​ai-sdk/elevenlabs";

const voice = new CompositeVoice({
  input: openai.transcription('whisper-1'),
  output: elevenlabs.speech('eleven_turbo_v2'),
});

const audio = await voice.speak("Hello from AI SDK!");
const transcript = await voice.listen(audio);

Works with OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and more. AI SDK models are automatically wrapped, so you can swap providers without changing your code.

Changelog

@​mastra/ai-sdk

  • Support streaming agent text chunks from workflow-step-output

    Adds support for streaming text and tool call chunks from agents running inside workflows via the workflow-step-output event. When you pipe an agent's stream into a workflow step's writer, the text chunks, tool calls, and other streaming events are automatically included in the workflow stream and converted to UI messages.

Features:

  • Added includeTextStreamParts option to WorkflowStreamToAISDKTransformer (defaults to true)
  • Added isMastraTextStreamChunk type guard to identify Mastra chunks with text streaming data
  • Support for streaming text chunks: text-start, text-delta, text-end
  • Support for streaming tool calls: tool-call, tool-result
  • Comprehensive test coverage in transformers.test.ts
  • Updated documentation for workflow streaming and workflowRoute()

Example:

const planActivities = createStep({
  execute: async ({ mastra, writer }) => {
    const agent = mastra?.getAgent('weatherAgent');
    const response = await agent.stream('Plan activities');
    await response.fullStream.pipeTo(writer);
    
    return { activities: await response.text };
  }
});

When served via workflowRoute(), the UI receives incremental text updates as the agent generates its response, providing a smooth streaming experience. (#​10568)

  • Fix chat route to use agent ID instead of agent name for resolution. The /chat/:agentId endpoint now correctly resolves agents by their ID property (e.g., weather-agent) instead of requiring the camelCase variable name (e.g., weatherAgent). This fixes issue #​10469 where URLs like /chat/weather-agent would return 404 errors. (#​10565)

  • Fixes propagation of custom data chunks from nested workflows in branches to the root stream when using toAISdkV5Stream with {from: 'workflow'}.

    Previously, when a nested workflow within a branch used writer.custom() to write data-* chunks, those chunks were wrapped in workflow-step-output events and not extracted, causing them to be dropped from the root stream.

    Changes:

    • Added handling for workflow-step-output chunks in transformWorkflow() to extract and propagate data-* chunks
    • When a workflow-step-output chunk contains a data-* chunk in its payload.output, the transformer now extracts it and returns it directly to the root stream
    • Added comprehensive test coverage for nested workflows with branches and custom data propagation

    This ensures that custom data chunks written via writer.custom() in nested workflows (especially those within branches) are properly propagated to the root stream, allowing consumers to receive progress updates, metrics, and other custom data from nested workflow steps. (#​10447)
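The extraction rule described above can be sketched generically. This is a minimal illustration, not the actual transformWorkflow() code; the chunk shape (a type string plus payload.output) is an assumption based on the names used in this entry.

```typescript
// Hypothetical chunk shape, inferred from the entry above.
type Chunk = { type: string; payload?: { output?: Chunk } };

// If a workflow-step-output chunk wraps a data-* chunk, surface the inner
// chunk on the root stream; otherwise pass the chunk through unchanged.
function unwrapStepOutput(chunk: Chunk): Chunk {
  if (
    chunk.type === 'workflow-step-output' &&
    chunk.payload?.output?.type.startsWith('data-')
  ) {
    return chunk.payload.output;
  }
  return chunk;
}
```

A transformer applying this rule per chunk lets consumers of the root stream see progress updates from nested workflow steps without unwrapping anything themselves.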

  • Fix network data step formatting in AI SDK stream transformation

    Previously, network execution steps were not being tracked correctly in the AI SDK stream transformation. Steps were being duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.

    Changes:

    • Enhanced step tracking in AgentNetworkToAISDKTransformer to properly maintain step state throughout the execution lifecycle
    • Steps are now identified by unique IDs and updated in place rather than creating duplicates
    • Added proper iteration and task metadata to each step in the network execution flow
    • Fixed agent, workflow, and tool execution events to correctly populate step data
    • Updated network stream event types to include networkId, workflowId, and consistent runId tracking
    • Added test coverage for network custom data chunks with comprehensive validation

    This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#​10432)

  • [0.x] Make workflowRoute includeTextStreamParts option default to false (#​10574)

  • Add support for tool-call-approval and tool-call-suspended events in chatRoute (#​10205)

  • Backports the messageMetadata and onError support from PR #​10313 to the 0.x branch, adding these features to the toAISdkFormat function.

    • Added messageMetadata parameter to toAISdkFormat options
      • Function receives the current stream part and returns metadata to attach to start and finish chunks
      • Metadata is included in start and finish chunks when provided
    • Added onError parameter to toAISdkFormat options
      • Allows custom error handling during stream conversion
      • Falls back to safeParseErrorObject utility when not provided
    • Added safeParseErrorObject utility function for error parsing
    • Updated AgentStreamToAISDKTransformer to accept and use messageMetadata and onError
    • Updated JSDoc documentation with parameter descriptions and usage examples
    • Added comprehensive test suite for messageMetadata functionality (6 test cases)
    • Fixed existing test file to use toAISdkFormat instead of the removed toAISdkV5Stream
      • All existing tests pass (14 tests across 3 test files)
    • New tests verify:
      • messageMetadata is called with correct part structure
      • Metadata is included in start and finish chunks
      • Proper handling when messageMetadata is not provided or returns null/undefined
      • Function is called for each relevant part in the stream
    • Uses UIMessageStreamOptions<UIMessage>['messageMetadata'] and UIMessageStreamOptions<UIMessage>['onError'] types from AI SDK v5 for full type compatibility

    Backport of: #​10313 (#​10314)

  • Fixed workflow routes to properly receive request context from middleware. This aligns the behavior of workflowRoute with chatRoute, ensuring that context set in middleware is consistently forwarded to workflows.

    When both middleware and request body provide a request context, the middleware value now takes precedence, and a warning is emitted to help identify potential conflicts.

    See #​10427 (#​10427)
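The precedence rule described above can be sketched as follows. This is an illustration under assumed shapes, not the actual workflowRoute code, and resolveRequestContext is a hypothetical name.

```typescript
// Hypothetical helper showing the precedence rule: middleware context wins
// over request-body context, with a warning when both are present.
function resolveRequestContext(
  fromMiddleware: Record<string, unknown> | undefined,
  fromBody: Record<string, unknown> | undefined,
  warn: (msg: string) => void,
): Record<string, unknown> {
  if (fromMiddleware && fromBody) {
    warn('Request context set in both middleware and request body; using the middleware value.');
    return fromMiddleware;
  }
  return fromMiddleware ?? fromBody ?? {};
}
```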

@​mastra/astra

@​mastra/auth-clerk

  • remove organization requirement from default authorization (#​10551)

@​mastra/chroma

@​mastra/clickhouse

  • fix: ensure score responses match saved payloads for Mastra Stores. (#​10570)
  • Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#​10573)
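The ordering rule from this fix can be sketched like so. The message shape is an assumption for illustration, and the real stores use MessageList for the deduplication step.

```typescript
// Hypothetical minimal message shape for illustration.
type StoredMessage = { id: string; createdAt: Date };

// Combine paginated and semantically recalled ("include") messages, drop
// duplicates by id, then sort explicitly by createdAt so conversation
// history is chronological regardless of storage order.
function combineAndSort(
  paginated: StoredMessage[],
  included: StoredMessage[],
): StoredMessage[] {
  const byId = new Map<string, StoredMessage>();
  for (const m of [...paginated, ...included]) byId.set(m.id, m);
  return [...byId.values()].sort(
    (a, b) => a.createdAt.getTime() - b.createdAt.getTime(),
  );
}
```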

@​mastra/cloudflare

  • fix: ensure score responses match saved payloads for Mastra Stores. (#​10570)
  • Fix message sorting in listMessages when using semantic recall (include parameter). Messages are now always sorted by createdAt instead of storage order, ensuring correct chronological ordering of conversation history. (#​10545)

@​mastra/cloudflare-d1

  • fix: ensure score responses match saved payloads for Mastra Stores. (#​10570)
  • Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#​10573)

@​mastra/core

  • Fix base64 encoded images with threads - issue #​10480

    Fixed "Invalid URL" error when using base64 encoded images (without data: prefix) in agent calls with threads and resources. Raw base64 strings are now automatically converted to proper data URIs before being processed.

    Changes:

  • Updated attachments-to-parts.ts to detect and convert raw base64 strings to data URIs

  • Fixed MessageList image processing to handle raw base64 in two locations:

    • Image part conversion in aiV4CoreMessageToV1PromptMessage
    • File part to experimental_attachments conversion in mastraDBMessageToAIV4UIMessage
  • Added comprehensive tests for base64 images, data URIs, and HTTP URLs with threads

    Breaking Change: None - this is a bug fix that maintains backward compatibility while adding support for raw base64 strings. (#​10483)
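The normalization the fix performs can be sketched as below. The base64 heuristic and the default mime type are illustrative assumptions, not the exact logic in attachments-to-parts.ts.

```typescript
// Raw base64 strings become data URIs; existing data: URIs and http(s)
// URLs pass through untouched. The default image/png mime type is an
// assumption for this sketch.
function toDataUri(image: string, mimeType = 'image/png'): string {
  if (image.startsWith('data:') || /^https?:\/\//.test(image)) {
    return image;
  }
  if (/^[A-Za-z0-9+/]+={0,2}$/.test(image)) {
    return `data:${mimeType};base64,${image}`;
  }
  return image;
}
```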

  • SimpleAuth and improved CloudAuth (#​10569)

  • Fixed OpenAI schema compatibility when using agent.generate() or agent.stream() with structuredOutput.

Changes

  • Automatic transformation: Zod schemas are now automatically transformed for OpenAI strict mode compatibility when using OpenAI models (including reasoning models like o1, o3, o4)
  • Optional field handling: .optional() fields are converted to .nullable() with a transform that converts null → undefined, preserving optional semantics while satisfying OpenAI's strict mode requirements
  • Preserves nullable fields: Intentionally .nullable() fields remain unchanged
  • Deep transformation: Handles .optional() fields at any nesting level (objects, arrays, unions, etc.)
  • JSON Schema objects: Not transformed, only Zod schemas

Example

const agent = new Agent({
  name: 'data-extractor',
  model: { provider: 'openai', modelId: 'gpt-4o' },
  instructions: 'Extract user information',
});

const schema = z.object({
  name: z.string(),
  age: z.number().optional(),
  deletedAt: z.date().nullable(),
});

// Schema is automatically transformed for OpenAI compatibility
const result = await agent.generate('Extract: John, deleted yesterday', {
  structuredOutput: { schema },
});

// Result: { name: 'John', age: undefined, deletedAt: null }

(#​10454)

  • deleteVectors, deleteFilter when upserting, updateVector filter (#​10244)

  • Fix generateTitle model type to accept AI SDK LanguageModelV2

    Updated the generateTitle.model config option to accept MastraModelConfig instead of MastraLanguageModel. This allows users to pass raw AI SDK LanguageModelV2 models (e.g., anthropic.languageModel('claude-3-5-haiku-20241022')) directly without type errors.

    Previously, passing a standard LanguageModelV2 would fail because MastraLanguageModelV2 has different doGenerate/doStream return types. Now MastraModelConfig is used consistently across:

  • memory/types.ts - generateTitle.model config

  • agent.ts - genTitle, generateTitleFromUserMessage, resolveTitleGenerationConfig

  • agent-legacy.ts - AgentLegacyCapabilities interface (#​10567)

  • Fix message metadata not persisting when using simple message format. Previously, custom metadata passed in messages (e.g., {role: 'user', content: 'text', metadata: {userId: '123'}}) was not being saved to the database. This occurred because the CoreMessage conversion path didn't preserve metadata fields.

    Now metadata is properly preserved for all message input formats:

  • Simple CoreMessage format: {role, content, metadata}

  • Full UIMessage format: {role, content, parts, metadata}

  • AI SDK v5 ModelMessage format with metadata

    Fixes #​8556 (#​10488)

  • feat: Composite auth implementation (#​10359)

  • Fix requireApproval property being ignored for tools passed via toolsets, clientTools, and memoryTools parameters. The requireApproval flag now correctly propagates through all tool conversion paths, ensuring tools requiring approval will properly request user approval before execution. (#​10562)

  • Fix Azure Foundry rate limit handling for -1 values (#​10409)

  • Fix model headers not being passed through gateway system

    Previously, custom headers specified in MastraModelConfig were not being passed through the gateway system to model providers. This affected:

  • OpenRouter (preventing activity tracking with HTTP-Referer and X-Title)

  • Custom providers using custom URLs (headers not passed to createOpenAICompatible)

  • Custom gateway implementations (headers not available in resolveLanguageModel)

    Now headers are correctly passed through the entire gateway system:

  • Base MastraModelGateway interface updated to accept headers

  • ModelRouterLanguageModel passes headers from config to all gateways

  • OpenRouter receives headers for activity tracking

  • Custom URL providers receive headers via createOpenAICompatible

  • Custom gateways can access headers in their resolveLanguageModel implementation

    Example usage:

// Works with OpenRouter
const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'openrouter/anthropic/claude-3-5-sonnet',
    headers: {
      'HTTP-Referer': 'https://myapp.com',
      'X-Title': 'My Application',
    },
  },
});

// Also works with custom providers
const customAgent = new Agent({
  name: 'custom-agent',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'custom-provider/model',
    url: 'https://api.custom.com/v1',
    apiKey: 'key',
    headers: {
      'X-Custom-Header': 'custom-value',
    },
  },
});

Fixes #​9760
(#​10564)

  • fix(agent): persist messages before tool suspension

    Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.

    Backend changes (@​mastra/core):

  • Add assistant messages to messageList immediately after LLM execution

  • Flush messages synchronously before suspension to persist state

  • Create thread if it doesn't exist before flushing

  • Add metadata helpers to persist and remove tool approval state

  • Pass saveQueueManager and memory context through workflow for immediate persistence

    Frontend changes (@​mastra/react):

  • Extract runId from pending approvals to enable resumption after refresh

  • Convert pendingToolApprovals (DB format) to requireApprovalMetadata (runtime format)

  • Handle both dynamic-tool and tool-{NAME} part types for approval state

  • Change runId from hardcoded agentId to unique uuid()

    UI changes (@​mastra/playground-ui):

  • Handle tool calls awaiting approval in message initialization

  • Convert approval metadata format when loading initial messages

    Fixes #​9745, #​9906 (#​10369)

  • Fix race condition in parallel tool stream writes

    Introduces a write queue to ToolStream to serialize access to the underlying stream, preventing writer locked errors (#​10463)
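The serialization idea can be sketched with a small promise-chained queue. This is an illustration of the technique, not the actual ToolStream implementation.

```typescript
// Each task is chained onto the previous one, so two concurrent callers
// never hold the underlying stream writer at the same time.
class WriteQueue {
  private tail: Promise<unknown> = Promise.resolve();

  enqueue<T>(task: () => Promise<T> | T): Promise<T> {
    const result = this.tail.then(task);
    // Keep the chain alive even if a task rejects.
    this.tail = result.catch(() => undefined);
    return result;
  }
}
```

A tool stream would then route every write through something like `queue.enqueue(() => writer.write(chunk))` instead of writing directly.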

  • Remove unneeded console warning when flushing messages and no threadId or saveQueueManager is found. (#​10369)

  • Fixes GPT-5 reasoning which was failing on subsequent tool calls with the error:

Item 'fc_xxx' of type 'function_call' was provided without its required 'reasoning' item: 'rs_xxx'

(#​10489)

  • Add optional includeRawChunks parameter to agent execution options,
    allowing users to include raw chunks in stream output where supported
    by the model provider. (#​10456)

  • When mastra dev runs, multiple processes can write to provider-registry.json concurrently (auto-refresh, syncGateways, syncGlobalCacheToLocal). This causes file corruption where the end of the JSON appears twice, making it unparseable.

    The fix uses atomic writes via the write-to-temp-then-rename pattern. Instead of:

fs.writeFileSync(filePath, content, 'utf-8');

    we now do:

const tempPath = `${filePath}.${process.pid}.${Date.now()}.${randomSuffix}.tmp`;
fs.writeFileSync(tempPath, content, 'utf-8');
fs.renameSync(tempPath, filePath); // atomic on POSIX

    fs.rename() is atomic on POSIX systems when both paths are on the same filesystem, so concurrent writes each complete fully rather than interleaving. (#​10529)
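The pattern generalizes into a small helper. This is a sketch of the approach described above, not the code shipped in the fix.

```typescript
import { randomBytes } from 'node:crypto';
import * as fs from 'node:fs';

// Write to a uniquely named temp file, then rename over the target.
// Concurrent writers each get their own temp file, and rename() replaces
// the target atomically on POSIX filesystems.
function atomicWriteFileSync(filePath: string, content: string): void {
  const suffix = randomBytes(4).toString('hex');
  const tempPath = `${filePath}.${process.pid}.${Date.now()}.${suffix}.tmp`;
  fs.writeFileSync(tempPath, content, 'utf-8');
  fs.renameSync(tempPath, filePath); // atomic on POSIX
}
```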

  • Ensures that data chunks written via writer.custom() always bubble up directly to the top-level stream, even when nested in sub-agents. This allows tools to emit custom progress updates, metrics, and other data that can be consumed at any level of the agent hierarchy.

    • Added bubbling logic in sub-agent execution: When sub-agents execute, data chunks (chunks with type starting with data-) are detected and written via writer.custom() instead of writer.write(), ensuring they bubble up directly without being wrapped in tool-output chunks.

    • Added comprehensive tests:
      • Test for writer.custom() with direct tool execution
      • Test for writer.custom() with sub-agent tools (nested execution)
      • Test for mixed usage of writer.write() and writer.custom() in the same tool

    When a sub-agent's tool uses writer.custom() to write data chunks, those chunks appear in the sub-agent's stream. The parent agent's execution logic now detects these chunks and uses writer.custom() to bubble them up directly, preserving their structure and making them accessible at the top level.

    This ensures that:

    • Data chunks from tools always appear directly in the stream (not wrapped)
    • Data chunks bubble up correctly through nested agent hierarchies
    • Regular chunks continue to be wrapped in tool-output as expected (#​10309)

  • Adds the ability to create custom MastraModelGateways that can be added to the Mastra class instance under the gateways property, giving you TypeScript autocompletion in any model picker string.

import { MastraModelGateway, type ProviderConfig } from '@mastra/core/llm';
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import type { LanguageModelV2 } from '@ai-sdk/provider';

class MyCustomGateway extends MastraModelGateway {
  readonly id = 'custom';
  readonly name = 'My Custom Gateway';

  async fetchProviders(): Promise<Record<string, ProviderConfig>> {
    return {
      'my-provider': {
        name: 'My Provider',
        models: ['model-1', 'model-2'],
        apiKeyEnvVar: 'MY_API_KEY',
        gateway: this.id,
      },
    };
  }

  buildUrl(modelId: string, envVars?: Record<string, string>): string {
    return 'https://api.my-provider.com/v1';
  }

  async getApiKey(modelId: string): Promise<string> {
    const apiKey = process.env.MY_API_KEY;
    if (!apiKey) throw new Error('MY_API_KEY not set');
    return apiKey;
  }

  async resolveLanguageModel({
    modelId,
    providerId,
    apiKey,
  }: {
    modelId: string;
    providerId: string;
    apiKey: string;
  }): Promise<LanguageModelV2> {
    const baseURL = this.buildUrl(`${providerId}/${modelId}`);
    return createOpenAICompatible({
      name: providerId,
      apiKey,
      baseURL,
    }).chatModel(modelId);
  }
}

new Mastra({
  gateways: {
    myGateway: new MyCustomGateway(),
  },
});

(#​10535)

  • Support AI SDK voice models

    Mastra now supports AI SDK's transcription and speech models directly in CompositeVoice, enabling seamless integration with a wide range of voice providers through the AI SDK ecosystem. This allows you to use models from OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and many more for both speech-to-text (transcription) and text-to-speech capabilities.

    AI SDK models are automatically wrapped when passed to CompositeVoice, so you can mix and match AI SDK models with existing Mastra voice providers for maximum flexibility.

Usage Example

import { CompositeVoice } from "@mastra/core/voice";
import { openai } from "@ai-sdk/openai";
import { elevenlabs } from "@ai-sdk/elevenlabs";

// Use AI SDK models directly with CompositeVoice
const voice = new CompositeVoice({
  input: openai.transcription('whisper-1'),      // AI SDK transcription model
  output: elevenlabs.speech('eleven_turbo_v2'),  // AI SDK speech model
});

// Convert text to speech
const audioStream = await voice.speak("Hello from AI SDK!");

// Convert speech to text
const transcript = await voice.listen(audioStream);
console.log(transcript);

Fixes #​9947 (#​10558)

  • Fix network data step formatting in AI SDK stream transformation

    Previously, network execution steps were not being tracked correctly in the AI SDK stream transformation. Steps were being duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.

    Changes:

    • Enhanced step tracking in AgentNetworkToAISDKTransformer to properly maintain step state throughout the execution lifecycle
    • Steps are now identified by unique IDs and updated in place rather than creating duplicates
    • Added proper iteration and task metadata to each step in the network execution flow
    • Fixed agent, workflow, and tool execution events to correctly populate step data
    • Updated network stream event types to include networkId, workflowId, and consistent runId tracking
    • Added test coverage for network custom data chunks with comprehensive validation

    This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#​10432)

  • Fix generating provider-registry.json (#​10535)

  • Fix message-list conversion issues when persisting messages before tool suspension: filter internal metadata fields (__originalContent) from UI messages, keep reasoning field empty for consistent cache keys during message deduplication, and only include providerMetadata on parts when defined. (#​10552)

  • Fix agent.generate() to use model's doGenerate method instead of doStream

    When calling agent.generate(), the model's doGenerate method is now correctly invoked instead of always using doStream. This aligns the non-streaming generation path with the intended behavior where providers can implement optimized non-streaming responses. (#​10572)

@​mastra/couchbase

@​mastra/deployer

  • Rename "Playground" to "Studio" (#​10443)
  • Fixed a bug where imports that were not used in the main entry point were tree-shaken during analysis, causing bundling errors. Tree-shaking now only runs during the bundling step. (#​10470)

@​mastra/deployer-cloud

  • SimpleAuth and improved CloudAuth (#​10569)
  • Do not initialize local storage when using mastra cloud storage instead (#​10495)

@​mastra/dynamodb

  • fix: ensure score responses match saved payloads for Mastra Stores. (#​10570)
  • Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#​10573)

@​mastra/inngest

  • Emit workflow-step-result and workflow-step-finish when step fails in inngest workflow (#​10555)

@​mastra/lance

  • deleteVectors, deleteFilter when upserting, updateVector filter (#​10244)
  • fix: ensure score responses match saved payloads for Mastra Stores. (#​10570)

@​mastra/libsql

  • deleteVectors, deleteFilter when upserting, updateVector filter (#​10244)
  • Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#​10573)

@​mastra/mcp

  • Fix MCP client to return structuredContent directly when tools define an outputSchema, ensuring output validation works correctly instead of failing with "expected X, received undefined" errors. (#​10442)

@​mastra/mcp-docs-server

  • Ensure changelog truncation includes at least 2 versions before cutting off (#​10496)
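A guarantee like this can be sketched as follows; the function name and size heuristic are hypothetical, illustrating only the "always keep at least two versions" rule from this entry.

```typescript
// Keep appending version sections until the character budget is exceeded,
// but never cut below two versions even if they blow the budget.
function truncateChangelog(versions: string[], maxChars: number): string[] {
  const kept: string[] = [];
  let total = 0;
  for (const section of versions) {
    total += section.length;
    if (kept.length >= 2 && total > maxChars) break;
    kept.push(section);
  }
  return kept;
}
```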

@​mastra/mongodb

  • deleteVectors, deleteFilter when upserting, updateVector filter (#​10244)
  • fix: ensure score responses match saved payloads for Mastra Stores. (#​10570)
  • Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#​10573)

@​mastra/mssql

  • fix: ensure score responses match saved payloads for Mastra Stores. (#​10570)
  • Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#​10573)

@​mastra/opensearch

@​mastra/pg

  • deleteVectors, deleteFilter when upserting, updateVector filter (#​10244)
  • fix: ensure score responses match saved payloads for Mastra Stores. (#​10570)
  • Fix message sorting in getMessagesPaginated when using semantic recall (include parameter). Messages are now always sorted by createdAt after combining paginated and included messages, ensuring correct chronological ordering of conversation history. All stores now consistently use MessageList for deduplication followed by explicit sorting. (#​10573)

@​mastra/playground-ui

  • fix(agent): persist messages before tool suspension

    Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.

    Backend changes (@​mastra/core):

  • Add assistant messages to messageList immediately after LLM execution

  • Flush messages synchronously before suspension to persist state

  • Create thread if it doesn't exist before flushing

  • Add metadata helpers to persist and remove tool approval state

  • Pass saveQueueManager and memory context through workflow for immediate persistence

    Frontend changes (@​mastra/react):

  • Extract runId from pending approvals to enable resumption after refresh

  • Convert pendingToolApprovals (DB format) to requireApprovalMetadata (runtime format)

  • Handle both dynamic-tool and tool-{NAME} part types for approval state

  • Change runId from hardcoded agentId to unique uuid()

    UI changes (@​mastra/playground-ui):

  • Handle tool calls awaiting approval in message initialization

  • Convert approval metadata format when loading initial messages

    Fixes #​9745, #​9906 (#​10369)

@​mastra/rag

@​mastra/react

  • Configurable resourceId in react useChat (#​10561)

  • fix(agent): persist messages before tool suspension

    Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.

    Backend changes (@​mastra/core):

  • Add assistant messages to messageList immediately after LLM execution

  • Flush messages synchronously before suspension to persist state

  • Create thread if it doesn't exist before flushing

  • Add metadata helpers to persist and remove tool approval state

  • Pass saveQueueManager and memory context through workflow for immediate persistence

    Frontend changes (@​mastra/react):

  • Extract runId from pending approvals to enable resumption after refresh

  • Convert pendingToolApprovals (DB format) to requireApprovalMetadata (runtime format)

  • Handle both dynamic-tool and tool-{NAME} part types for approval state

  • Change runId from hardcoded agentId to unique uuid()

    UI changes (@​mastra/playground-ui):

  • Handle tool calls awaiting approval in message initialization

  • Convert approval metadata format when loading initial messages

    Fixes #​9745, #​9906 (#​10369)

@​mastra/schema-compat

  • Fixed OpenAI schema compatibility when using agent.generate() or agent.stream() with structuredOutput.

Changes

  • Automatic transformation: Zod schemas are now automatically transformed for OpenAI strict mode compatibility when using OpenAI models (including reasoning models like o1, o3, o4)
  • Optional field handling: .optional() fields are converted to .nullable() with a transform that converts null → undefined, preserving optional semantics while satisfying OpenAI's strict mode requirements
  • Preserves nullable fields: Intentionally .nullable() fields remain unchanged
  • Deep transformation: Handles .optional() fields at any nesting level (objects, arrays, unions, etc.)
  • JSON Schema objects: Not transformed, only Zod schemas

Example

const agent = new Agent({
  name: 'data-extractor',
  model: { provider: 'openai', modelId: 'gpt-4o' },
  instructions: 'Extract user information',
});

const schema = z.object({
  name: z.string(),
  age: z.number().optional(),
  deletedAt: z.date().nullable(),
});

// Schema is automatically transformed for OpenAI compatibility
const result = await agent.generate('Extract: John, deleted yesterday', {
  structuredOutput: { schema },
});

// Result: { name: 'John', age: undefined, deletedAt: null }

(#​10454)

@​mastra/upstash

  • Fix message sorting in listMessages when using semantic recall (include parameter). Messages are now always sorted by createdAt instead of storage order, ensuring correct chronological ordering of conversation history. (#​10545)

@​mastra/voice-deepgram

  • feat(voice-deepgram): add speaker diarization support for STT (#​10536)

mastra

  • Rename "Playground" to "Studio" (#​10443)

v0.24.5

Compare Source

v0.24.4

Compare Source

v0.24.3: 2025-11-19

Compare Source

Highlights

Generate Endpoint Fix for OpenAI Streaming

We've switched to using proper generate endpoints for model calls, fixing a critical permission issue with OpenAI streaming. No more 403 errors when your users don't have full model permissions - the generate endpoint respects granular API key scopes properly.

AI SDK v5: Fine-Grained Stream Control

Building custom UIs? You now have complete control over what gets sent in your AI SDK streams. Configure exactly which message chunks your frontend receives with the new sendStart, sendFinish, sendReasoning, and sendSources options.

Changelog

@​mastra/ai-sdk

  • Add sendStart, sendFinish, sendReasoning, and sendSources options to toAISdkV5Stream function, allowing fine-grained control over which message chunks are included in the converted stream. Previously, these values were hardcoded in the transformer.

    BREAKING CHANGE: AgentStreamToAISDKTransformer now accepts an options object instead of a single lastMessageId parameter

    Also, add sendStart, sendFinish, sendReasoning, and sendSources parameters to
    chatRoute function, enabling fine-grained control over which chunks are
    included in the AI SDK stream output. (#​10127)

  • Added support for tripwire data chunks in streaming responses.

    Tripwire chunks allow the AI SDK to emit special data events when certain conditions are triggered during stream processing. These chunks include a tripwireReason field explaining why the tripwire was activated.

Usage
When converting Mastra chunks to AI SDK v5 format, tripwire chunks are now automatically handled:
// Tripwire chunks are converted to data-tripwire format
const chunk = {
  type: 'tripwire',
  payload: { tripwireReason: 'Rate limit approaching' }
};

// Converts to:
{
  type: 'data-tripwire',
  data: { tripwireReason: 'Rate limit approaching' }
}

(#​10269)
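The conversion above can be expressed as a small mapping function. This is an illustrative sketch of the shape change, not the actual transformer shipped in @​mastra/ai-sdk:

```javascript
// Illustrative converter: Mastra tripwire chunk -> AI SDK v5 data chunk.
function toAISdkChunk(chunk) {
  if (chunk.type === 'tripwire') {
    return {
      type: 'data-tripwire',
      data: { tripwireReason: chunk.payload.tripwireReason },
    };
  }
  return chunk; // other chunk types pass through unchanged in this sketch
}

const converted = toAISdkChunk({
  type: 'tripwire',
  payload: { tripwireReason: 'Rate limit approaching' },
});
console.log(converted);
// { type: 'data-tripwire', data: { tripwireReason: 'Rate limit approaching' } }
```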

@​mastra/auth

  • Allow provider to pass through options to the auth config (#​10284)

@​mastra/auth-auth0

  • Allow provider to pass through options to the auth config (#​10284)

@​mastra/auth-clerk

  • Allow provider to pass through options to the auth config (#​10284)

@​mastra/auth-firebase

  • Allow provider to pass through options to the auth config (#​10284)

@​mastra/auth-supabase

  • Allow provider to pass through options to the auth config (#​10284)

@​mastra/auth-workos

  • Allow provider to pass through options to the auth config (#​10284)

@​mastra/client-js

  • Added optional description field to GetAgentResponse to support richer agent metadata (#​10305)

@​mastra/core

  • Only handle download image asset transformation if needed (#​10122)

  • Fix tool outputSchema validation to allow unsupported Zod types like ZodTuple. The outputSchema is only used for internal validation and never sent to the LLM, so model compatibility checks are not needed. (#​9409)

  • Fix vector definition to fix pinecone (#​10150)

  • Add type bailed to workflowRunStatus (#​10091)

  • Allow provider to pass through options to the auth config (#​10284)

  • Fix deprecation warning when agent network executes workflows by using .fullStream instead of iterating WorkflowRunOutput directly (#​10306)

  • Add support for doGenerate in LanguageModelV2. This change fixes issues with OpenAI stream permissions.

    • Added new abstraction over LanguageModelV2 (#​10239)

@​mastra/mcp

  • Add timeout configuration to mcp server config (#​9891)

@​mastra/mcp-docs-server

  • Add a migration tool to the mcp docs server on the stable branch that tells users to upgrade the mcp docs server from @​latest to @​beta to get the proper migration tool. (#​10200)

@​mastra/server

  • Network handler now accesses thread and resource parameters from the nested memory object instead of directly from request body. (#​10294)

@​mastra/observability

  • Updates console warning when cloud access token env is not set. (#​9149)

@​mastra/pinecone

@​mastra/playground-ui

  • Fix scorer filtering for SpanScoring, add error and info message for user (#​10160)

@​mastra/voice-google-gemini-live

  • gemini live fix (#​10234)
  • fix(voice): Fix Vertex AI WebSocket connection failures in GeminiLiveVoice (#​10243)

create-mastra

  • fix: detect bun runtime and cleanup on failure (#​10307)

mastra

  • Add support to skip dotenv/env file loading by adding MASTRA_SKIP_DOTENV (#​9455)
  • fix: detect bun runtime and cleanup on failure (#​10307)

v0.24.2

Compare Source

v0.24.1: 2025-11-14

Compare Source

Highlights

1.0 Beta is ready!

We've worked hard on a 1.0 beta version to signal that Mastra is ready for prime time and there will not be any breaking changes in the near future. Please visit the migration guide to get started.

Improved support for files in models

We added the ability to skip downloading images and other files that the model supports natively, sending the raw URL instead so the model can handle it on its own. This speeds up the LLM call.

Mistral

Improved support for Mistral by using the native ai-sdk provider under the hood instead of the OpenAI-compatibility provider.

Changelog

@​mastra/ai-sdk

  • Fix bad dane change in 0.x workflowRoute (#​10090)

  • Improve ai-sdk transformers, handle custom data from agent sub-workflows and sub-agent tools (#​10026)

  • Extend the workflow route to accept optional runId and resourceId parameters, allowing clients to specify custom identifiers when creating workflow runs. These parameters are now properly validated in the OpenAPI schema and passed through to the createRun method.

    Also updates the OpenAPI schema to include previously undocumented
    resumeData and step fields. (#​10034)

@​mastra/client-js

  • Fix clientTools execution in client js (#​9880)

@​mastra/core

  • Integrates the native Mistral AI SDK provider (@ai-sdk/mistral) to replace the current OpenAI-compatible endpoint implementation for Mistral models. (#​9789)

  • Fix: Don't download unsupported media (#​9209)

  • Use a shared getAllToolPaths() method from the bundler to discover tool paths. (#​9204)

  • Add an additional check to determine whether the model natively supports specific file types. Only download the file if the model does not support it natively. (#​9790)
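The check described above amounts to: only fetch the asset when the model cannot consume the URL natively. A minimal sketch, where the capability table and helper name are hypothetical (real models expose supported types differently):

```javascript
// Hypothetical capability table keyed by model id.
const nativelySupported = { 'gpt-4o': ['image/png', 'image/jpeg'] };

function shouldDownload(modelId, mimeType) {
  const supported = nativelySupported[modelId] ?? [];
  // Download only if the model cannot take the raw URL for this type.
  return !supported.includes(mimeType);
}

console.log(shouldDownload('gpt-4o', 'image/png')); // false: pass the URL through
console.log(shouldDownload('gpt-4o', 'audio/wav')); // true: fetch and inline it
```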

  • Fix agent network iteration counter bug causing infinite loops

    The iteration counter in agent networks was stuck at 0 due to a faulty ternary operator that treated 0 as falsy. This prevented maxSteps from working correctly, causing infinite loops when the routing agent kept selecting primitives instead of returning "none".

    Changes:

  • Fixed iteration counter logic in loop/network/index.ts from (inputData.iteration ? inputData.iteration : -1) + 1 to (inputData.iteration ?? -1) + 1

  • Changed initial iteration value from 0 to -1 so first iteration correctly starts at 0

  • Added checkIterations() helper to validate iteration counting in all network tests

    Fixes #​9314 (#​9762)
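The off-by-one is easy to reproduce in isolation: with a ternary, an iteration value of 0 is falsy and resets the counter, while `??` only falls back on null or undefined:

```javascript
// Buggy: treats iteration 0 as "unset" because 0 is falsy.
const buggyNext = (iteration) => (iteration ? iteration : -1) + 1;
// Fixed: only null/undefined fall back to -1.
const fixedNext = (iteration) => (iteration ?? -1) + 1;

console.log(buggyNext(0)); // 0 -> the counter never advances past the first step
console.log(fixedNext(0)); // 1 -> counting proceeds normally
console.log(fixedNext(undefined)); // 0 -> the first iteration correctly starts at 0
```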

  • Exposes requiresAuth to custom api routes (#​9952)

  • Fix agent network working memory tool routing. Memory tools are now included in routing agent instructions but excluded from its direct tool calls, allowing the routing agent to properly route to tool execution steps for memory updates. (#​9428)

  • Fixes assets not being downloaded when available (#​10079)

@​mastra/deployer

  • Added /health endpoint for service monitoring (#​9142)
  • Use a shared getAllToolPaths() method from the bundler to discover tool paths. (#​9204)

@​mastra/deployer-cloud

  • Use a shared getAllToolPaths() method from the bundler to discover tool paths. (#​9204)

@​mastra/evals

@​mastra/mssql

  • Prevents double stringification for MSSQL jsonb columns by reusing incoming strings that already contain valid JSON while still stringifying other inputs as needed. (#​9901)
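The guard described above amounts to: if the incoming value is already a string containing valid JSON, store it as-is; otherwise stringify it. A minimal sketch (the helper name is illustrative, not the package's actual API):

```javascript
// Reuse strings that already hold valid JSON; stringify everything else.
function toJsonbValue(value) {
  if (typeof value === 'string') {
    try {
      JSON.parse(value);
      return value; // already valid JSON text -- avoid double stringification
    } catch {
      // fall through: a plain string still needs to be encoded as JSON
    }
  }
  return JSON.stringify(value);
}

console.log(toJsonbValue('{"a":1}')); // {"a":1}   (reused unchanged)
console.log(toJsonbValue({ a: 1 })); // {"a":1}   (stringified once)
console.log(toJsonbValue('hello')); // "hello"   (plain string, encoded)
```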

mastra

  • Add warning to mastra dev and mastra build about upcoming stable release of v1.0.0. (#​9524)
  • Use a shared getAllToolPaths() method from the bundler to discover tool paths. (#​9204)

v0.24.0: 2025-11-05

Compare Source

Highlights

This release focuses primarily on bug fixes and stability improvements.

AI-SDK

We've resolved several issues related to message deduplication and preserving lastMessageIds. More importantly, this release adds support for suspend/resume operations and custom data writes, with network data now properly surfacing as data-parts.

Bundling

We've fully resolved bundling issues with the reflect-metadata package by ensuring it's not removed during the bundling step. This means packages no longer need to be marked as externals to avoid runtime crashes in the Mastra server.

Changelog

@​mastra/agent-builder

@​mastra/ai-sdk

  • update peerdeps (5ca1cca)
  • Preserve lastMessageId in chatRoute (#​9556)
  • Handle custom data writes in agent network execution events in ai sdk transformers (#​9717)
  • Add support for suspend/resume in AI SDK workflowRoute (#​9392)

@​mastra/arize

@​mastra/astra



Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 4 times, most recently from 0e9382e to cb7795f Compare August 14, 2025 15:04
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 2 times, most recently from 00355c2 to c92d2a0 Compare August 19, 2025 21:37
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.13.0 fix(deps): update dependency @mastra/core to ^0.14.0 Aug 21, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 3 times, most recently from b2610d9 to f52481e Compare August 21, 2025 21:32
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.14.0 fix(deps): update dependency @mastra/core to ^0.15.0 Aug 27, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 3 times, most recently from 2de1fb1 to 51d8bd3 Compare September 1, 2025 21:05
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 2 times, most recently from b60deb7 to b42e2eb Compare September 5, 2025 03:36
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.15.0 fix(deps): update dependency @mastra/core to ^0.16.0 Sep 5, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 3 times, most recently from 74d4e2c to c126fca Compare September 13, 2025 00:06
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from c126fca to 664f586 Compare September 17, 2025 18:38
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.16.0 fix(deps): update dependency @mastra/core to ^0.17.0 Sep 17, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 2 times, most recently from 74a2989 to 0a76b10 Compare September 23, 2025 21:11
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.17.0 fix(deps): update dependency @mastra/core to ^0.18.0 Sep 23, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from 0a76b10 to e7528bb Compare September 30, 2025 21:57
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.18.0 fix(deps): update dependency @mastra/core to ^0.19.0 Sep 30, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 2 times, most recently from ec812e9 to 503d9cc Compare October 3, 2025 00:42
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.19.0 fix(deps): update dependency @mastra/core to ^0.20.0 Oct 3, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from 2dfbe8d to 5f62cdd Compare October 15, 2025 02:07
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.20.0 fix(deps): update dependency @mastra/core to ^0.21.0 Oct 15, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 3 times, most recently from 38d728a to b70ea26 Compare October 22, 2025 16:15
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.21.0 fix(deps): update dependency @mastra/core to ^0.22.0 Oct 22, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from b70ea26 to 98a8d0b Compare October 24, 2025 22:58
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.22.0 fix(deps): update dependency @mastra/core to ^0.23.0 Oct 24, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 2 times, most recently from 6c0465b to 3a1d696 Compare November 5, 2025 21:44
@renovate renovate bot changed the title fix(deps): update dependency @mastra/core to ^0.23.0 fix(deps): update dependency @mastra/core to ^0.24.0 Nov 5, 2025
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 5 times, most recently from 761645b to b55d48c Compare November 21, 2025 04:31
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 2 times, most recently from 22abf8f to 282bca0 Compare December 3, 2025 17:38
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 2 times, most recently from 093fda8 to cb4edeb Compare December 11, 2025 03:48
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from cb4edeb to 723f5a0 Compare December 19, 2025 01:44
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from 723f5a0 to 45c5f78 Compare December 31, 2025 16:55
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch 3 times, most recently from 9c7aadd to 1e1e5fe Compare January 22, 2026 07:19
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from 1e1e5fe to 8c61ad1 Compare February 2, 2026 18:36
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from 8c61ad1 to fee382e Compare February 12, 2026 13:29
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from fee382e to 8a9ad19 Compare March 2, 2026 19:36
@renovate renovate bot force-pushed the renovate/mastra-core-0.x branch from 8a9ad19 to b77d2c5 Compare March 5, 2026 14:37
