fix(deps): update dependency @mastra/core to ^0.24.0 #12
Open
renovate[bot] wants to merge 1 commit into main from
This PR contains the following updates:
@mastra/core: `^0.12.1` → `^0.24.0`

Release Notes
mastra-ai/mastra (@mastra/core)
v0.24.9 (Compare Source)
v0.24.8 (Compare Source)
v0.24.7 (Compare Source)
v0.24.6: 2025-11-27 (Compare Source)
Highlights
Stream nested execution context from Workflows and Networks to your UI
Agent responses now stream live through workflows and networks, with complete execution metadata flowing to your UI.
In workflows, pipe agent streams directly through steps:
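The piping pattern above can be sketched in plain JavaScript. The step `writer` here is a stand-in for Mastra's step writer, not the real API; the forwarding loop itself is standard web-streams code (Node 18+):

```javascript
// Sketch: forward an agent's streamed chunks through a workflow step's writer.
// `writer` is any object with an async write(chunk) method - a stand-in for
// the Mastra step writer described above, not the actual API.
async function pipeStreamToWriter(readable, writer) {
  const reader = readable.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    await writer.write(value); // text deltas, tool calls, etc. pass through as-is
  }
}
```

Each chunk is forwarded unchanged, which is what lets the UI render text deltas and tool calls as they arrive.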
In networks, each step now tracks properly—unique IDs, iteration counts, task info, and agent handoffs all flow through with correct sequencing. No more duplicated steps or missing metadata.
Both surface text chunks, tool calls, and results as they happen, so users see progress in real time instead of waiting for the full response.
AI-SDK voice models are now supported
CompositeVoice now accepts AI SDK voice models directly: use OpenAI for transcription, ElevenLabs for speech, or any combination you want. Works with OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and more. AI SDK models are automatically wrapped, so you can swap providers without changing your code.
Changelog
@mastra/ai-sdk
Support streaming agent text chunks from workflow-step-output
Adds support for streaming text and tool call chunks from agents running inside workflows via the workflow-step-output event. When you pipe an agent's stream into a workflow step's writer, the text chunks, tool calls, and other streaming events are automatically included in the workflow stream and converted to UI messages.
Features:
- `includeTextStreamParts` option on `WorkflowStreamToAISDKTransformer` (defaults to `true`)
- `isMastraTextStreamChunk` type guard to identify Mastra chunks with text streaming data
- Forwards `text-start`, `text-delta`, `text-end`, `tool-call`, and `tool-result` chunks
- Tests in `transformers.test.ts` covering `workflowRoute()`
When served via `workflowRoute()`, the UI receives incremental text updates as the agent generates its response, providing a smooth streaming experience. (#10568)

Fix chat route to use agent ID instead of agent name for resolution. The `/chat/:agentId` endpoint now correctly resolves agents by their `id` property (e.g., `weather-agent`) instead of requiring the camelCase variable name (e.g., `weatherAgent`). This fixes issue #10469, where URLs like `/chat/weather-agent` would return 404 errors. (#10565)

Fixes propagation of custom data chunks from nested workflows in branches to the root stream when using `toAISdkV5Stream` with `{from: 'workflow'}`. Previously, when a nested workflow within a branch used `writer.custom()` to write `data-*` chunks, those chunks were wrapped in `workflow-step-output` events and not extracted, causing them to be dropped from the root stream.

Changes:
- Handle `workflow-step-output` chunks in `transformWorkflow()` to extract and propagate `data-*` chunks
- When a `workflow-step-output` chunk contains a `data-*` chunk in its `payload.output`, the transformer now extracts it and returns it directly to the root stream
- Added comprehensive test coverage for nested workflows with branches and custom data propagation

This ensures that custom data chunks written via `writer.custom()` in nested workflows (especially those within branches) are properly propagated to the root stream, allowing consumers to receive progress updates, metrics, and other custom data from nested workflow steps. (#10447)

Fix network data step formatting in AI SDK stream transformation
Previously, network execution steps were not being tracked correctly in the AI SDK stream transformation. Steps were being duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.
Changes:
- Updated `AgentNetworkToAISDKTransformer` to properly maintain step state throughout the execution lifecycle
- Steps are now identified by unique IDs and updated in place rather than creating duplicates
- Added proper iteration and task metadata to each step in the network execution flow
- Fixed agent, workflow, and tool execution events to correctly populate step data
- Updated network stream event types to include `networkId`, `workflowId`, and consistent `runId` tracking
- Added test coverage for network custom data chunks with comprehensive validation
This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#10432)
[0.x] Make workflowRoute includeTextStreamParts option default to false (#10574)
Add support for tool-call-approval and tool-call-suspended events in chatRoute (#10205)
Backports the `messageMetadata` and `onError` support from PR #10313 to the 0.x branch, adding these features to the `toAISdkFormat` function.

- Added `messageMetadata` parameter to `toAISdkFormat` options; metadata is attached to `start` and `finish` chunks when provided
- Added `onError` parameter to `toAISdkFormat` options, falling back to the `safeParseErrorObject` utility when not provided
- Added `safeParseErrorObject` utility function for error parsing
- Updated `AgentStreamToAISDKTransformer` to accept and use `messageMetadata` and `onError`
- Updated JSDoc documentation with parameter descriptions and usage examples
- Added comprehensive test suite for `messageMetadata` functionality (6 test cases)
- Fixed existing test file to use `toAISdkFormat` instead of the removed `toAISdkV5Stream`

New tests verify:
- `messageMetadata` is called with the correct part structure
- Metadata is included in start and finish chunks
- Proper handling when `messageMetadata` is not provided or returns null/undefined
- The function is called for each relevant part in the stream

Uses `UIMessageStreamOptions<UIMessage>['messageMetadata']` and `UIMessageStreamOptions<UIMessage>['onError']` types from AI SDK v5 for full type compatibility.

Backport of: #10313 (#10314)
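The `messageMetadata` behavior can be sketched as a pure function over a chunk list. The names below are modeled on the AI SDK v5 option; this is an illustration, not the actual `toAISdkFormat` implementation:

```javascript
// Sketch: a messageMetadata callback is invoked per part, and its return
// value is attached to `start` and `finish` chunks when it is not
// null/undefined. (Hypothetical helper, not the real @mastra/ai-sdk code.)
function attachMetadata(chunks, messageMetadata) {
  return chunks.map((chunk) => {
    if (chunk.type !== 'start' && chunk.type !== 'finish') return chunk;
    const metadata = messageMetadata?.({ part: chunk });
    return metadata == null ? chunk : { ...chunk, messageMetadata: metadata };
  });
}
```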
Fixed workflow routes to properly receive request context from middleware. This aligns the behavior of
`workflowRoute` with `chatRoute`, ensuring that context set in middleware is consistently forwarded to workflows. When both middleware and the request body provide a request context, the middleware value now takes precedence, and a warning is emitted to help identify potential conflicts.
See #10427 (#10427)
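The precedence rule can be illustrated with a small merge helper. This is a hypothetical stand-in for the route logic, not the actual code:

```javascript
// Illustration of the precedence rule described above: middleware context
// wins over the request body, and conflicting keys trigger a warning.
function mergeRequestContext(middlewareContext, bodyContext, warn = console.warn) {
  for (const key of Object.keys(bodyContext)) {
    if (key in middlewareContext && middlewareContext[key] !== bodyContext[key]) {
      warn(`Request context key "${key}" set by both middleware and body; using middleware value`);
    }
  }
  return { ...bodyContext, ...middlewareContext }; // middleware keys override body keys
}
```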
@mastra/astra
@mastra/auth-clerk
@mastra/chroma
@mastra/clickhouse
@mastra/cloudflare
@mastra/cloudflare-d1
@mastra/core
Fix base64 encoded images with threads - issue #10480
Fixed "Invalid URL" error when using base64 encoded images (without the `data:` prefix) in agent calls with threads and resources. Raw base64 strings are now automatically converted to proper data URIs before being processed.

Changes:
- Updated `attachments-to-parts.ts` to detect and convert raw base64 strings to data URIs
- Fixed `MessageList` image processing to handle raw base64 in two locations: `aiV4CoreMessageToV1PromptMessage` and `mastraDBMessageToAIV4UIMessage`
- Added comprehensive tests for base64 images, data URIs, and HTTP URLs with threads
Breaking Change: None - this is a bug fix that maintains backward compatibility while adding support for raw base64 strings. (#10483)
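The conversion described above can be sketched as follows. The helper name and the default MIME type are assumptions for illustration; the real logic lives in `attachments-to-parts.ts` and `MessageList`:

```javascript
// Sketch: raw base64 strings are promoted to data URIs, while existing
// data URIs and remote URLs pass through untouched. (Hypothetical helper.)
function toDataUri(image, mimeType = 'image/png') {
  if (image.startsWith('data:') || /^https?:\/\//.test(image)) {
    return image; // already a valid data URI or remote URL
  }
  return `data:${mimeType};base64,${image}`; // raw base64 → proper data URI
}
```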
SimpleAuth and improved CloudAuth (#10569)
Fixed OpenAI schema compatibility when using
`agent.generate()` or `agent.stream()` with `structuredOutput`.

Changes:
- `.optional()` fields are converted to `.nullable()` with a transform that converts `null` → `undefined`, preserving optional semantics while satisfying OpenAI's strict mode requirements
- `.nullable()` fields remain unchanged
- Applies to `.optional()` fields at any nesting level (objects, arrays, unions, etc.)
(#10454)
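The `.optional()` → `.nullable()` idea can be illustrated on plain JSON Schema. The real fix operates on Zod schemas inside @mastra/schema-compat; everything below is a simplified stand-in:

```javascript
// Sketch of the idea behind the fix: OpenAI strict mode requires every
// property to appear in `required`, so optional properties become nullable
// instead of being omitted, and `null` is mapped back to `undefined` later.
function makeStrictModeCompatible(schema) {
  const properties = {};
  for (const [key, prop] of Object.entries(schema.properties)) {
    const isRequired = (schema.required ?? []).includes(key);
    // Optional fields are widened to allow null rather than being omitted.
    properties[key] = isRequired ? prop : { ...prop, type: [prop.type, 'null'] };
  }
  return { ...schema, properties, required: Object.keys(schema.properties) };
}

// Applied after parsing: null comes back as undefined, preserving optionality.
const nullToUndefined = (value) => (value === null ? undefined : value);
```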
deleteVectors, deleteFilter when upserting, updateVector filter (#10244)
Fix generateTitle model type to accept AI SDK LanguageModelV2
Updated the `generateTitle.model` config option to accept `MastraModelConfig` instead of `MastraLanguageModel`. This allows users to pass raw AI SDK `LanguageModelV2` models (e.g., `anthropic.languageModel('claude-3-5-haiku-20241022')`) directly without type errors.

Previously, passing a standard `LanguageModelV2` would fail because `MastraLanguageModelV2` has different `doGenerate`/`doStream` return types. Now `MastraModelConfig` is used consistently across:
- `memory/types.ts` - `generateTitle.model` config
- `agent.ts` - `genTitle`, `generateTitleFromUserMessage`, `resolveTitleGenerationConfig`
- `agent-legacy.ts` - `AgentLegacyCapabilities` interface (#10567)

Fix message metadata not persisting when using simple message format. Previously, custom metadata passed in messages (e.g., `{role: 'user', content: 'text', metadata: {userId: '123'}}`) was not being saved to the database. This occurred because the CoreMessage conversion path didn't preserve metadata fields.

Now metadata is properly preserved for all message input formats:
- Simple CoreMessage format: `{role, content, metadata}`
- Full UIMessage format: `{role, content, parts, metadata}`
- AI SDK v5 ModelMessage format with metadata
Fixes #8556 (#10488)
feat: Composite auth implementation (#10359)
Fix requireApproval property being ignored for tools passed via toolsets, clientTools, and memoryTools parameters. The requireApproval flag now correctly propagates through all tool conversion paths, ensuring tools requiring approval will properly request user approval before execution. (#10562)
Fix Azure Foundry rate limit handling for -1 values (#10409)
Fix model headers not being passed through gateway system
Previously, custom headers specified in `MastraModelConfig` were not being passed through the gateway system to model providers. This affected:
- OpenRouter (preventing activity tracking with `HTTP-Referer` and `X-Title`)
- Custom providers using custom URLs (headers not passed to `createOpenAICompatible`)
- Custom gateway implementations (headers not available in `resolveLanguageModel`)

Now headers are correctly passed through the entire gateway system:
- Base `MastraModelGateway` interface updated to accept headers
- `ModelRouterLanguageModel` passes headers from config to all gateways
- OpenRouter receives headers for activity tracking
- Custom URL providers receive headers via `createOpenAICompatible`
- Custom gateways can access headers in their `resolveLanguageModel` implementation
Fixes #9760
(#10564)
fix(agent): persist messages before tool suspension
Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.
Backend changes (@mastra/core):
- Add assistant messages to messageList immediately after LLM execution
- Flush messages synchronously before suspension to persist state
- Create thread if it doesn't exist before flushing
- Add metadata helpers to persist and remove tool approval state
- Pass saveQueueManager and memory context through workflow for immediate persistence

Frontend changes (@mastra/react):
- Extract runId from pending approvals to enable resumption after refresh
- Convert `pendingToolApprovals` (DB format) to `requireApprovalMetadata` (runtime format)
- Handle both `dynamic-tool` and `tool-{NAME}` part types for approval state
- Change runId from hardcoded `agentId` to unique `uuid()`

UI changes (@mastra/playground-ui):
- Handle tool calls awaiting approval in message initialization
- Convert approval metadata format when loading initial messages
Fixes #9745, #9906 (#10369)
Fix race condition in parallel tool stream writes
Introduces a write queue to ToolStream to serialize access to the underlying stream, preventing writer locked errors (#10463)
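The serialization idea can be sketched as a minimal promise-chain queue. This is an illustration of the technique, not ToolStream's actual code:

```javascript
// Sketch: chain every write onto the previous one so parallel tool calls
// never hold the underlying writer at the same time.
class SerializedWriter {
  constructor(writer) {
    this.writer = writer;
    this.tail = Promise.resolve(); // the most recently queued write
  }
  write(chunk) {
    // Each write waits for the previous one, serializing stream access.
    this.tail = this.tail.then(() => this.writer.write(chunk));
    return this.tail;
  }
}
```

Because each call awaits the previous one's completion, two tools writing in parallel cannot trigger a "writer is locked" error on the shared stream.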
Remove unneeded console warning when flushing messages and no threadId or saveQueueManager is found. (#10369)
Fixes GPT-5 reasoning, which was failing on subsequent tool calls with an error. (#10489)
Add optional includeRawChunks parameter to agent execution options,
allowing users to include raw chunks in stream output where supported
by the model provider. (#10456)
When `mastra dev` runs, multiple processes can write to `provider-registry.json` concurrently (auto-refresh, syncGateways, syncGlobalCacheToLocal). This causes file corruption where the end of the JSON appears twice, making it unparseable. The fix uses atomic writes via the write-to-temp-then-rename pattern. `fs.rename()` is atomic on POSIX systems when both paths are on the same filesystem, so concurrent writes will each complete fully rather than interleaving. (#10529)

Ensures that data chunks written via `writer.custom()` always bubble up directly to the top-level stream, even when nested in sub-agents. This allows tools to emit custom progress updates, metrics, and other data that can be consumed at any level of the agent hierarchy.

Added bubbling logic in sub-agent execution: when sub-agents execute, data chunks (chunks with a type starting with `data-`) are detected and written via `writer.custom()` instead of `writer.write()`, ensuring they bubble up directly without being wrapped in `tool-output` chunks.

Added comprehensive tests:
- Test for `writer.custom()` with direct tool execution
- Test for `writer.custom()` with sub-agent tools (nested execution)
- Test for mixed usage of `writer.write()` and `writer.custom()` in the same tool

When a sub-agent's tool uses `writer.custom()` to write data chunks, those chunks appear in the sub-agent's stream. The parent agent's execution logic now detects these chunks and uses `writer.custom()` to bubble them up directly, preserving their structure and making them accessible at the top level.

This ensures that:
- Data chunks bubble up correctly through nested agent hierarchies
- Regular chunks continue to be wrapped in `tool-output` as expected (#10309)

Adds the ability to create custom `MastraModelGateway`s that can be added to the `Mastra` class instance under the `gateways` property, giving you TypeScript autocompletion in any model picker string. (#10535)
Support AI SDK voice models
Mastra now supports AI SDK's transcription and speech models directly in
`CompositeVoice`, enabling seamless integration with a wide range of voice providers through the AI SDK ecosystem. This allows you to use models from OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and many more for both speech-to-text (transcription) and text-to-speech capabilities.

AI SDK models are automatically wrapped when passed to `CompositeVoice`, so you can mix and match AI SDK models with existing Mastra voice providers for maximum flexibility.
Fixes #9947 (#10558)
Fix network data step formatting in AI SDK stream transformation
Previously, network execution steps were not being tracked correctly in the AI SDK stream transformation. Steps were being duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.
Changes:
- Updated `AgentNetworkToAISDKTransformer` to properly maintain step state throughout the execution lifecycle
- Steps are now identified by unique IDs and updated in place rather than creating duplicates
- Added proper iteration and task metadata to each step in the network execution flow
- Fixed agent, workflow, and tool execution events to correctly populate step data
- Updated network stream event types to include `networkId`, `workflowId`, and consistent `runId` tracking
- Added test coverage for network custom data chunks with comprehensive validation
This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#10432)
Fix generating provider-registry.json (#10535)
Fix message-list conversion issues when persisting messages before tool suspension: filter internal metadata fields (`__originalContent`) from UI messages, keep the reasoning field empty for consistent cache keys during message deduplication, and only include providerMetadata on parts when defined. (#10552)

Fix agent.generate() to use the model's doGenerate method instead of doStream

When calling `agent.generate()`, the model's `doGenerate` method is now correctly invoked instead of always using `doStream`. This aligns the non-streaming generation path with the intended behavior, where providers can implement optimized non-streaming responses. (#10572)

@mastra/couchbase
@mastra/deployer
@mastra/deployer-cloud
@mastra/dynamodb
@mastra/inngest
@mastra/lance
@mastra/libsql
@mastra/mcp
@mastra/mcp-docs-server
@mastra/mongodb
@mastra/mssql
@mastra/opensearch
@mastra/pg
@mastra/playground-ui
fix(agent): persist messages before tool suspension
Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.
Backend changes (@mastra/core):
- Add assistant messages to messageList immediately after LLM execution
- Flush messages synchronously before suspension to persist state
- Create thread if it doesn't exist before flushing
- Add metadata helpers to persist and remove tool approval state
- Pass saveQueueManager and memory context through workflow for immediate persistence

Frontend changes (@mastra/react):
- Extract runId from pending approvals to enable resumption after refresh
- Convert `pendingToolApprovals` (DB format) to `requireApprovalMetadata` (runtime format)
- Handle both `dynamic-tool` and `tool-{NAME}` part types for approval state
- Change runId from hardcoded `agentId` to unique `uuid()`

UI changes (@mastra/playground-ui):
- Handle tool calls awaiting approval in message initialization
- Convert approval metadata format when loading initial messages
Fixes #9745, #9906 (#10369)
@mastra/rag
@mastra/react
Configurable resourceId in react useChat (#10561)
fix(agent): persist messages before tool suspension
Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.
Backend changes (@mastra/core):
- Add assistant messages to messageList immediately after LLM execution
- Flush messages synchronously before suspension to persist state
- Create thread if it doesn't exist before flushing
- Add metadata helpers to persist and remove tool approval state
- Pass saveQueueManager and memory context through workflow for immediate persistence

Frontend changes (@mastra/react):
- Extract runId from pending approvals to enable resumption after refresh
- Convert `pendingToolApprovals` (DB format) to `requireApprovalMetadata` (runtime format)
- Handle both `dynamic-tool` and `tool-{NAME}` part types for approval state
- Change runId from hardcoded `agentId` to unique `uuid()`

UI changes (@mastra/playground-ui):
- Handle tool calls awaiting approval in message initialization
- Convert approval metadata format when loading initial messages
Fixes #9745, #9906 (#10369)
@mastra/schema-compat
Fixed OpenAI schema compatibility when using `agent.generate()` or `agent.stream()` with `structuredOutput`.

Changes:
- `.optional()` fields are converted to `.nullable()` with a transform that converts `null` → `undefined`, preserving optional semantics while satisfying OpenAI's strict mode requirements
- `.nullable()` fields remain unchanged
- Applies to `.optional()` fields at any nesting level (objects, arrays, unions, etc.)
(#10454)
@mastra/upstash
@mastra/voice-deepgram
mastra
v0.24.5 (Compare Source)
v0.24.4 (Compare Source)
v0.24.3: 2025-11-19 (Compare Source)
Highlights
Generate Endpoint Fix for OpenAI Streaming
We've switched to using proper generate endpoints for model calls, fixing a critical permission issue with OpenAI streaming. No more 403 errors when your users don't have full model permissions - the generate endpoint respects granular API key scopes properly.
AI SDK v5: Fine-Grained Stream Control
Building custom UIs? You now have complete control over what gets sent in your AI SDK streams. Configure exactly which message chunks your frontend receives with the new
`sendStart`, `sendFinish`, `sendReasoning`, and `sendSources` options.

Changelog
@mastra/ai-sdk
Add sendStart, sendFinish, sendReasoning, and sendSources options to toAISdkV5Stream function, allowing fine-grained control over which message chunks are included in the converted stream. Previously, these values were hardcoded in the transformer.
BREAKING CHANGE: AgentStreamToAISDKTransformer now accepts an options object instead of a single lastMessageId parameter
Also, add sendStart, sendFinish, sendReasoning, and sendSources parameters to
chatRoute function, enabling fine-grained control over which chunks are
included in the AI SDK stream output. (#10127)
Added support for tripwire data chunks in streaming responses.
Tripwire chunks allow the AI SDK to emit special data events when certain conditions are triggered during stream processing. These chunks include a
`tripwireReason` field explaining why the tripwire was activated.
(#10269)
@mastra/auth
@mastra/auth-auth0
@mastra/auth-clerk
@mastra/auth-firebase
@mastra/auth-supabase
@mastra/auth-workos
@mastra/client-js
Added a `description` field to `GetAgentResponse` to support richer agent metadata (#10305)

@mastra/core
Only handle download image asset transformation if needed (#10122)
Fix tool outputSchema validation to allow unsupported Zod types like ZodTuple. The outputSchema is only used for internal validation and never sent to the LLM, so model compatibility checks are not needed. (#9409)
Fix vector definition to fix pinecone (#10150)
Add type bailed to workflowRunStatus (#10091)
Allow provider to pass through options to the auth config (#10284)
Fix deprecation warning when agent network executes workflows by using
`.fullStream` instead of iterating `WorkflowRunOutput` directly (#10306)

Add support for doGenerate in LanguageModelV2. This change fixes issues with OpenAI stream permissions.
@mastra/mcp
@mastra/mcp-docs-server
@mastra/server
@mastra/observability
@mastra/pinecone
@mastra/playground-ui
@mastra/voice-google-gemini-live
create-mastra
mastra
v0.24.2 (Compare Source)
v0.24.1: 2025-11-14 (Compare Source)
Highlights
1.0 Beta is ready!
We've worked hard on a 1.0 beta version to signal that Mastra is ready for prime time and there will not be any breaking changes in the near future. Please visit the migration guide to get started.
Improved support for files in models
We added the ability to skip downloading images and other files that the model supports natively, instead sending the raw URL so the model can handle it on its own. This improves the speed of the LLM call.
Mistral
Added improved support for Mistral by using the AI SDK provider under the hood instead of the OpenAI-compat provider.
Changelog
@mastra/ai-sdk
Fix bad dane change in 0.x workflowRoute (#10090)
Improve ai-sdk transformers: handle custom data from agent sub-workflows and sub-agent tools (#10026)
Extend the workflow route to accept optional runId and resourceId parameters, allowing clients to specify custom identifiers when creating workflow runs. These parameters are now properly validated in the OpenAPI schema and passed through to the createRun method.
Also updates the OpenAPI schema to include previously undocumented
resumeData and step fields. (#10034)
@mastra/client-js
@mastra/core
Integrates the native Mistral AI SDK provider (`@ai-sdk/mistral`) to replace the current OpenAI-compatible endpoint implementation for Mistral models. (#9789)

Fix: Don't download unsupported media (#9209)

Use a shared `getAllToolPaths()` method from the bundler to discover tool paths. (#9204)

Add an additional check to determine whether the model natively supports specific file types. Only download the file if the model does not support it natively. (#9790)
Fix agent network iteration counter bug causing infinite loops
The iteration counter in agent networks was stuck at 0 due to a faulty ternary operator that treated 0 as falsy. This prevented
`maxSteps` from working correctly, causing infinite loops when the routing agent kept selecting primitives instead of returning "none".

Changes:
- Fixed iteration counter logic in `loop/network/index.ts` from `(inputData.iteration ? inputData.iteration : -1) + 1` to `(inputData.iteration ?? -1) + 1`
- Changed initial iteration value from `0` to `-1` so the first iteration correctly starts at 0
- Added `checkIterations()` helper to validate iteration counting in all network tests

Fixes #9314 (#9762)
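The two counter expressions from that change can be compared directly; this standalone snippet only restates the expressions quoted above:

```javascript
// The bug: with a ternary, iteration 0 is falsy, so the counter resets to
// -1 + 1 = 0 and never advances past the first step.
const buggyNextIteration = (iteration) => (iteration ? iteration : -1) + 1;

// The fix: nullish coalescing only falls back on null/undefined, so an
// iteration value of 0 correctly advances to 1.
const fixedNextIteration = (iteration) => (iteration ?? -1) + 1;
```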
Exposes requiresAuth to custom api routes (#9952)
Fix agent network working memory tool routing. Memory tools are now included in routing agent instructions but excluded from its direct tool calls, allowing the routing agent to properly route to tool execution steps for memory updates. (#9428)
Fixes assets not being downloaded when available (#10079)
@mastra/deployer
Use a shared `getAllToolPaths()` method from the bundler to discover tool paths. (#9204)

@mastra/deployer-cloud
Use a shared `getAllToolPaths()` method from the bundler to discover tool paths. (#9204)

@mastra/evals
@mastra/mssql
mastra
Warn in `mastra dev` and `mastra build` about the upcoming stable release of v1.0.0. (#9524)

Use a shared `getAllToolPaths()` method from the bundler to discover tool paths. (#9204)

v0.24.0: 2025-11-05 (Compare Source)
Highlights
This release focuses primarily on bug fixes and stability improvements.
AI-SDK
We've resolved several issues related to message deduplication and preserving lastMessageIds. More importantly, this release adds support for suspend/resume operations and custom data writes, with network data now properly surfacing as data-parts.
Bundling
We've fully resolved bundling issues with the reflect-metadata package by ensuring it's not removed during the bundling step. This means packages no longer need to be marked as externals to avoid runtime crashes in the Mastra server.
Changelog
@mastra/agent-builder
@mastra/ai-sdk
@mastra/arize
@mastra/astra
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.