DurableAgent experimental_telemetry does not emit AI SDK spans for LLM or tool calls #1296

@craze3

Description

Summary

@workflow/ai DurableAgent accepts experimental_telemetry, but durable agent turns do not emit the expected AI SDK telemetry spans (ai.streamText, ai.streamText.doStream, ai.toolCall).

In the same app, plain AI SDK streamText() calls emit Langfuse/OpenTelemetry spans correctly; only the durable workflow path is missing them.

Expected behavior

When experimental_telemetry: { isEnabled: true, functionId, metadata } is passed to DurableAgent, the durable execution path should emit the same AI SDK telemetry shape developers get from streamText() / generateText(), including:

  • ai.streamText
  • ai.streamText.doStream
  • ai.toolCall

This should work with standard OpenTelemetry setups such as @vercel/otel + langfuse-vercel.
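For context, the AI SDK flattens this config into span attributes such as ai.telemetry.functionId and ai.telemetry.metadata.* (per the AI SDK telemetry docs). A minimal, self-contained sketch of that mapping — the helper itself is illustrative, not an API from any of these packages:

```typescript
// Shape of the experimental_telemetry setting, as documented for the AI SDK.
type TelemetrySettings = {
  isEnabled?: boolean;
  functionId?: string;
  metadata?: Record<string, string | number | boolean>;
};

// Illustrative helper: the span attributes the AI SDK derives from the
// telemetry setting (attribute names per the AI SDK telemetry docs).
function telemetryAttributes(
  t: TelemetrySettings,
): Record<string, string | number | boolean> {
  if (!t.isEnabled) return {};
  const attrs: Record<string, string | number | boolean> = {};
  if (t.functionId) attrs["ai.telemetry.functionId"] = t.functionId;
  for (const [key, value] of Object.entries(t.metadata ?? {})) {
    attrs[`ai.telemetry.metadata.${key}`] = value;
  }
  return attrs;
}

console.log(
  telemetryAttributes({
    isEnabled: true,
    functionId: "my-agent",
    metadata: { env: "prod" },
  }),
);
// → { "ai.telemetry.functionId": "my-agent", "ai.telemetry.metadata.env": "prod" }
```

None of these attributes show up on durable-turn spans today, because no spans are created at all.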

Actual behavior

DurableAgent stores and forwards telemetry config, but the internal execution path appears to bypass AI SDK telemetry wrappers:

  • DurableAgent stores experimental_telemetry
  • streamTextIterator forwards it into doStreamStep
  • doStreamStep then calls raw model.doStream(callOptions) without applying telemetry
  • durable tool execution also bypasses AI SDK ai.toolCall tracing helpers

Result:

  • plain streamText() spans show up
  • durable workflow turns do not create observations/spans in Langfuse

Reproduction

  1. Next.js app with instrumentation.ts using registerOTel({ traceExporter: new LangfuseExporter() })
  2. Verify a normal streamText() call emits spans in Langfuse
  3. Create a DurableAgent with experimental_telemetry: { isEnabled: true, functionId: 'my-agent' }
  4. Run agent.stream(...) inside a workflow with tool calls
  5. Observe that planner / plain AI SDK traces appear, but durable main-turn spans do not
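Condensed repro setup (the DurableAgent options mirror its exported surface; treat exact option names as approximate):

```typescript
// instrumentation.ts — standard Next.js OTel setup; this is sufficient
// for plain streamText() spans to reach Langfuse.
import { registerOTel } from "@vercel/otel";
import { LangfuseExporter } from "langfuse-vercel";

export function register() {
  registerOTel({ traceExporter: new LangfuseExporter() });
}

// --- elsewhere: this baseline call emits spans in Langfuse as expected ---
// streamText({
//   model,
//   prompt,
//   experimental_telemetry: { isEnabled: true, functionId: "baseline" },
// });

// --- but the equivalent DurableAgent turn emits nothing ---
// const agent = new DurableAgent({
//   model,
//   tools,
//   experimental_telemetry: { isEnabled: true, functionId: "my-agent" },
// });
// await agent.stream(...);
```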

Why this seems to be an implementation bug

In the published package:

  • durable-agent.js stores telemetry on the agent and forwards it during stream()
  • stream-text-iterator.js passes telemetry into doStreamStep
  • do-stream-step.js performs a raw model.doStream(...) and does not use the telemetry setting
  • executeTool() in durable-agent.js does not create AI SDK-style ai.toolCall spans

So telemetry is exposed in the API surface but not honored by the durable execution path.

What we need

Please make DurableAgent telemetry behavior match AI SDK core behavior.

Acceptable fixes would be:

  1. Route durable LLM execution through the same AI SDK telemetry wrappers used by streamText() / generateText(), or
  2. Add an internal telemetry implementation in @workflow/ai that preserves the same span names/structure/attributes, including ai.toolCall, or
  3. Expose an official helper from AI SDK / Workflow so DurableAgent can reuse the supported telemetry path instead of duplicating logic
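To make option 2 concrete, the durable tool path could wrap execution in a span shaped like the AI SDK's ai.toolCall. The Tracer interface below is a pared-down stand-in for @opentelemetry/api's startActiveSpan, and the attribute names follow the AI SDK telemetry docs — a sketch of the desired behavior, not the implementation:

```typescript
// Minimal stand-ins for the parts of @opentelemetry/api this sketch uses.
interface SpanLike {
  setAttributes(attrs: Record<string, string>): void;
  end(): void;
}
interface TracerLike {
  startActiveSpan<T>(name: string, fn: (span: SpanLike) => T): T;
}

// Hypothetical wrapper for durable tool execution: emits an "ai.toolCall"
// span carrying the attribute names the AI SDK uses for tool tracing.
async function tracedToolCall<T>(
  tracer: TracerLike,
  toolName: string,
  toolCallId: string,
  args: unknown,
  execute: () => Promise<T>,
): Promise<T> {
  return tracer.startActiveSpan("ai.toolCall", async (span) => {
    span.setAttributes({
      "ai.toolCall.name": toolName,
      "ai.toolCall.id": toolCallId,
      "ai.toolCall.args": JSON.stringify(args),
    });
    try {
      const result = await execute();
      span.setAttributes({ "ai.toolCall.result": JSON.stringify(result) });
      return result;
    } finally {
      span.end();
    }
  });
}
```

The same wrapping approach would apply to the LLM call in do-stream-step.js, with the outer ai.streamText and inner ai.streamText.doStream span names.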

Request for regression coverage

Please add a regression test that verifies DurableAgent emits telemetry spans when experimental_telemetry.isEnabled === true, including:

  • outer generation span
  • provider call span
  • tool call span
  • functionId / metadata propagation
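At its core such a test reduces to asserting that the expected span names were exported. A trivial, self-contained helper for that check (illustrative only — a real test would collect names from an in-memory OTel span exporter):

```typescript
// Given span names collected from a test exporter, return which of the
// expected AI SDK spans are missing after a DurableAgent turn.
const EXPECTED_SPANS = ["ai.streamText", "ai.streamText.doStream", "ai.toolCall"];

function missingSpans(recorded: string[]): string[] {
  return EXPECTED_SPANS.filter((name) => !recorded.includes(name));
}

// Today's observed behavior: only the non-durable spans show up.
console.log(missingSpans(["ai.streamText"]));
// → ["ai.streamText.doStream", "ai.toolCall"]
```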
