AI SDK

Deeper Telemetry

Add tool execution timing and total wall time with createEvlogIntegration. Compose with other middleware like supermemory or guardrails.

createAILogger covers tokens, model info, and streaming metrics. For deeper observability — per-tool execution timing, success/failure tracking, and total generation wall time — add createEvlogIntegration() on top. It implements the AI SDK's TelemetryIntegration interface and captures data middleware alone cannot see.

When passed an AILogger, the integration shares its accumulator. Both paths write to the same ai.* fields:

server/api/agent.post.ts
import { generateText } from 'ai'
import { createAILogger, createEvlogIntegration } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)

  const result = await generateText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    tools: { getWeather, searchDB },
    experimental_telemetry: {
      isEnabled: true,
      integrations: [createEvlogIntegration(ai)],
    },
  })

  return { text: result.text }
})

Your wide event now includes per-tool timing:

Wide Event
{
  "ai": {
    "calls": 2,
    "steps": 2,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "inputTokens": 3500,
    "outputTokens": 800,
    "totalTokens": 4300,
    "toolCalls": ["getWeather", "searchDB"],
    "tools": [
      { "name": "getWeather", "durationMs": 150, "success": true },
      { "name": "searchDB", "durationMs": 45, "success": true }
    ],
    "totalDurationMs": 2340,
    "msToFirstChunk": 180,
    "msToFinish": 2100,
    "tokensPerSecond": 380
  }
}

Standalone (without middleware)

If your model is already wrapped (e.g. by another middleware), pass the request logger directly:

server/api/chat.post.ts
import { generateText } from 'ai'
import { createEvlogIntegration } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const integration = createEvlogIntegration(log)

  const result = await generateText({
    model: somePreWrappedModel,
    experimental_telemetry: {
      isEnabled: true,
      integrations: [integration],
    },
  })

  return { text: result.text }
})

What the integration captures

| Data | Source | Description |
| --- | --- | --- |
| ai.tools[] | onToolCallFinish | Per-tool name, durationMs, success, and error (if failed) |
| ai.totalDurationMs | onStart / onFinish | Total wall time from generation start to completion |

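To make the hooks in the table concrete, here is a minimal sketch of an integration-shaped object using the hook names above (onStart, onToolCallFinish, onFinish). The hook argument shapes and the snapshot helper are assumptions for illustration, not evlog's or the AI SDK's actual types:

```typescript
// Sketch only: hook payload shapes are assumed, not taken from the AI SDK.
interface ToolRecord { name: string; durationMs: number; success: boolean; error?: string }

function createTimingIntegration() {
  const tools: ToolRecord[] = []
  let startedAt = 0
  let totalDurationMs = 0
  return {
    // Called once when generation starts: remember the wall-clock start.
    onStart() { startedAt = Date.now() },
    // Called after each tool call: record name, duration, and outcome.
    onToolCallFinish(info: { toolName: string; durationMs: number; error?: unknown }) {
      tools.push({
        name: info.toolName,
        durationMs: info.durationMs,
        success: info.error == null,
        ...(info.error != null ? { error: String(info.error) } : {}),
      })
    },
    // Called once at the end: total wall time from start to completion.
    onFinish() { totalDurationMs = Date.now() - startedAt },
    // Hypothetical accessor, standing in for the shared accumulator.
    snapshot: () => ({ tools, totalDurationMs }),
  }
}
```

This mirrors how the captured data ends up in ai.tools[] and ai.totalDurationMs in the wide event shown earlier.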
The middleware captures tokens, model info, and streaming metrics. The integration captures tool execution timing. Together, they give you complete AI observability.

Composability

ai.wrap() works with models that are already wrapped by other tools. If you use supermemory, guardrails middleware, or any other model wrapper, pass the wrapped model to ai.wrap():

server/api/chat.post.ts
import { createAILogger } from 'evlog/ai'
import { withSupermemory } from '@supermemory/tools/ai-sdk'
import { createGateway } from 'ai'

const gateway = createGateway({ ... })
const ai = createAILogger(log)
const base = gateway('anthropic/claude-sonnet-4.6')
const model = ai.wrap(withSupermemory(base, 'your-org-id', { mode: 'full' }))

For explicit middleware composition, use createAIMiddleware to get the raw middleware and compose it yourself via wrapLanguageModel:

server/api/chat.post.ts
import { createAIMiddleware } from 'evlog/ai'
import { wrapLanguageModel } from 'ai'

const model = wrapLanguageModel({
  model: base,
  middleware: [createAIMiddleware(log, { toolInputs: true }), otherMiddleware],
})

createAIMiddleware returns the same middleware that createAILogger uses internally. The difference: createAIMiddleware does not include captureEmbed (embedding models don't use middleware). Use createAILogger for the full API, createAIMiddleware when you need explicit middleware ordering.
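Why ordering matters can be shown with a generic composition sketch (this is not the AI SDK's internals, and the "first entry is outermost" convention here is illustrative — check wrapLanguageModel's documentation for its actual order). The outermost wrapper observes everything the inner ones do plus their overhead:

```typescript
// Generic middleware composition: each middleware wraps the next call.
type Call = (prompt: string) => string
type Middleware = (next: Call) => Call

const logOrder: string[] = []

// A middleware that records when it runs, before and after the inner call.
const tag = (name: string): Middleware => next => prompt => {
  logOrder.push(`${name}:before`)
  const out = next(prompt)
  logOrder.push(`${name}:after`)
  return out
}

const base: Call = prompt => `echo(${prompt})`

// Compose so the first middleware in the list is outermost.
const composed = [tag('evlog'), tag('other')].reduceRight(
  (next, mw) => mw(next),
  base,
)

composed('hi')
// logOrder: ['evlog:before', 'other:before', 'other:after', 'evlog:after']
```

Placed outermost, a timing middleware measures the inner middleware's work as part of the call — one reason explicit ordering via createAIMiddleware can be preferable to nested wrapping.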