AI SDK Integration
evlog/ai gives you full AI observability by wrapping your model with middleware. Token usage, tool calls, streaming performance, cache hits, reasoning tokens, and cost estimation — all captured into the wide event automatically.
Install
Add the AI SDK as a dependency (use whichever package manager you prefer):
pnpm add ai
bun add ai
yarn add ai
npm install ai
Quick Start
The middleware watches a streamed request end to end:

- streamText(): model wrapped, request sent
- first chunk: msToFirstChunk: 234
- streaming: output tokens flowing
- tool call: getWeather('paris')
- tool result: 22°C, sunny
- streaming: final answer being generated
- stream finish: finishReason: stop, msToFinish: 4500
Two lines to add, one param to change:

Before:

import { streamText } from 'ai'

export default defineEventHandler(async (event) => {
  const result = streamText({
    model: 'anthropic/claude-sonnet-4.6',
    messages,
  })
  return result.toTextStreamResponse()
})

After:

import { streamText } from 'ai'
import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)
  const result = streamText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    messages,
  })
  return result.toTextStreamResponse()
})
Your wide event now includes:
{
  "method": "POST",
  "path": "/api/chat",
  "status": 200,
  "duration": "4.5s",
  "ai": {
    "calls": 1,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "inputTokens": 3312,
    "outputTokens": 814,
    "totalTokens": 4126,
    "reasoningTokens": 225,
    "finishReason": "stop",
    "msToFirstChunk": 234,
    "msToFinish": 4500,
    "tokensPerSecond": 180
  }
}
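The derived tokensPerSecond field is just throughput arithmetic over the captured values. A minimal sketch of one plausible derivation (a hypothetical helper, not part of the evlog API; evlog's exact rounding may differ):

```typescript
// Hypothetical helper: output tokens divided by total stream time in
// seconds, floored to a whole number. Not part of the evlog API.
function tokensPerSecond(outputTokens: number, msToFinish: number): number {
  return Math.floor(outputTokens / (msToFinish / 1000))
}

// With the numbers from the event above: 814 tokens over 4.5s
console.log(tokensPerSecond(814, 4500)) // 180
```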
How It Works
createAILogger(log, options?) returns an AILogger with the following methods:
| Method | Description |
|---|---|
| `wrap(model)` | Wraps a language model with middleware. Accepts a model string (e.g. `'anthropic/claude-sonnet-4.6'`) or a `LanguageModelV3` object. Works with `generateText`, `streamText`, and `ToolLoopAgent`. |
| `captureEmbed(result)` | Manually captures token usage, model info, and dimensions from `embed()` or `embedMany()` results. |
| `getMetadata()` | Returns a snapshot of the current execution metadata. See Access Metadata. |
| `getEstimatedCost()` | Returns the current estimated cost in dollars when a cost map is configured. |
| `onUpdate(callback)` | Subscribes to metadata updates. Fires on every step, embed, error, and integration finish. |
The middleware intercepts calls at the provider level. It does not touch your callbacks, prompts, or responses. Captured data flows through the normal evlog pipeline (sampling, enrichers, drains) and lands in Axiom, Better Stack, or wherever you drain to.
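Under the hood, getEstimatedCost() reduces to per-token arithmetic over the configured cost map. A self-contained sketch of that arithmetic, with an illustrative price map (the prices and the cost-map shape here are assumptions for illustration, not evlog's actual configuration format):

```typescript
// Illustrative per-million-token prices; NOT real evlog configuration
// and NOT real provider pricing.
const costMap: Record<string, { inputPerM: number; outputPerM: number }> = {
  'anthropic/claude-sonnet-4.6': { inputPerM: 3, outputPerM: 15 },
}

// Cost = input tokens at the input rate plus output tokens at the
// output rate; unknown models contribute nothing.
function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const price = costMap[model]
  if (!price) return 0
  return (inputTokens / 1_000_000) * price.inputPerM
    + (outputTokens / 1_000_000) * price.outputPerM
}

// With the token counts from the wide event above:
const cost = estimateCost('anthropic/claude-sonnet-4.6', 3312, 814)
// ≈ $0.0221 with these illustrative prices
```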
Where to next
- Usage Patterns: streamText, generateText, multi-step agents, RAG, multiple models. Every common pattern, ready to copy.
- Options
Works With All Frameworks
evlog/ai works with any framework that evlog supports:
Nuxt / Nitro:

import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'

const log = useLogger(event)
const ai = createAILogger(log)
Next.js:

import { withEvlog, useLogger } from '@/lib/evlog'
import { createAILogger } from 'evlog/ai'

export const POST = withEvlog(async () => {
  const log = useLogger()
  const ai = createAILogger(log)
  // ...
})
Express:

import { createAILogger } from 'evlog/ai'

app.post('/api/chat', (req, res) => {
  const ai = createAILogger(req.log)
  // ...
})
Hono:

import { createAILogger } from 'evlog/ai'

app.post('/api/chat', (c) => {
  const ai = createAILogger(c.get('log'))
  // ...
})
Fastify:

import { createAILogger } from 'evlog/ai'

app.post('/api/chat', async (request) => {
  const ai = createAILogger(request.log)
  // ...
})
NestJS:

import { useLogger } from 'evlog/nestjs'
import { createAILogger } from 'evlog/ai'

const log = useLogger()
const ai = createAILogger(log)
Standalone:

import { createLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'

const log = createLogger()
const ai = createAILogger(log)
// ...
log.emit()