Audit Logs

Drains & Integrity

Three building blocks: auditEnricher fills context, auditOnly routes audits to a dedicated drain, and signed adds tamper-evident integrity. Each is opt-in and replaceable.

auditEnricher()

auditEnricher() populates event.audit.context.{requestId, traceId, ip, userAgent, tenantId}. Skip it and ship a custom enricher if your strategy differs.

server/plugins/evlog.ts
import { auditEnricher } from 'evlog'

export default defineNitroPlugin((nitro) => {
  nitro.hooks.hook('evlog:enrich', auditEnricher())
})

For multi-tenant apps and custom session bridges, pass options:

nitro.hooks.hook('evlog:enrich', auditEnricher({
  tenantId: ctx => ctx.event.tenant as string | undefined,
  bridge: { getSession: async ctx => readSessionActor(ctx.headers) },
}))

Without auditEnricher, audit.context stays empty — auditors and incident responders need at least requestId and ip to triangulate a recorded action.
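
If the defaults don't fit, a custom enricher is just a function registered on the same hook. A minimal sketch, assuming the hook receives a context exposing request headers and the in-flight event (the shapes below are illustrative, not evlog's actual types):

```typescript
// Illustrative shapes only; the real evlog enrich-hook context may differ.
interface EnrichCtx {
  headers: Record<string, string | undefined>
  event: { audit?: { context?: Record<string, unknown> } }
}

// Custom enricher: fill only the fields auditors need to
// triangulate an action (requestId and ip at minimum).
function customAuditEnricher() {
  return (ctx: EnrichCtx) => {
    if (!ctx.event.audit) return // only touch audit events
    ctx.event.audit.context = {
      ...ctx.event.audit.context,
      requestId: ctx.headers['x-request-id'],
      // first hop of x-forwarded-for is the client IP
      ip: ctx.headers['x-forwarded-for']?.split(',')[0]?.trim(),
      userAgent: ctx.headers['user-agent'],
    }
  }
}
```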

auditOnly()

Why filter audits to a separate sink? Three reasons: cost (audit volume is tiny next to product telemetry — keep them separate so retention costs don't explode), permissions (the audit dataset should be read-only for engineers and write-only for the app), and retention (audits often live 7+ years; product logs rarely live more than 90 days).

auditOnly(drain) only forwards events with an audit field. Compose with any drain:

import { auditOnly } from 'evlog'
import { createAxiomDrain } from 'evlog/axiom'

// Send audits to a dedicated Axiom dataset:
nitro.hooks.hook('evlog:drain', auditOnly(
  createAxiomDrain({ dataset: 'audit', token: process.env.AXIOM_AUDIT_TOKEN }),
))

Set await: true to make audit writes synchronous, so audits are never fire-and-forget and the write completes before the request moves on:

auditOnly(createFsDrain({ dir: '.audit' }), { await: true })

The await: true flag adds a small amount of latency to each request that records an audit (one synchronous drain call), but guarantees the audit hits disk before the response is sent. For compliance-grade audits, that trade-off is worth it.
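
Conceptually the wrapper is tiny. A simplified sketch of what an auditOnly-style filter does, with illustrative types rather than evlog's real ones:

```typescript
// Illustrative types; evlog's actual event and drain shapes may differ.
type LogEvent = { audit?: unknown; [key: string]: unknown }
type Drain = (event: LogEvent) => void | Promise<void>

// Forward only events carrying an `audit` field; optionally await the
// drain so the write completes before the caller continues.
function auditOnlySketch(drain: Drain, opts: { await?: boolean } = {}): Drain {
  return (event) => {
    if (event.audit == null) return // not an audit event: drop silently
    const result = drain(event)
    return opts.await ? result : undefined // fire-and-forget by default
  }
}
```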

signed()

signed(drain, opts) adds tamper-evident integrity. Two strategies:

| Strategy | What it adds | Use case |
| --- | --- | --- |
| 'hmac' | event.audit.signature (HMAC of the canonical event) | Single-event integrity check (any later mutation fails verification). |
| 'hash-chain' | event.audit.prevHash and event.audit.hash | A verifiable chain: deletions and reordering also become detectable. |

What signed() actually buys you. Detection, not prevention. Anyone with write access to the underlying sink can still nuke the file or table — but the chain proves which events were dropped or modified after the fact. Skip signed() if you already write to an append-only / WORM store (S3 Object Lock, Postgres with row-level immutability, BigQuery append-only tables); doubling integrity layers just adds latency without raising the bar.

HMAC

Each event gets a signature. Tampering with one row breaks that row's verification, but doesn't break later rows.

import { signed } from 'evlog'

signed(drain, { strategy: 'hmac', secret: process.env.AUDIT_SECRET! })
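
Verification is the mirror image: recompute the HMAC over the canonical event without its signature and compare. A sketch using Node's crypto, assuming the canonical form is the JSON of the event minus audit.signature (an assumption; evlog's actual canonicalization may differ):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

type SignedEvent = {
  audit: { signature?: string; [k: string]: unknown }
  [k: string]: unknown
}

function verifyHmac(event: SignedEvent, secret: string): boolean {
  const { signature, ...audit } = event.audit
  if (!signature) return false
  // Assumed canonical form: the event JSON without audit.signature.
  const canonical = JSON.stringify({ ...event, audit })
  const expected = createHmac('sha256', secret).update(canonical).digest('hex')
  // Constant-time compare so verification doesn't leak signature bytes.
  return expected.length === signature.length &&
    timingSafeEqual(Buffer.from(expected), Buffer.from(signature))
}
```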

Hash-chain

Each event references the previous event's hash. Deleting or modifying any row breaks the chain from that point forward, so a verifier can pinpoint exactly where the tampering occurred.

import fs from 'node:fs/promises'

signed(drain, {
  strategy: 'hash-chain',
  state: {
    load: () => fs.readFile('.audit/head', 'utf8').catch(() => null),
    save: (h) => fs.writeFile('.audit/head', h),
  },
})

The state config is required for cross-process or durable chains: load the previous head hash from your own store (Redis, Postgres, file) before each event, save the new head after.
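
Any async key-value store satisfies that load/save contract. A sketch of a state adapter, with an in-memory Map standing in for Redis or Postgres:

```typescript
// An in-memory Map stands in for a durable store (Redis, Postgres, etc.);
// swap the two one-liners for real client calls in production.
const kv = new Map<string, string>()

const chainState = {
  // Load the previous head hash (null on first run).
  load: async (): Promise<string | null> => kv.get('audit:head') ?? null,
  // Persist the new head after each signed event.
  save: async (head: string): Promise<void> => { kv.set('audit:head', head) },
}
```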

Example chain of three events, verified intact:

  1. invoice.refund · success    prev: (none)       hash: 3f2c8e1a
  2. user.update · denied        prev: 3f2c8e1a     hash: 9a1b4d7c
  3. apiKey.revoke · success     prev: 9a1b4d7c     hash: c4e7f2b9

All links verify: chain intact, 3 events.
A CLI to walk and verify the chain (evlog audit verify) is on the roadmap. Until then, validate by recomputing the hashes of stored events and comparing each prevHash against the previous event's hash.
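
Until the CLI ships, a hand-rolled verifier only needs that recompute-and-compare loop. A sketch, assuming each event's hash covers the canonical JSON of the event including prevHash but excluding hash itself (an assumption; evlog's real canonical form and key ordering may differ):

```typescript
import { createHash } from 'node:crypto'

type ChainEvent = {
  audit: { hash: string; prevHash: string | null; [k: string]: unknown }
  [k: string]: unknown
}

// Walk stored events in order; return the index of the first broken
// link, or -1 if the whole chain verifies.
function verifyChain(events: ChainEvent[]): number {
  let head: string | null = null
  for (const [i, ev] of events.entries()) {
    const { hash, prevHash, ...audit } = ev.audit
    if (prevHash !== head) return i // link to previous event broken
    // Assumed canonical form; note plain JSON.stringify is key-order
    // sensitive, so a real verifier would sort keys deterministically.
    const canonical = JSON.stringify({ ...ev, audit: { ...audit, prevHash } })
    const expected = createHash('sha256').update(canonical).digest('hex')
    if (expected !== hash) return i // event mutated after signing
    head = hash
  }
  return -1
}
```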