Silgi

Analytics

Built-in monitoring dashboard with session tracking, deep error tracing, AI-ready markdown export, and custom span tracking.

Built-in analytics for Silgi. No Prometheus, no Grafana, no external infrastructure. One flag gives you a live dashboard, full error context, session tracking, and one-click markdown export for AI debugging.

Quick start

Pass analytics: true to serve() or handler():

s.serve(appRouter, {
  port: 3000,
  analytics: true,
})

Open http://localhost:3000/analytics in your browser.

What it tracks

| Metric | How | Overhead |
| --- | --- | --- |
| Request count per procedure | Atomic counter | ~0 |
| Error log with full context | Input, headers, stack trace, spans | ~0 |
| Latency (avg, p50, p95, p99) | Ring buffer + performance.now() | ~2ns |
| Requests/sec | Count / elapsed time | ~0 |
| Time-series (sparkline) | 1-second windows | ~0 |
| Custom spans (DB, API calls) | trace() helper | ~0 when unused |
| HTTP request grouping | Request accumulator with IDs | ~0 |
| Session tracking | Cookie-based (_sid) | ~0 |
| Response headers | Captured after Response is built | ~0 |

All data lives in memory. No disk I/O, no network calls, no serialization until the dashboard is opened.

Dashboard

The dashboard is a self-contained single-file app served at /analytics. It auto-refreshes every 2 seconds and groups the most important signals into overview, request, session, and error workflows.

Overview tab:

  • Health summary banner with busiest, noisiest, and slowest procedure callouts
  • Total requests, req/s, error rate, and average latency stat cards
  • Traffic console with peak windows, focus-mode panels for traffic, latency, and failures, plus a sortable per-procedure breakdown

Requests tab:

  • Recent traced requests with procedure, status, and latency filters plus sortable columns
  • Clickable session ID badges to navigate to the session detail
  • Per-request detail pages with span waterfall timing, captured input/output payloads, request and response headers, timing breakdown by category (db, cache, http, etc.), and session link

Sessions tab:

  • All active sessions with request count, error count, total/average latency, and procedures called
  • Search and sortable columns
  • Session detail page with:
    • Stat strip (requests, errors, wall clock time, CPU time, avg latency, slowest/fastest request)
    • Procedure journey — visual breadcrumb trail showing the sequence of procedures called across the session
    • Session flow Gantt chart — timeline visualization of when each request happened relative to session start
    • Clickable request timeline with inline span waterfall and input/output preview
    • Per-request mini timing bars showing db/cache/http breakdown at a glance
    • Method and status code breakdown
    • Time by category chart with tooltips
    • Per-procedure stats cards (call count, total/avg duration, spans, errors)
    • Copy for AI (Markdown) and Copy JSON for the entire session

Errors tab:

  • Filterable error log with procedure, severity, and trace-presence controls
  • Detail pages with full input, headers, stack trace, and traced spans for each error
  • Clickable request ID badge to navigate to the HTTP request that caused the error
  • Copy for AI (Markdown) — one click to copy a structured markdown document, paste into any AI assistant
  • Copy JSON — raw error data for programmatic use (sensitive headers auto-redacted)

Request IDs

Every response includes an x-request-id header with a unique Snowflake-style ID. The ID is a 13-character Base36 string that is lexicographically time-sorted and collision-resistant across processes.

x-request-id: 1a2b3c4d5e6f7

Request IDs link HTTP requests to errors in the dashboard — clicking an error's request badge navigates to the full request detail.
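The format can be approximated with a fixed-width base36 timestamp plus a random suffix. The sketch below illustrates the general technique only; it is not Silgi's actual bit layout, and the function name is invented:

```typescript
// Illustrative Snowflake-style ID: a fixed-width base36 timestamp prefix
// keeps IDs lexicographically time-sorted, and a random suffix makes
// collisions across processes unlikely. NOT Silgi's internal layout.
function snowflakeish(now: number = Date.now()): string {
  // 8 base36 chars cover millisecond timestamps far into the future
  const time = now.toString(36).padStart(8, '0')
  // 5 random base36 chars: about 60 million values per millisecond
  const rand = Math.floor(Math.random() * 36 ** 5)
    .toString(36)
    .padStart(5, '0')
  return time + rand // 13 characters total
}
```

Fixed-width padding is what makes plain string comparison agree with time order.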

Session tracking

Analytics automatically tracks user sessions via a _sid cookie. Sessions group multiple HTTP requests from the same browser or client, letting you see the full user journey.

  • Cookie: _sid (HttpOnly, SameSite=Lax, 1-year expiry)
  • ID format: Same Snowflake format as request IDs — unique and time-sorted
  • No server-side session state: Session data is derived from request entries client-side

Sessions appear in the dashboard's Sessions tab and as clickable badges in request detail pages.
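Because sessions are derived rather than stored server-side, the aggregation is a simple group-by over request entries. A minimal sketch of the client-side idea, using an invented MiniRequest subset rather than the full request entry shape:

```typescript
// Derive per-session aggregates from flat request entries, mirroring
// what the dashboard does client-side. MiniRequest is an illustrative
// subset of the real entry shape, not Silgi's actual type.
interface MiniRequest {
  sessionId: string
  duration: number
  hasError: boolean
}

function groupSessions(requests: MiniRequest[]) {
  const sessions = new Map<string, { count: number; errors: number; totalMs: number }>()
  for (const r of requests) {
    const s = sessions.get(r.sessionId) ?? { count: 0, errors: 0, totalMs: 0 }
    s.count += 1
    s.errors += r.hasError ? 1 : 0
    s.totalMs += r.duration
    sessions.set(r.sessionId, s)
  }
  return sessions
}
```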

Tracing DB queries and API calls

When analytics is enabled, every request context gets a trace() method. Use it to measure any async operation inside your procedures:

const listUsers = s.query().$resolve(async ({ ctx }) => {
  const users = await ctx.trace('db.users.findMany', () => db.users.findMany())
  const count = await ctx.trace('db.users.count', () => db.users.count())
  return { users, count }
})

Each traced operation records its name, duration, start offset, and error status. Spans appear in the request detail page with a waterfall timeline, and in the error detail panel when a request fails.
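Mechanically, such a helper is a timer wrapped around the awaited function. A simplified sketch of the idea (traceInto and Span are illustrative names, not Silgi's implementation):

```typescript
// Simplified mechanics of a trace() helper: record name, start offset,
// duration, and error status around any async operation.
interface Span {
  name: string
  start: number
  duration: number
  error?: string
}

async function traceInto<T>(spans: Span[], name: string, fn: () => Promise<T> | T): Promise<T> {
  const start = performance.now()
  const span: Span = { name, start, duration: 0 }
  spans.push(span)
  try {
    return await fn()
  } catch (err) {
    span.error = String(err) // mark the span failed, then rethrow
    throw err
  } finally {
    span.duration = performance.now() - start
  }
}
```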

Span options

The trace() method accepts an options object as the third argument:

await ctx.trace('db.users.findMany', () => db.users.findMany(), {
  kind: 'db',
  detail: 'SELECT * FROM users WHERE active = true',
  input: { filter: 'active' },
  output: (users) => ({ count: users.length }),
})

| Option | Type | Description |
| --- | --- | --- |
| kind | SpanKind | Category for color-coding in the dashboard (see below) |
| detail | string | Extra info shown in the expanded span view (SQL query, URL, etc.) |
| input | unknown | Input data captured for this span (shown in expanded detail) |
| output | unknown \| (result) => unknown | Output data — a value or a function that receives the result |
| procedure | { input?, output? } | Set procedure-level input/output (recorded on the HTTP request) |

The output option can be a function that derives the captured value from the trace result, so you avoid storing the full response:

// Capture just the count, not the full user array
await ctx.trace('db.users.findMany', () => db.users.findMany(), {
  input: { limit: 10 },
  output: (users) => ({ count: users.length }),
})

Procedure-level capture

Use procedure to record input/output at the HTTP request level (visible in the request detail page), not just on the individual span:

const getUser = s.$input(z.object({ id: z.number() })).$resolve(async ({ input, ctx }) => {
  return ctx.trace('db.users.findById', () => db.users.findById(input.id), {
    input: { id: input.id },
    output: (user) => ({ name: user.name }),
    procedure: {
      input,
      output: (user) => user,
    },
  })
})

Span kinds

Spans are automatically categorized by their name prefix. You can override this with the kind option.

| Kind | Auto-detected when name contains | Dashboard color |
| --- | --- | --- |
| db | db., sql, prisma, drizzle, query, mongo | Purple |
| http | http., fetch, api. | Blue |
| cache | cache., redis, memcache | Emerald |
| queue | queue, publish, nats, kafka | Amber |
| email | email, smtp, ses | Orange |
| ai | ai, llm, openai, gemini | Cyan |
| custom | Anything else | Gray |
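The auto-detection above boils down to a substring scan over the span name. A sketch of those documented rules (the match ordering is my assumption; Silgi's actual matching logic may differ):

```typescript
type SpanKind = 'db' | 'http' | 'cache' | 'queue' | 'email' | 'ai' | 'custom'

// Substring-based kind detection, following the table above.
// Order matters: earlier rows win when a name matches several patterns
// (e.g. 'send-email' contains both 'email' and the 'ai' substring).
const KIND_PATTERNS: [SpanKind, string[]][] = [
  ['db', ['db.', 'sql', 'prisma', 'drizzle', 'query', 'mongo']],
  ['http', ['http.', 'fetch', 'api.']],
  ['cache', ['cache.', 'redis', 'memcache']],
  ['queue', ['queue', 'publish', 'nats', 'kafka']],
  ['email', ['email', 'smtp', 'ses']],
  ['ai', ['ai', 'llm', 'openai', 'gemini']],
]

function detectKind(name: string): SpanKind {
  const lower = name.toLowerCase()
  for (const [kind, patterns] of KIND_PATTERNS) {
    if (patterns.some((p) => lower.includes(p))) return kind
  }
  return 'custom'
}
```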

Standalone trace() helper

If you prefer an explicit import over the context method, use the standalone trace() function. It works whether analytics is enabled or not — when disabled, it calls the function directly with zero overhead:

import { trace } from 'silgi/analytics'

const listData = s.query().$resolve(async ({ ctx }) => {
  const users = await trace(ctx, 'db.users.findMany', () => db.users.findMany())
  const weather = await trace(ctx, 'api.weather', () => fetch(weatherUrl))
  return { users, weather }
})

This is useful when you want tracing code that works the same regardless of whether the handler has analytics turned on.

Copy for AI

Both errors and requests support one-click copy for AI debugging:

  • Errors: Full context with procedure path, error code, input, headers (authorization redacted), stack trace, traced spans, and total duration
  • Requests: HTTP metadata, all procedure calls with input/output, span waterfall with timing breakdown
  • Timing: Focused performance view with just the span timeline and timing categories
  • Sessions: Full session summary with all requests and their procedures

The markdown format is optimized for pasting into Claude, ChatGPT, or any AI assistant. Each export ends with analysis prompts for performance optimization.

Protecting the dashboard

By default, the analytics dashboard is open to anyone who can reach the URL. In production you should always set the auth option.

Token auth

Pass a secret string. The dashboard will prompt for the token on first visit and store it in the browser's session storage.

s.serve(appRouter, {
  analytics: {
    auth: 'my-secret-token',
  },
})

The token is checked against the Authorization: Bearer <token> header or the silgi-auth cookie (set automatically by the login page).

Custom auth

Pass a function for full control — check cookies, IP allowlists, OAuth tokens, or anything else:

s.serve(appRouter, {
  analytics: {
    auth: (req) => {
      const cookie = req.headers.get('cookie') ?? ''
      return cookie.includes('admin_session=valid')
    },
  },
})

The function receives the raw Request and returns boolean or Promise<boolean>. Return true to allow access, false to block with a 401.
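If your custom function checks a static token, prefer a constant-time comparison so the check does not leak information through timing. A sketch using node:crypto (makeTokenAuth is an invented helper, not part of Silgi; adapt the header parsing to your scheme):

```typescript
import { timingSafeEqual } from 'node:crypto'

// Build a custom auth function that does a constant-time comparison of
// the Authorization: Bearer token against a shared secret.
function makeTokenAuth(secret: string) {
  const expected = Buffer.from(secret)
  return (req: Request): boolean => {
    const header = req.headers.get('authorization') ?? ''
    const token = header.startsWith('Bearer ') ? header.slice(7) : ''
    const actual = Buffer.from(token)
    // timingSafeEqual requires equal lengths; a length mismatch is a miss
    return actual.length === expected.length && timingSafeEqual(actual, expected)
  }
}
```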

Never deploy analytics without auth in production. The dashboard exposes request payloads, headers, error stacks, and internal procedure names.

Configuration

Pass an options object instead of true to customize buffer sizes and history:

s.serve(appRouter, {
  analytics: {
    auth: 'my-secret-token',
    bufferSize: 2048,
    historySeconds: 300,
    maxErrors: 200,
    maxRequests: 500,
  },
})

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| auth | string \| (req: Request) => boolean | undefined | Token string or custom auth function |
| bufferSize | number | 1024 | Latency samples to keep per procedure (ring buffer) |
| historySeconds | number | 120 | Time-series sparkline history in seconds |
| maxErrors | number | 100 | Maximum error log entries to keep |
| maxRequests | number | 200 | Maximum recent request entries to keep |

All buffers are fixed-size ring buffers — memory usage stays constant regardless of traffic volume.
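The constant-memory claim follows directly from the ring-buffer design: once the buffer is full, each new sample overwrites the oldest. An illustrative sketch (not Silgi's internals) with a nearest-rank percentile readout:

```typescript
// Fixed-capacity ring buffer: memory stays constant no matter how much
// traffic arrives, because new samples overwrite the oldest once full.
class LatencyBuffer {
  private samples: number[]
  private next = 0
  private filled = 0

  constructor(private capacity: number) {
    this.samples = new Array(capacity)
  }

  push(ms: number): void {
    this.samples[this.next] = ms
    this.next = (this.next + 1) % this.capacity
    if (this.filled < this.capacity) this.filled++
  }

  // Nearest-rank percentile over whatever samples are currently held
  percentile(p: number): number {
    const held = this.samples.slice(0, this.filled).sort((a, b) => a - b)
    if (held.length === 0) return 0
    const idx = Math.min(held.length - 1, Math.ceil((p / 100) * held.length) - 1)
    return held[Math.max(0, idx)]
  }
}
```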

With handler()

Works the same way when using s.handler() instead of s.serve():

const handler = s.handler(appRouter, { analytics: true })

Bun.serve({ port: 3000, fetch: handler })
// Dashboard at /analytics

JSON API

The dashboard reads from three JSON endpoints. You can also query them directly for custom integrations, alerting, or external dashboards:

| Endpoint | Description |
| --- | --- |
| /analytics/_api/stats | Procedure metrics, latency percentiles, time-series |
| /analytics/_api/errors | Full error log with input, headers, stack trace, spans |
| /analytics/_api/requests | Recent requests with trace spans and session IDs |

All endpoints return JSON with cache-control: no-cache.

Stats response shape

interface AnalyticsSnapshot {
  uptime: number // seconds since start
  totalRequests: number
  totalErrors: number
  errorRate: number // percentage (0-100)
  requestsPerSecond: number
  avgLatency: number // milliseconds
  procedures: Record<
    string,
    {
      count: number
      errors: number
      errorRate: number
      latency: { avg: number; p50: number; p95: number; p99: number }
      lastError: string | null
      lastErrorAt: number | null
    }
  >
  timeSeries: Array<{ time: number; requests: number; errors: number }>
}
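As an example consumer, an external alerter could poll the stats endpoint and apply thresholds to the snapshot. StatsLike below is an assumed subset of the response for illustration; match the field names to what your build actually returns:

```typescript
// Threshold check over a polled stats snapshot. StatsLike is an
// assumed subset of the /analytics/_api/stats response shape.
interface StatsLike {
  errorRate: number // percent, 0-100
  avgLatency: number // milliseconds
}

function shouldAlert(stats: StatsLike, maxErrorRate = 5, maxAvgLatencyMs = 500): string[] {
  const alerts: string[] = []
  if (stats.errorRate > maxErrorRate) {
    alerts.push(`error rate ${stats.errorRate}% > ${maxErrorRate}%`)
  }
  if (stats.avgLatency > maxAvgLatencyMs) {
    alerts.push(`avg latency ${stats.avgLatency}ms > ${maxAvgLatencyMs}ms`)
  }
  return alerts
}
```

In practice you would fetch /analytics/_api/stats (sending the Authorization: Bearer header when auth is enabled), parse the JSON, and pass it through a check like this on a timer.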

Error response shape

interface ErrorEntry {
  id: number
  requestId: string // links to the HTTP request that caused this error
  timestamp: number
  procedure: string
  method: string
  code: string // e.g. 'BAD_REQUEST', 'NOT_FOUND', 'INTERNAL_SERVER_ERROR'
  status: number
  message: string
  input: unknown
  headers: Record<string, string> // authorization redacted
  duration: number
  spans: TraceSpan[]
}

Request response shape

interface RequestEntry {
  id: number
  requestId: string // unique ID (also in x-request-id response header)
  sessionId: string // persistent session ID (from cookie)
  timestamp: number
  duration: number
  method: string
  path: string
  userAgent: string
  requestHeaders: Record<string, string>
  responseHeaders: Record<string, string>
  ip: string
  status: number
  procedures: ProcedureCall[]
  hasError: boolean
}

interface ProcedureCall {
  name: string
  start: number
  duration: number
  input: unknown
  output: unknown
  spans: TraceSpan[]
  error?: string
}

interface TraceSpan {
  name: string
  kind: 'db' | 'http' | 'cache' | 'queue' | 'email' | 'ai' | 'custom'
  start: number
  duration?: number
  detail?: string
  input?: unknown
  output?: unknown
  error?: string
}

Server-side markdown export

You can also generate the markdown programmatically for Slack notifications, log files, or custom alerting:

import { errorToMarkdown, requestToMarkdown, sessionToMarkdown } from 'silgi/analytics'

// Error context for AI debugging
const errorMarkdown = errorToMarkdown(errorEntry)

// Full request with all procedures and spans
const requestMarkdown = requestToMarkdown(requestEntry)

// Entire session with all requests
const sessionMarkdown = sessionToMarkdown(sessionRequests, sessionId)

For production monitoring with alerting, long-term storage, and multi-service correlation, use OpenTelemetry with your preferred backend. Built-in analytics is designed for development and single-service visibility.
