Analytics
Built-in monitoring dashboard with session tracking, deep error tracing, AI-ready markdown export, and custom span tracking.
Built-in analytics for Silgi. No Prometheus, no Grafana, no external infrastructure. One flag gives you a live dashboard, full error context, session tracking, and one-click markdown export for AI debugging.
Quick start
Pass analytics: true to serve() or handler():
```ts
s.serve(appRouter, {
  port: 3000,
  analytics: true,
})
```

Open http://localhost:3000/analytics in your browser.
What it tracks
| Metric | How | Overhead |
|---|---|---|
| Request count per procedure | Atomic counter | ~0 |
| Error log with full context | Input, headers, stack trace, spans | ~0 |
| Latency (avg, p50, p95, p99) | Ring buffer + performance.now() | ~2ns |
| Requests/sec | Count / elapsed time | ~0 |
| Time-series (sparkline) | 1-second windows | ~0 |
| Custom spans (DB, API calls) | trace() helper | ~0 when unused |
| HTTP request grouping | Request accumulator with IDs | ~0 |
| Session tracking | Cookie-based (_sid) | ~0 |
| Response headers | Captured after Response is built | ~0 |
All data lives in memory. No disk I/O, no network calls, no serialization until the dashboard is opened.
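The latency path above can be sketched as a fixed-size ring buffer that overwrites the oldest sample, with percentiles read from a sorted copy on demand. This is an illustration of the technique, not Silgi's internal class:

```ts
// Illustrative fixed-size latency ring buffer with percentile reads.
// Memory stays constant: old samples are overwritten once the buffer is full.
class RingBuffer {
  private samples: number[]
  private index = 0
  private filled = false

  constructor(private size = 1024) {
    this.samples = new Array<number>(size)
  }

  push(ms: number): void {
    this.samples[this.index] = ms
    this.index = (this.index + 1) % this.size
    if (this.index === 0) this.filled = true
  }

  percentile(p: number): number {
    const used = this.filled ? this.samples : this.samples.slice(0, this.index)
    if (used.length === 0) return 0
    const sorted = [...used].sort((a, b) => a - b)
    const rank = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length))
    return sorted[rank]
  }
}

const buf = new RingBuffer(8)
for (const ms of [5, 7, 3, 9, 120, 4, 6, 8]) buf.push(ms)
console.log(buf.percentile(50)) // 7
console.log(buf.percentile(95)) // 120
```

Sorting on read keeps the write path (one array store and an index increment) as cheap as the overhead table claims.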
Dashboard
The dashboard is a self-contained single-file app served at /analytics. It auto-refreshes every 2 seconds and groups the most important signals into overview, request, session, and error workflows.
Overview tab:
- Health summary banner with busiest, noisiest, and slowest procedure callouts
- Total requests, req/s, error rate, and average latency stat cards
- Traffic console with peak windows, focus-mode panels for traffic, latency, and failures, plus a sortable per-procedure breakdown
Requests tab:
- Recent traced requests with procedure, status, and latency filters plus sortable columns
- Clickable session ID badges to navigate to the session detail
- Per-request detail pages with span waterfall timing, captured input/output payloads, request and response headers, timing breakdown by category (db, cache, http, etc.), and session link
Sessions tab:
- All active sessions with request count, error count, total/average latency, and procedures called
- Search and sortable columns
- Session detail page with:
- Stat strip (requests, errors, wall clock time, CPU time, avg latency, slowest/fastest request)
- Procedure journey — visual breadcrumb trail showing the sequence of procedures called across the session
- Session flow Gantt chart — timeline visualization of when each request happened relative to session start
- Clickable request timeline with inline span waterfall and input/output preview
- Per-request mini timing bars showing db/cache/http breakdown at a glance
- Method and status code breakdown
- Time by category chart with tooltips
- Per-procedure stats cards (call count, total/avg duration, spans, errors)
- Copy for AI (Markdown) and Copy JSON for the entire session
Errors tab:
- Filterable error log with procedure, severity, and trace-presence controls
- Detail pages with full input, headers, stack trace, and traced spans for each error
- Clickable request ID badge to navigate to the HTTP request that caused the error
- Copy for AI (Markdown) — one click to copy a structured markdown document, paste into any AI assistant
- Copy JSON — raw error data for programmatic use (sensitive headers auto-redacted)
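The redaction behavior can be pictured with a small helper like this; the helper name and the exact set of redacted headers are assumptions for illustration:

```ts
// Hypothetical sketch of sensitive-header redaction before export.
// Which headers Silgi actually redacts may differ.
const SENSITIVE = new Set(['authorization', 'cookie', 'x-api-key'])

function redactHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {}
  for (const [key, value] of Object.entries(headers)) {
    out[key] = SENSITIVE.has(key.toLowerCase()) ? '[redacted]' : value
  }
  return out
}
```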
Request IDs
Every response includes an x-request-id header with a unique Snowflake-style ID. The ID is a 13-character Base36 string that is lexicographically time-sorted and collision-resistant across processes.
```
x-request-id: 1a2b3c4d5e6f7
```

Request IDs link HTTP requests to errors in the dashboard — clicking an error's request badge navigates to the full request detail.
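An ID with these properties can be sketched as a millisecond timestamp in base36 followed by random base36 characters. This illustrates the format, not Silgi's exact encoder:

```ts
// Sketch of a 13-char, time-sorted base36 ID: 8 chars of millisecond time
// plus 5 random chars. Base36 digits sort in ASCII order, so IDs with the
// same prefix length sort by creation time.
function requestId(now = Date.now()): string {
  const time = now.toString(36).padStart(8, '0')
  let rand = ''
  for (let i = 0; i < 5; i++) rand += Math.floor(Math.random() * 36).toString(36)
  return time + rand // 13 chars, lexicographically time-sorted
}

const a = requestId(1_700_000_000_000)
const b = requestId(1_700_000_000_001)
console.log(a < b, a.length) // true 13
```

The random suffix gives collision resistance across processes without any coordination.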
Session tracking
Analytics automatically tracks user sessions via a _sid cookie. Sessions group multiple HTTP requests from the same browser or client, letting you see the full user journey.
- Cookie: _sid — HttpOnly, SameSite=Lax, 1-year expiry
- ID format: Same Snowflake format as request IDs — unique and time-sorted
- No server-side session state: Session data is derived from request entries client-side
Sessions appear in the dashboard's Sessions tab and as clickable badges in request detail pages.
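Reading the _sid cookie server-side amounts to a small parser; this helper is a hypothetical illustration of what Silgi does internally:

```ts
// Illustrative parser for the _sid session cookie from a Cookie header.
function readSessionId(cookieHeader: string | null): string | undefined {
  if (!cookieHeader) return undefined
  for (const pair of cookieHeader.split(';')) {
    const [name, ...rest] = pair.trim().split('=')
    if (name === '_sid') return rest.join('=')
  }
  return undefined
}

console.log(readSessionId('theme=dark; _sid=1a2b3c4d5e6f7')) // 1a2b3c4d5e6f7
```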
Tracing DB queries and API calls
When analytics is enabled, every request context gets a trace() method. Use it to measure any async operation inside your procedures:
```ts
const listUsers = s.query().$resolve(async ({ ctx }) => {
  const users = await ctx.trace('db.users.findMany', () => db.users.findMany())
  const count = await ctx.trace('db.users.count', () => db.users.count())
  return { users, count }
})
```

Each traced operation records its name, duration, start offset, and error status. Spans appear in the request detail page with a waterfall timeline, and in the error detail panel when a request fails.
Span options
The trace() method accepts an options object as the third argument:
```ts
await ctx.trace('db.users.findMany', () => db.users.findMany(), {
  kind: 'db',
  detail: 'SELECT * FROM users WHERE active = true',
  input: { filter: 'active' },
  output: (users) => ({ count: users.length }),
})
```

| Option | Type | Description |
|---|---|---|
kind | SpanKind | Category for color-coding in the dashboard (see below) |
detail | string | Extra info shown in the expanded span view (SQL query, URL, etc.) |
input | unknown | Input data captured for this span (shown in expanded detail) |
output | unknown \| (result) => unknown | Output data — a value or a function that receives the result |
procedure | { input?, output? } | Set procedure-level input/output (recorded on the HTTP request) |
The output option can be a function to derive the captured value from the trace result, avoiding capturing the full response:
```ts
// Capture just the count, not the full user array
await ctx.trace('db.users.findMany', () => db.users.findMany(), {
  input: { take: 10 },
  output: (users) => ({ count: users.length }),
})
```

Procedure-level capture
Use procedure to record input/output at the HTTP request level (visible in the request detail page), not just on the individual span:
```ts
const getUser = s.$input(z.object({ id: z.number() })).$resolve(async ({ input, ctx }) => {
  return ctx.trace('db.users.findById', () => db.users.findById(input.id), {
    input: { id: input.id },
    output: (user) => ({ name: user.name }),
    procedure: {
      input,
      output: (user) => user,
    },
  })
})
```

Span kinds
Spans are automatically categorized by their name prefix. You can override this with the kind option.
| Kind | Auto-detected when name contains | Dashboard color |
|---|---|---|
db | db., sql, prisma, drizzle, query, mongo | Purple |
http | http., fetch, api. | Blue |
cache | cache., redis, memcache | Emerald |
queue | queue, publish, nats, kafka | Amber |
email | email, smtp, ses | Orange |
ai | ai, llm, openai, gemini | Cyan |
custom | Anything else | Gray |
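The auto-detection rule from the table can be sketched as a substring match over the span name. The exact matching order and rules in Silgi may differ:

```ts
// Sketch of name-based span kind detection, following the table above.
type SpanKind = 'db' | 'http' | 'cache' | 'queue' | 'email' | 'ai' | 'custom'

const KIND_HINTS: Array<[SpanKind, string[]]> = [
  ['db', ['db.', 'sql', 'prisma', 'drizzle', 'query', 'mongo']],
  ['http', ['http.', 'fetch', 'api.']],
  ['cache', ['cache.', 'redis', 'memcache']],
  ['queue', ['queue', 'publish', 'nats', 'kafka']],
  ['email', ['email', 'smtp', 'ses']],
  ['ai', ['ai', 'llm', 'openai', 'gemini']],
]

function detectKind(name: string): SpanKind {
  const lower = name.toLowerCase()
  for (const [kind, hints] of KIND_HINTS) {
    if (hints.some((h) => lower.includes(h))) return kind
  }
  return 'custom'
}

console.log(detectKind('db.users.findMany')) // db
console.log(detectKind('api.weather')) // http
```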
Standalone trace() helper
If you prefer an explicit import over the context method, use the standalone trace() function. It works whether analytics is enabled or not — when disabled, it calls the function directly with zero overhead:
```ts
import { trace } from 'silgi/analytics'

const listUsers = s.query().$resolve(async ({ ctx }) => {
  const users = await trace(ctx, 'db.users.findMany', () => db.users.findMany())
  const weather = await trace(ctx, 'api.weather', () => fetch(weatherUrl))
  return { users, weather }
})
```

This is useful when you want tracing code that compiles cleanly regardless of whether the handler has analytics turned on.
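One plausible way such a helper degrades gracefully (a sketch, not Silgi's actual implementation): call the context's trace method when present, otherwise invoke the function directly.

```ts
// Sketch of the zero-overhead fallback: with analytics off, the context has
// no trace method and the wrapped function runs directly.
interface MaybeTraced {
  trace?: (name: string, fn: () => unknown) => Promise<unknown>
}

async function trace<T>(ctx: MaybeTraced, name: string, fn: () => Promise<T> | T): Promise<T> {
  if (ctx.trace) return ctx.trace(name, fn) as Promise<T>
  return fn()
}
```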
Copy for AI
Both errors and requests support one-click copy for AI debugging:
- Errors: Full context with procedure path, error code, input, headers (authorization redacted), stack trace, traced spans, and total duration
- Requests: HTTP metadata, all procedure calls with input/output, span waterfall with timing breakdown
- Timing: Focused performance view with just the span timeline and timing categories
- Sessions: Full session summary with all requests and their procedures
The markdown format is optimized for pasting into Claude, ChatGPT, or any AI assistant. Each export ends with analysis prompts for performance optimization.
Protecting the dashboard
By default, the analytics dashboard is open to anyone who can reach the URL. In production you should always set the auth option.
Token auth
Pass a secret string. The dashboard will prompt for the token on first visit and store it in the browser's session storage.
```ts
s.serve(appRouter, {
  analytics: {
    auth: 'my-secret-token',
  },
})
```

The token is checked against the Authorization: Bearer <token> header or the silgi-auth cookie (set automatically by the login page).
Custom auth
Pass a function for full control — check cookies, IP allowlists, OAuth tokens, or anything else:
```ts
s.serve(appRouter, {
  analytics: {
    auth: (req) => {
      const cookie = req.headers.get('cookie') ?? ''
      return cookie.includes('admin_session=valid')
    },
  },
})
```

The function receives the raw Request and returns boolean or Promise<boolean>. Return true to allow access, false to block with a 401.
Never deploy analytics without auth in production. The dashboard exposes request payloads, headers, error stacks, and internal procedure names.
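If you want the token comparison itself to resist timing attacks, a custom auth function can wrap Node's crypto.timingSafeEqual. makeTokenAuth is a hypothetical helper, not part of Silgi:

```ts
import { timingSafeEqual } from 'node:crypto'

// Hedged example of a custom auth function that checks a bearer token in
// constant time (after the length check).
function makeTokenAuth(secret: string) {
  return (req: Request): boolean => {
    const header = req.headers.get('authorization') ?? ''
    const token = header.startsWith('Bearer ') ? header.slice(7) : ''
    const a = Buffer.from(token)
    const b = Buffer.from(secret)
    return a.length === b.length && timingSafeEqual(a, b)
  }
}
```

Pass the returned function as the auth option, exactly like the custom auth example above.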
Configuration
Pass an options object instead of true to customize buffer sizes and history:
```ts
s.serve(appRouter, {
  analytics: {
    auth: 'my-secret-token',
    bufferSize: 2048,
    historySeconds: 300,
    maxErrors: 200,
    maxRequests: 500,
  },
})
```

| Option | Type | Default | Description |
|---|---|---|---|
auth | string \| (req: Request) => boolean | undefined | Token string or custom auth function |
bufferSize | number | 1024 | Latency samples to keep per procedure (ring buffer) |
historySeconds | number | 120 | Time-series sparkline history in seconds |
maxErrors | number | 100 | Maximum error log entries to keep |
maxRequests | number | 200 | Maximum recent request entries to keep |
All buffers are fixed-size ring buffers — memory usage stays constant regardless of traffic volume.
With handler()
Works the same way when using s.handler() instead of s.serve():
```ts
const handler = s.handler(appRouter, { analytics: true })
Bun.serve({ port: 3000, fetch: handler })
// Dashboard at /analytics
```

JSON API
The dashboard reads from three JSON endpoints. You can also query them directly for custom integrations, alerting, or external dashboards:
| Endpoint | Description |
|---|---|
/analytics/_api/stats | Procedure metrics, latency percentiles, time-series |
/analytics/_api/errors | Full error log with input, headers, stack trace, spans |
/analytics/_api/requests | Recent requests with trace spans and session IDs |
All endpoints return JSON with cache-control: no-cache.
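For a simple alerting integration, poll the stats endpoint and test a threshold. Both helper names are hypothetical, and errorRate is an assumed field name for the snapshot's error-rate percentage, so adjust it to the actual response shape:

```ts
// shouldAlert is a pure threshold check; checkDashboard fetches the stats
// endpoint (with bearer auth, as described above) and applies it.
function shouldAlert(stats: { errorRate: number }, thresholdPct = 5): boolean {
  return stats.errorRate > thresholdPct
}

async function checkDashboard(base: string, token: string): Promise<boolean> {
  const res = await fetch(`${base}/analytics/_api/stats`, {
    headers: { authorization: `Bearer ${token}` },
  })
  return shouldAlert((await res.json()) as { errorRate: number })
}
```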
Stats response shape
```ts
interface AnalyticsSnapshot {
  uptime: number // seconds since start
  totalRequests: number
  totalErrors: number
  errorRate: number // percentage (0-100)
  requestsPerSecond: number
  avgLatency: number // milliseconds
  procedures: Record<
    string,
    {
      count: number
      errors: number
      rps: number
      latency: { avg: number; p50: number; p95: number; p99: number }
      lastError: string | null
      lastErrorAt: number | null
    }
  >
  timeSeries: Array<{ t: number; requests: number; errors: number }>
}
```

Error response shape
```ts
interface ErrorEntry {
  id: number
  requestId: string // links to the HTTP request that caused this error
  timestamp: number
  procedure: string
  name: string
  code: string // e.g. 'BAD_REQUEST', 'NOT_FOUND', 'INTERNAL_SERVER_ERROR'
  status: number
  message: string
  input: unknown
  headers: Record<string, string> // authorization redacted
  durationMs: number
  spans: TraceSpan[]
}
```

Request response shape
```ts
interface RequestEntry {
  timestamp: number
  id: string // unique ID (also in x-request-id response header)
  sessionId: string // persistent session ID (from cookie)
  status: number
  durationMs: number
  method: string
  path: string
  userAgent: string
  requestHeaders: Record<string, string>
  responseHeaders: Record<string, string>
  contentType: string
  bodySize: number
  procedures: ProcedureCall[]
  hasError: boolean
}

interface ProcedureCall {
  path: string
  startOffset: number
  durationMs: number
  input: unknown
  output: unknown
  spans: TraceSpan[]
  error?: string
}

interface TraceSpan {
  name: string
  kind: 'db' | 'http' | 'cache' | 'queue' | 'email' | 'ai' | 'custom'
  startOffset: number
  durationMs?: number
  detail?: string
  input?: unknown
  output?: unknown
  error?: string
}
```

Server-side markdown export
You can also generate the markdown programmatically for Slack notifications, log files, or custom alerting:
```ts
import { errorToMarkdown, requestToMarkdown, sessionToMarkdown } from 'silgi/analytics'

// Error context for AI debugging
const errorMd = errorToMarkdown(errorEntry)

// Full request with all procedures and spans
const requestMd = requestToMarkdown(requestEntry)

// Entire session with all requests
const sessionMd = sessionToMarkdown(sessionRequests, sessionId)
```

For production monitoring with alerting, long-term storage, and multi-service correlation, use OpenTelemetry with your preferred backend. Built-in analytics is designed for development and single-service visibility.
What's next?
- OpenTelemetry — distributed tracing for production
- Pino Logging — structured request logging
- Plugins — all available plugins