
Benchmarks

Real performance numbers — runtime throughput, type-checking speed, memory usage.

This page exists to help you make an informed decision — not to diminish any project. Every framework listed here is the result of incredible open-source work, and the TypeScript ecosystem is better because all of them exist. We've measured our own weaknesses alongside our strengths, and we encourage you to run these benchmarks yourself. If any number is wrong or unfair, let us know.

All benchmarks: Apple M3 Max, Node.js v24.11.0. HTTP throughput measured with bombardier (--fasthttp), each server in a separate process: three 10-second rounds with the median reported, after a 3-second warmup (discarded). Numbers are real — reproduce them on your hardware.

HTTP throughput — 64 concurrent connections

Simple POST endpoint returning JSON. Server and load generator run as separate processes.
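A representative invocation for the 64-connection run might look like the following — the port, path, and request body are placeholders, not the benchmark's actual values:

```sh
# 64 connections, one 10-second round, POST with a JSON body
bombardier --fasthttp -c 64 -d 10s -m POST \
  -H 'Content-Type: application/json' \
  -b '{"name":"silgi"}' \
  http://localhost:3000/hello
```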

| Framework | avg latency | p50 | p95 | p99 | req/s |
| --- | --- | --- | --- | --- | --- |
| Fastify | 1,161µs | 1,061µs | 1,450µs | 4,249µs | 55,074/s |
| Express | 1,387µs | 1,327µs | 1,690µs | 2,526µs | 46,125/s |
| Silgi | 1,523µs | 1,465µs | 1,805µs | 2,831µs | 42,001/s |
| Hono | 1,709µs | 1,571µs | 2,360µs | 3,202µs | 37,433/s |

HTTP throughput — 256 concurrent connections

| Framework | avg latency | p50 | p95 | p99 | req/s |
| --- | --- | --- | --- | --- | --- |
| Fastify | 4,839µs | 4,462µs | 7,685µs | 8,594µs | 52,866/s |
| Express | 5,585µs | 5,446µs | 6,427µs | 6,942µs | 45,813/s |
| Silgi | 6,408µs | 6,258µs | 7,378µs | 8,037µs | 39,920/s |
| Hono | 7,041µs | 6,721µs | 8,196µs | 9,154µs | 36,333/s |

Silgi and Hono use the Fetch API (Request/Response) while Fastify and Express use native Node.js req/res. This adapter overhead accounts for the throughput gap. Silgi consistently beats Hono, and the gap narrows as middleware complexity grows — see the RPC pipeline benchmarks below.

RPC pipeline (no HTTP)

Measures middleware pipeline overhead in isolation — no TCP, no serialization. Only applicable to RPC frameworks with a compiled pipeline.

| Scenario | Silgi | oRPC | H3 v2 |
| --- | --- | --- | --- |
| No middleware | 112 ns | 752 ns | 2,859 ns |
| Zod validation | 136 ns | 954 ns | 4,758 ns |
| 3 middleware + Zod | 179 ns | 1,773 ns | 4,152 ns |
| 5 middleware + Zod | 343 ns | 2,260 ns | 4,126 ns |

Hono, Elysia, Fastify, and Express are not included — they don't have an equivalent pipeline concept measurable in isolation. Their overhead is in the HTTP benchmarks above.

HTTP/1.1 — RPC over TCP (Node.js v24.11.0)

Full request/response latency — 3000 sequential requests per scenario.

| Scenario | Silgi | H3 v2 | oRPC |
| --- | --- | --- | --- |
| Simple (no mw, no validation) | 77µs (12,908/s) | 88µs (11,388/s) | 77µs (12,914/s) |
| Zod input validation | 89µs (11,182/s) | 96µs (10,396/s) | 116µs (8,597/s) |
| Guard + Zod validation | 83µs (12,046/s) | 94µs (10,647/s) | 114µs (8,737/s) |

Silgi wins when middleware complexity grows — compiled pipelines eliminate per-request overhead that accumulates in other frameworks.

WebSocket (persistent connection)

RPC over WebSocket — 2000 sequential messages on a persistent connection.

| Scenario | Silgi | oRPC | H3 v2 |
| --- | --- | --- | --- |
| Simple query (persistent conn) | 38µs | 39µs | 32µs |

All three frameworks are nearly identical on WebSocket — the persistent connection eliminates TCP handshake overhead, leaving only message serialization and pipeline execution.

Memory

Heap growth over 50K calls with 3 guards + Zod validation, measured under node --expose-gc.

| Framework | Bytes/call |
| --- | --- |
| Silgi | ~40 bytes |
| oRPC | ~56 bytes |

TypeScript type-checking — 3,000 procedures

Measures tsc --noEmit wall-clock time on a project with 500 routers × 6 procedures = 3,000 procedures plus 500 consumer files that exercise client types with explicit annotations. Uses Standard Schema instead of Zod to isolate framework type overhead.

| Framework | Instantiations | Check time | Total time | Memory |
| --- | --- | --- | --- | --- |
| Silgi | 385,509 | 0.91s | 1.36s | 413 MB |
| oRPC | 896,796 | 1.05s | 1.34s | 475 MB |
| tRPC | 1,260,273 | 1.88s | 2.15s | 541 MB |

Silgi produces 2.3× fewer type instantiations than oRPC and 3.3× fewer than tRPC. In practice this means faster editor responsiveness and shorter CI type-check runs as your router grows.

Run the typecheck benchmark yourself:

```sh
# Generate 3,000 procedures and measure (subshells keep the cwd stable)
(cd bench/typecheck-silgi && npm run bench)
(cd bench/typecheck-orpc  && npm run bench)
(cd bench/typecheck-trpc  && npm run bench)
```

Notes

  • Real-world APIs spend 95%+ of their time in DB queries; framework overhead matters most for high-throughput services where per-request cost adds up.
  • HTTP throughput benchmarks use bombardier with --fasthttp flag — same methodology as Hono and Elysia benchmarks.
  • Each framework runs as a separate process to avoid shared-resource bias.
  • Elysia is Bun-only — excluded from Node.js benchmarks.

Run benchmarks yourself:

```sh
node --experimental-strip-types bench/http-throughput.ts  # HTTP throughput (bombardier)
node --experimental-strip-types bench/http.ts             # HTTP latency (sequential)
pnpm bench                                                # Pipeline + RPC benchmarks
```

Choosing the right tool

Every framework here is well-built and actively maintained.

oRPC — broadest RPC ecosystem, Cloudflare Durable Objects, dedicated RPC protocol.

tRPC — most mature type-safe RPC, largest community, proven track record.

Hono — lightweight, runs everywhere: Node.js, Bun, Deno, Cloudflare Workers.

Elysia — best Bun performance, elegant TypeBox API.

Fastify — mature Node.js, vast plugin ecosystem.

Express — maximum middleware compatibility, widest community.

Silgi — compiled pipelines, single package, built-in analytics, Guard/Wrap model.

We respect every project in this space. If you find something wrong, open an issue.
