# Benchmarks
Real performance numbers — runtime throughput, type-checking speed, memory usage.
This page exists to help you make an informed decision — not to diminish any project. Every framework listed here is the result of incredible open-source work, and the TypeScript ecosystem is better because all of them exist. We've measured our own weaknesses alongside our strengths, and we encourage you to run these benchmarks yourself. If any number is wrong or unfair, let us know.
All benchmarks: Apple M3 Max, Node.js v24.11.0. HTTP throughput measured with bombardier (`--fasthttp`), each server in a separate process, 10s duration × 3 rounds (median reported), 3s warmup (discarded). Numbers are real — reproduce them on your hardware.
## HTTP throughput — 64 concurrent connections
Simple POST endpoint returning JSON. Server and load generator run as separate processes.
| Framework | avg latency | p50 | p95 | p99 | req/s |
|---|---|---|---|---|---|
| Fastify | 1,161µs | 1,061µs | 1,450µs | 4,249µs | 55,074/s |
| Express | 1,387µs | 1,327µs | 1,690µs | 2,526µs | 46,125/s |
| Silgi | 1,523µs | 1,465µs | 1,805µs | 2,831µs | 42,001/s |
| Hono | 1,709µs | 1,571µs | 2,360µs | 3,202µs | 37,433/s |
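The benchmarked workload is roughly the following shape — a stdlib sketch, not any framework's actual bench server (the real runs use each framework's own router, and the exact endpoint shape is an assumption): read a JSON POST body and echo JSON back.

```typescript
// Hypothetical stand-in for the benchmarked endpoint: parse a JSON POST
// body and respond with JSON. Real runs use each framework's own router.
import { createServer } from "node:http";

const server = createServer(async (req, res) => {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const body = chunks.length ? JSON.parse(Buffer.concat(chunks).toString()) : {};
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ ok: true, received: body }));
});

server.listen(0).unref(); // port 0 = any free port; unref lets the process exit
```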
## HTTP throughput — 256 concurrent connections
| Framework | avg latency | p50 | p95 | p99 | req/s |
|---|---|---|---|---|---|
| Fastify | 4,839µs | 4,462µs | 7,685µs | 8,594µs | 52,866/s |
| Express | 5,585µs | 5,446µs | 6,427µs | 6,942µs | 45,813/s |
| Silgi | 6,408µs | 6,258µs | 7,378µs | 8,037µs | 39,920/s |
| Hono | 7,041µs | 6,721µs | 8,196µs | 9,154µs | 36,333/s |
Silgi and Hono use the Fetch API (Request/Response) while Fastify and Express use native Node.js req/res. This adapter overhead accounts for the throughput gap. Silgi consistently beats Hono, and the gap narrows as middleware complexity grows — see the RPC pipeline benchmarks below.
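The adapter overhead is essentially this conversion — a sketch of the general technique, not Silgi's or Hono's actual adapter code: every native request must be copied into a `Request`, and every `Response` copied back out.

```typescript
// Illustrative adapter (an assumption, not any framework's real code):
// bridge Node's native req/res to a Fetch-API handler. The extra Request
// and Response objects allocated per call are the overhead discussed above.
import { createServer, type IncomingMessage, type ServerResponse } from "node:http";

// A Fetch-style handler, the shape Silgi and Hono apps are written in.
const fetchHandler = async (req: Request): Promise<Response> =>
  Response.json({ echoed: await req.json() });

async function adapt(req: IncomingMessage, res: ServerResponse): Promise<void> {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer); // buffer the body
  const headers = new Headers();
  for (const [k, v] of Object.entries(req.headers)) {
    if (typeof v === "string") headers.set(k, v); // multi-value headers omitted for brevity
  }
  const request = new Request(`http://${req.headers.host ?? "localhost"}${req.url}`, {
    method: req.method,
    headers,
    body: chunks.length ? Buffer.concat(chunks) : undefined,
  });
  const response = await fetchHandler(request); // run the Fetch-style app
  res.writeHead(response.status, Object.fromEntries(response.headers));
  res.end(Buffer.from(await response.arrayBuffer())); // copy the body back
}

createServer(adapt).listen(0).unref();
```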
## RPC pipeline (no HTTP)
Measures middleware pipeline overhead in isolation — no TCP, no serialization. Only applicable to RPC frameworks with a compiled pipeline.
| Scenario | Silgi | oRPC | H3 v2 |
|---|---|---|---|
| No middleware | 112 ns | 752 ns | 2,859 ns |
| Zod validation | 136 ns | 954 ns | 4,758 ns |
| 3 middleware + Zod | 179 ns | 1,773 ns | 4,152 ns |
| 5 middleware + Zod | 343 ns | 2,260 ns | 4,126 ns |
Hono, Elysia, Fastify, and Express are not included — they don't have an equivalent pipeline concept measurable in isolation. Their overhead is in the HTTP benchmarks above.
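The "compiled pipeline" idea can be sketched in a few lines (an assumption about the general technique, not Silgi's actual implementation; real pipelines are async — this is synchronous for brevity): middleware is folded into a single function once at startup, so each call pays no per-request array iteration or closure allocation.

```typescript
// Compile-once middleware pipeline: compose at startup, call a plain
// function per request. A sketch of the technique, not Silgi's source.
type Middleware<C> = (ctx: C, next: (ctx: C) => C) => C;

function compile<C>(middlewares: Middleware<C>[], handler: (ctx: C) => C): (ctx: C) => C {
  // Fold right-to-left so the first middleware in the array runs first.
  return middlewares.reduceRight<(ctx: C) => C>(
    (next, mw) => (ctx) => mw(ctx, next),
    handler,
  );
}

// Usage: two trivial middleware plus a handler, fused into one function.
const pipeline = compile<number>(
  [(n, next) => next(n + 1), (n, next) => next(n * 2)],
  (n) => n,
);
console.log(pipeline(5)); // → 12, i.e. (5 + 1) * 2
```

Because composition happens once, the hot path is a single pre-built call chain rather than a per-request loop over a middleware array.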
## HTTP/1.1 — RPC over TCP (Node.js v24.11.0)
Full request/response latency — 3000 sequential requests per scenario.
| Scenario | Silgi | H3 v2 | oRPC |
|---|---|---|---|
| Simple (no mw, no validation) | 77µs (12,908/s) | 88µs (11,388/s) | 77µs (12,914/s) |
| Zod input validation | 89µs (11,182/s) | 96µs (10,396/s) | 116µs (8,597/s) |
| Guard + Zod validation | 83µs (12,046/s) | 94µs (10,647/s) | 114µs (8,737/s) |
Silgi wins when middleware complexity grows — compiled pipelines eliminate per-request overhead that accumulates in other frameworks.
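The sequential methodology can be sketched as follows (an assumed shape — `benchSequential` is a hypothetical helper, not the repo's actual bench script): time N back-to-back requests and report average latency in microseconds.

```typescript
// Time N sequential POST requests against a running server and report
// the average latency in µs. A sketch of the methodology described above.
async function benchSequential(url: string, n: number): Promise<number> {
  const payload = JSON.stringify({ hello: "world" });
  const start = process.hrtime.bigint();
  for (let i = 0; i < n; i++) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: payload,
    });
    await res.arrayBuffer(); // drain the body so the connection can be reused
  }
  const elapsedNs = process.hrtime.bigint() - start;
  return Number(elapsedNs / BigInt(n)) / 1_000; // average µs per request
}
```

Sequential round trips isolate per-request latency; the concurrent bombardier runs above measure throughput under load instead.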
## WebSocket (persistent connection)
RPC over WebSocket — 2000 sequential messages on a persistent connection.
| Scenario | Silgi | oRPC | H3 v2 |
|---|---|---|---|
| Simple query (persistent conn) | 38µs | 39µs | 32µs |
All three frameworks are nearly identical on WebSocket — the persistent connection eliminates TCP handshake overhead, leaving only message serialization and pipeline execution.
## Memory

50K calls through 3 guards + Zod validation, heap growth measured with `--expose-gc`.
| Framework | Bytes/call |
|---|---|
| Silgi | ~40 bytes |
| oRPC | ~56 bytes |
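The bytes-per-call figure can be reproduced with a loop like this (a sketch of the stated methodology, not the repo's bench code; run with `node --expose-gc` so `gc` is available):

```typescript
// Measure average heap growth per call. Requires node --expose-gc so that
// globalThis.gc exists; without the flag the GC calls are silently skipped.
function bytesPerCall(fn: () => void, n: number): number {
  const gc = (globalThis as { gc?: () => void }).gc;
  gc?.(); // settle the heap before sampling
  const before = process.memoryUsage().heapUsed;
  for (let i = 0; i < n; i++) fn();
  gc?.(); // collect short-lived garbage so only retained allocations count
  const after = process.memoryUsage().heapUsed;
  return (after - before) / n;
}
```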
## TypeScript type-checking — 3,000 procedures
Measures `tsc --noEmit` wall-clock time on a project with 500 routers × 6 procedures = 3,000 procedures, plus 500 consumer files that exercise client types with explicit annotations. Uses Standard Schema instead of Zod to isolate framework type overhead.
| Framework | Instantiations | Check time | Total time | Memory |
|---|---|---|---|---|
| Silgi | 385,509 | 0.91s | 1.36s | 413 MB |
| oRPC | 896,796 | 1.05s | 1.34s | 475 MB |
| tRPC | 1,260,273 | 1.88s | 2.15s | 541 MB |
Silgi produces 2.3× fewer type instantiations than oRPC and 3.3× fewer than tRPC. In practice this means faster editor responsiveness and shorter CI type-check runs as your router grows.
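A Standard Schema validator can be hand-rolled in a few lines (the interface below is simplified from the Standard Schema v1 spec's `~standard` shape; the benchmark's actual schemas and the `"bench"` vendor name are illustrative assumptions). Flat schemas like this keep type instantiations low compared to a deeply generic Zod chain, which is why the benchmark uses them.

```typescript
// Simplified Standard Schema v1 shape: a `~standard` property carrying a
// version, a vendor string, and a validate function. Illustrative only.
interface StandardSchema<T> {
  "~standard": {
    version: 1;
    vendor: string;
    validate: (value: unknown) => { value: T } | { issues: { message: string }[] };
  };
}

const nonEmptyString: StandardSchema<string> = {
  "~standard": {
    version: 1,
    vendor: "bench", // illustrative vendor name
    validate: (value) =>
      typeof value === "string" && value.length > 0
        ? { value }
        : { issues: [{ message: "expected a non-empty string" }] },
  },
};

console.log(nonEmptyString["~standard"].validate("ok")); // → { value: "ok" }
```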
Run the typecheck benchmark yourself:

```sh
# Generate 3,000 procedures and measure
cd bench/typecheck-silgi && npm run bench
cd bench/typecheck-orpc && npm run bench
cd bench/typecheck-trpc && npm run bench
```

## Notes
- Real-world APIs spend 95%+ of their time in DB queries; framework overhead matters most for high-throughput services.
- HTTP throughput benchmarks use bombardier with the `--fasthttp` flag — same methodology as the Hono and Elysia benchmarks.
- Each framework runs as a separate process to avoid shared-resource bias.
- Elysia is Bun-only — excluded from Node.js benchmarks.
Run benchmarks yourself:

```sh
node --experimental-strip-types bench/http-throughput.ts   # HTTP throughput (bombardier)
node --experimental-strip-types bench/http.ts              # HTTP latency (sequential)
pnpm bench                                                 # Pipeline + RPC benchmarks
```

## Choosing the right tool
Every framework here is well-built and actively maintained.
- oRPC — broadest RPC ecosystem, Cloudflare Durable Objects support, dedicated RPC protocol.
- tRPC — most mature type-safe RPC, largest community, proven track record.
- Hono — lightweight, runs everywhere: Node.js, Bun, Deno, Cloudflare Workers.
- Elysia — best Bun performance, elegant TypeBox API.
- Fastify — mature Node.js server, vast plugin ecosystem.
- Express — maximum middleware compatibility, widest community.
- Silgi — compiled pipelines, single package, built-in analytics, Guard/Wrap model.
We respect every project in this space. If you find something wrong, open an issue.