# Comparison
How Silgi compares to oRPC, tRPC, ts-rest, and Hono — features, architecture, and benchmarks.
## Feature matrix
| Feature | Silgi | oRPC | tRPC | ts-rest |
|---|---|---|---|---|
| End-to-end type-safe I/O | Yes | Yes | Yes | Yes |
| End-to-end type-safe errors | Yes | Yes | Partial | Yes |
| End-to-end type-safe File/Blob | Yes | Yes | Partial | No |
| End-to-end type-safe streaming | Yes | Yes | Yes | No |
| Single package | 1 package | 35 packages | 4+ packages | 3+ packages |
| Middleware model | Guard + Wrap | Middleware chain | Middleware chain | — |
| Compiled pipeline | Yes | No | No | No |
| Content negotiation | Automatic | RPC protocol | No | No |
| JSON protocol | Yes | Yes | Yes | Yes |
| MessagePack (binary) | Built-in | Via RPC protocol | Via superjson link | No |
| devalue (rich types) | Built-in | Native types via RPC | Via superjson | No |
| WebSocket RPC | Yes | Yes | Yes | No |
| Contract-first | Yes | Yes | No | Yes |
| Standard Schema (Zod, Valibot, ArkType) | Yes | Yes | Yes | No |
| OpenAPI generation | Built-in (Scalar) | Plugin | Via trpc-openapi | Built-in |
| SSE / Streaming | Yes | Yes | Yes | No |
| Server Actions (React) | Built-in | Built-in | Yes | No |
| TanStack Query (React) | Built-in | Built-in | Built-in | Partial |
| TanStack Query (Vue) | Yes | Yes | No | Partial |
| TanStack Query (Solid) | Yes | Yes | No | Partial |
| TanStack Query (Svelte) | Yes | Yes | No | No |
| AI SDK integration | Built-in | Built-in | No | No |
| Batch requests | Yes | Yes | Yes | No |
| Lazy routing | Yes | Yes | Yes | No |
| NestJS integration | Yes | Yes | Partial | Yes |
| Message Port (Electron, Workers) | Yes | Yes | Partial | No |
| CF WebSocket Hibernation | No | Yes | No | No |
| Framework adapters | 15 | 17+ | 4+ | 2 |
| Typed guard errors (auto-merge) | Yes | No | No | No |
| Response caching (SWR, pluggable) | Built-in | No | No | No |
| Offline API docs (no CDN) | Yes | No | No | No |
| HTTP Cache-Control headers | Built-in | No | No | No |
| Built-in plugins | 16 | 14+ | — | — |
## Architecture differences

### Single package vs 35 packages

Silgi ships everything as a single `npm install silgi` with subpath imports:

```ts
import { } from 'silgi'
import { } from 'silgi/client'
import { } from 'silgi/hono'
import { } from 'silgi/otel'
```

oRPC requires separate installs: `@orpc/server`, `@orpc/client`, `@orpc/react-query`, `@orpc/openapi`, `@orpc/zod`, etc. Both approaches work: a single package is simpler to manage, while monorepo packages allow finer-grained dependency control.
### Guard / Wrap vs middleware chain

Most RPC libraries have one middleware type. Silgi has two:

- Guard — runs before the handler and enriches the context. No `next()`. Sync fast-path.
- Wrap — runs before AND after the handler (onion model). Has `next()`. For timing, caching, error capture.

```
oRPC:  middleware → middleware → middleware → handler
Silgi: guard → guard → guard → [wrap → [wrap → handler]]
```

Guards are semantically clearer and faster (no async overhead when sync). oRPC's single middleware type is more flexible.
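The two shapes can be sketched in plain TypeScript. Note this is an illustration of the guard/wrap distinction using made-up types and names, not Silgi's actual API:

```typescript
// Hypothetical shapes illustrating guards vs wraps; not Silgi's API.
type Ctx = Record<string, unknown>;

// A guard runs before the handler and enriches the context.
// It never receives next(), so it can stay fully synchronous.
type Guard = (ctx: Ctx) => Ctx;

// A wrap receives next() and can run code before AND after it (onion model).
type Wrap = (ctx: Ctx, next: (ctx: Ctx) => string) => string;

const authGuard: Guard = (ctx) => ({ ...ctx, user: "alice" });

const timingWrap: Wrap = (ctx, next) => {
  const start = Date.now();   // before the handler
  const result = next(ctx);   // the handler (or an inner wrap)
  void (Date.now() - start);  // after: e.g. record elapsed time
  return result;
};

const handler = (ctx: Ctx) => `hello ${String(ctx.user)}`;

// guard -> [wrap -> handler]: guards run as a flat sync sequence,
// wraps nest around the handler.
const ctx = [authGuard].reduce((c, g) => g(c), {} as Ctx);
const out = timingWrap(ctx, handler); // "hello alice"
```

Because a guard never awaits `next()`, a guard sequence can execute as plain synchronous calls, which is what makes a sync fast-path possible.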
### Compiled pipeline vs runtime dispatch
Silgi compiles the middleware chain once at startup:
- Guards are unrolled (0-4 specialization, no loop)
- Context pool eliminates per-request allocation
- Handler analysis skips unused features
oRPC evaluates the chain at runtime per request. More flexible for dynamic middleware, but Silgi is faster for static pipelines.
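The distinction can be sketched as follows (a simplified illustration of startup-time specialization, not Silgi's actual compiler):

```typescript
// Illustrative only; not Silgi's actual compiler.
type Ctx = Record<string, unknown>;
type Guard = (ctx: Ctx) => void;

// Runtime dispatch: iterate the guard array on every request.
function runDynamic(guards: Guard[], ctx: Ctx): void {
  for (const g of guards) g(ctx);
}

// Compiled: specialize once at startup for the exact guard count,
// so the per-request hot path is direct calls with no loop.
function compile(guards: Guard[]): (ctx: Ctx) => void {
  switch (guards.length) {
    case 0: return () => {};
    case 1: { const [a] = guards; return (ctx) => a(ctx); }
    case 2: { const [a, b] = guards; return (ctx) => { a(ctx); b(ctx); }; }
    default: return (ctx) => runDynamic(guards, ctx); // fall back to a loop
  }
}

const pipeline = compile([(c) => { c.a = 1; }, (c) => { c.b = 2; }]); // startup
const ctx: Ctx = {};
pipeline(ctx); // per-request: two direct calls, no array iteration
```

The trade-off is exactly the one described above: the specialized function cannot change after startup, but it avoids loop and iterator overhead on every request.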
### Content negotiation vs RPC protocol
Silgi inspects Accept header and responds in the matching format (JSON, MessagePack, or devalue). Standard HTTP, works with any client.
oRPC uses its own RPC protocol that natively handles Date, File, Blob, BigInt without negotiation. Simpler for oRPC-to-oRPC, less interoperable with non-oRPC consumers.
tRPC uses JSON by default, SuperJSON via transformer for rich types.
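A minimal sketch of what Accept-header dispatch looks like. This is simplified (no q-value parsing), and the media-type strings are placeholders rather than Silgi's actual wire format:

```typescript
// Simplified sketch of Accept-header dispatch; the media types matched
// here are illustrative placeholders, not Silgi's actual identifiers.
function pickFormat(accept: string | null): "msgpack" | "devalue" | "json" {
  if (!accept) return "json"; // no preference: default to plain JSON
  if (accept.includes("msgpack")) return "msgpack";
  if (accept.includes("devalue")) return "devalue";
  return "json"; // standard HTTP fallback, works with any client
}

// A non-Silgi consumer that never sets Accept simply gets JSON back.
const format = pickFormat("application/x-msgpack"); // "msgpack"
```

This is the interoperability argument in miniature: clients that know nothing about the richer formats still get valid JSON over plain HTTP.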
## Benchmarks
All benchmarks run on the same machine (Apple M3 Max, Node v24.11.0). Sequential requests to isolate per-request latency.
### Pipeline performance (no HTTP, pure execution)
Measures raw middleware pipeline overhead — how fast the framework processes a call after TCP/HTTP is stripped away.
| Scenario | Silgi | oRPC | H3 v2 | vs oRPC | vs H3 |
|---|---|---|---|---|---|
| No middleware | 111 ns | 685 ns | 2,025 ns | 6.2x faster | 18.2x faster |
| Zod input validation | 241 ns | 804 ns | 4,214 ns | 3.3x faster | 17.5x faster |
| 3 middleware + Zod | 297 ns | 1,718 ns | 3,954 ns | 5.8x faster | 13.3x faster |
| 5 middleware + Zod | 413 ns | 2,219 ns | 3,917 ns | 5.4x faster | 9.5x faster |
Silgi's compiled pipeline (unrolled guards, context pool) is 3-6x faster than oRPC and 9-18x faster than H3 at the pipeline level.
### HTTP performance (Silgi vs oRPC vs H3 vs Hono)
Real HTTP servers, real TCP connections, 3000 sequential requests per scenario.
| Scenario | Silgi | oRPC | H3 v2 | Hono |
|---|---|---|---|---|
| Simple (no middleware) | 79µs (12,592/s) | 83µs (12,048/s) | 78µs (12,753/s) | 74µs (13,516/s) |
| Zod validation | 86µs (11,627/s) | 120µs (8,315/s) | 93µs (10,707/s) | 97µs (10,280/s) |
| Guard + Zod | 79µs (12,706/s) | 116µs (8,625/s) | 96µs (10,359/s) | 102µs (9,799/s) |
| Comparison | Simple | Zod | Guard + Zod |
|---|---|---|---|
| Silgi vs oRPC | ~tied | 1.4x faster | 1.5x faster |
| Silgi vs H3 | ~tied | 1.1x faster | 1.2x faster |
| Silgi vs Hono | 0.9x | 1.1x faster | 1.3x faster |
For simple routes, all four are within 10% — TCP overhead dominates. As middleware complexity grows, Silgi's compiled pipeline pulls ahead: 1.5x faster than oRPC and 1.2-1.3x faster than H3/Hono with guards + validation.
### Router performance (Silgi compiled vs rou3)

JIT-compiled radix tree router vs rou3's compiled router.

| Scenario | Silgi | rou3 | vs rou3 |
|---|---|---|---|
| Static `/users/list` | 3.1 ns | 3.9 ns | 1.23x faster |
| Param `/users/123` | 22.0 ns | 25.6 ns | 1.16x faster |
| Deep `/users/1/posts/2` | 23.7 ns | 26.6 ns | 1.12x faster |
| Wildcard `/files/a/b/c` | 19.2 ns | 91.3 ns | 4.75x faster |
| Miss `/missing/deep` | 4.5 ns | 22.3 ns | 4.96x faster |
Silgi's router uses an `indexOf` fast path for simple branches (avoiding the array allocation of `split()`), switch-based `charCodeAt` dispatch, and compile-time `substring()` for wildcards.
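The first two tricks can be sketched in isolation (illustrative code, not Silgi's actual router):

```typescript
// Illustrative sketch of the router fast paths; not Silgi's actual code.

// indexOf fast path: scan to the next "/" instead of allocating an
// array with path.split("/").
function firstSegment(path: string): string {
  const next = path.indexOf("/", 1);
  return next === -1 ? path.slice(1) : path.slice(1, next);
}

// charCodeAt dispatch: branch between static children on the first
// character code instead of comparing whole strings.
function dispatch(path: string): string {
  switch (path.charCodeAt(1)) {
    case 117: return "users-branch"; // "u"
    case 102: return "files-branch"; // "f"
    default: return "miss";
  }
}
```

For example, `firstSegment("/users/123")` yields `"users"` without creating the `["", "users", "123"]` array that `split("/")` would allocate on every lookup.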
### Tail latency (p99)
| Scenario (Guard + Zod) | avg | p50 | p99 |
|---|---|---|---|
| Silgi | 79µs | 70µs | 148µs |
| oRPC | 116µs | 103µs | 223µs |
| H3 v2 | 96µs | 82µs | 236µs |
| Hono | 102µs | 87µs | 194µs |
Silgi's p99 is 34% lower than oRPC and 37% lower than H3 — compiled pipelines keep tail latency predictable.
### Memory usage

50K calls, 3 guards + Zod validation, measured with `--expose-gc`:
| Framework | Bytes per call | Ratio |
|---|---|---|
| Silgi | ~40 bytes | 1x |
| oRPC | ~56 bytes | 1.4x more |
### WebSocket performance

2,000 sequential messages over a persistent connection:
| Scenario | Silgi | oRPC | H3 v2 |
|---|---|---|---|
| Simple query | 39µs | 42µs | 34µs |
WebSocket eliminates TCP handshake overhead. All three are close — H3 is slightly faster here.
### What the benchmarks don't show
- Real-world APIs spend 95%+ of time in DB queries and business logic. Pipeline overhead matters most for high-throughput, low-latency services.
- All benchmarks are sequential (1 connection). Concurrent load may show different characteristics.
- Reproduce the router benchmark with `node --experimental-strip-types bench/router.ts`.

## Where oRPC is ahead
Being honest about oRPC's strengths:
- Ecosystem breadth — 35 packages with more niche integrations (React SWR, Vue Pinia Colada, Hey API, Sentry)
- Cloudflare Durable Objects — Hibernation + Durable Iterator support
- RPC protocol — Native type serialization without header negotiation
- Community size — Larger user base means more battle-tested edge cases
- Angular support — TanStack Query for Angular
## Where Silgi is ahead

- Single package — one install, no version coordination, no transitive dependency issues
- Compiled pipeline — measurably faster, lower p99 latency
- Guard / Wrap model — clearer separation of concerns
- Typed guard errors — guards declare errors that auto-merge into procedures and the OpenAPI spec
- Response caching — `cacheQuery()` with TTL, SWR, dedup, and pluggable backends via unstorage
- Rich type serialization — Set, Map, Date via devalue codec without extra plugins
- Content negotiation — standard HTTP, interoperable with any client
- Built-in Scalar — `scalar: true` in `serve()` for API docs, with offline mode (`cdn: 'local'`)
- HTTP Cache-Control — `route: { cache: 60 }` for CDN/browser caching
- HTTP/2 support — one-flag TLS setup
- Auto port selection — no `EADDRINUSE` handling needed
## What's next?
- Getting Started — build your first Silgi API
- Examples — download and run in one command
- Migrating from tRPC — step-by-step guide