Production-grade Server-Sent Events (SSE) for Fiber v3
React SDK available: `npm install fibersse-react` – hooks for TanStack Query / SWR cache invalidation.

Blog: How We Eliminated 90% of API Calls by Replacing Polling with SSE
Stop polling. Start pushing. The only SSE library built natively for Fiber v3, with built-in cache invalidation, event coalescing, and one-line domain event publishing.
Replace setInterval with one line of Go:
// Before: client polls every 30 seconds (wasteful)
// setInterval(() => fetch("/api/orders"), 30_000)
// After: server pushes when data ACTUALLY changes
hub.Invalidate("orders", order.ID, "created") // client refetches instantly

80-90% fewer API calls. Real-time UI. Zero polling.
Every Go SSE library (r3labs/sse, tmaxmax/go-sse) is built on net/http and breaks on Fiber: fasthttp.RequestCtx.Done() only fires on server shutdown, not on per-client disconnect, so zombie subscribers leak forever. fibersse instead uses Fiber's native SendStreamWriter with w.Flush() error detection.
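The flush-based detection idea is easy to demonstrate in isolation. The sketch below is not fibersse's actual code: it uses `net.Pipe` as a stand-in for a client connection and shows that writes flush cleanly while the peer is reading, and the first `Flush()` after the peer hangs up returns an error.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// writeEvent writes one SSE frame and flushes. A flush error is the
// signal that the client has disconnected.
func writeEvent(w *bufio.Writer, data string) error {
	fmt.Fprintf(w, "data: %s\n\n", data)
	return w.Flush()
}

// demo simulates a client that reads one event and then hangs up.
func demo() (first, second error) {
	server, client := net.Pipe()
	go func() {
		buf := make([]byte, 64)
		client.Read(buf) // consume the first event
		client.Close()   // then disconnect
	}()
	first = writeEvent(bufio.NewWriter(server), "hello")  // delivered
	second = writeEvent(bufio.NewWriter(server), "world") // flush fails: client gone
	return first, second
}

func main() {
	first, second := demo()
	fmt.Println("first:", first)
	fmt.Println("second:", second)
}
```

The same loop over a real connection gives per-client disconnect detection without relying on `RequestCtx.Done()`.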
Most SSE libraries just push events. fibersse has built-in patterns for replacing polling:
| API | What It Does | Replaces |
|---|---|---|
| `hub.Invalidate()` | Signal clients to refetch a resource | setInterval polling |
| `hub.InvalidateForTenant()` | Tenant-scoped invalidation (multi-tenant SaaS) | Tenant polling |
| `hub.InvalidateForTenantWithHint()` | Tenant-scoped + data hints in one call | Polling + extra fetch |
| `hub.DomainEvent()` | Structured event from any handler/worker | Manual event wiring |
| `hub.BatchDomainEvents()` | Multiple resource changes in one SSE frame | Multiple polling loops |
| `hub.Progress()` | Coalesced progress (5%→8% sends only 8%) | 2s progress polling |
| `hub.Complete()` | Operation done signal (instant delivery) | Completion polling |
| `hub.Signal()` / `SignalForTenant()` | Generic "something changed" refresh | Dashboard polling |
| Feature | r3labs/sse | tmaxmax/go-sse | fibersse |
|---|---|---|---|
| Fiber v3 native | No | No | Yes |
| Disconnect detection | Broken on Fiber | Broken on Fiber | Works (flush-based) |
| Event coalescing | No | No | Yes (last-writer-wins) |
| Priority lanes | No | No | Yes (P0 instant / P1 batched / P2 coalesced) |
| Topic wildcards | No | No | Yes (NATS-style * and >) |
| Adaptive throttling | No | No | Yes (buffer-depth AIMD) |
| Connection groups | No | No | Yes (publish by metadata) |
| Backpressure | Blocks sender | Blocks sender | Drops + reconnect hint |
| Built-in auth | No | No | Yes (JWT + ticket helpers) |
| Prometheus metrics | No | No | Yes |
| Graceful drain | No | No | Yes (Kubernetes-style) |
| Event TTL | No | No | Yes |
| Last-Event-ID replay | Yes | Yes | Yes (pluggable) |
| Fan-out middleware | No | No | Yes (Redis/NATS bridge) |
Fiber's official SSE recipe is ~50 lines of raw SendStreamWriter code. It's a great starting point, but it's a recipe (copy-paste example), not a library. Here's what fibersse adds:
| Feature | Fiber Recipe | fibersse |
|---|---|---|
| Hub pattern (managed connections) | No | Yes |
| Topic routing | No | Yes |
| NATS-style wildcard topics (`*`, `>`) | No | Yes |
| Event coalescing (P0/P1/P2 priorities) | No | Yes |
| Authentication (JWT + ticket) | No | Yes |
| Last-Event-ID replay | No | Yes |
| Heartbeat management | No | Yes (adaptive) |
| Connection tracking + groups | No | Yes |
| Prometheus metrics | No | Yes |
| Graceful Kubernetes-style drain | No | Yes |
| Cache invalidation helpers | No | Yes |
| Multi-tenant support | No | Yes |
| Domain event publishing | No | Yes |
| Progress tracking (coalesced) | No | Yes |
| Auto fan-out from Redis/NATS | No | Yes |
| Visibility hints (paused tabs) | No | Yes |
| Adaptive per-connection throttling | No | Yes |
| React SDK (`fibersse-react`) | No | Yes |
The recipe is perfect if you need to push a single event to a single client. fibersse is for production apps that need topic routing, multi-tenancy, auth, coalescing, and monitoring.
go get github.com/vinod-morya/fibersse@latest

Requirements: Go 1.23+ and Fiber v3.
package main
import (
"time"
"github.com/gofiber/fiber/v3"
"github.com/vinod-morya/fibersse"
)
func main() {
app := fiber.New()
// Create the SSE hub
hub := fibersse.New(fibersse.HubConfig{
FlushInterval: 2 * time.Second,
HeartbeatInterval: 30 * time.Second,
OnConnect: func(c fiber.Ctx, conn *fibersse.Connection) error {
// Authenticate and set topics
conn.Topics = []string{"notifications", "live"}
conn.Metadata["user_id"] = "user_123"
return nil
},
})
// Mount the SSE endpoint
app.Get("/events", hub.Handler())
// Publish events from anywhere in your app
go func() {
for i := 0; ; i++ {
hub.Publish(fibersse.Event{
Type: "heartbeat",
Data: map[string]int{"count": i},
Topics: []string{"live"},
})
time.Sleep(5 * time.Second)
}
}()
app.Listen(":3000")
}

Client (browser):
const es = new EventSource('/events');
es.addEventListener('heartbeat', (e) => {
console.log(JSON.parse(e.data)); // { count: 0 }
});
es.addEventListener('notification', (e) => {
showToast(JSON.parse(e.data));
});

Backend – publish when data changes:
// In your order handler
func (h *OrderHandler) Create(c fiber.Ctx) error {
order, err := h.svc.Create(...)
if err != nil { return err }
// One line β replaces 30s polling for ALL connected clients
hub.InvalidateForTenant(tenantID, "orders", order.ID, "created")
return c.JSON(order)
}

Frontend – listen and refetch:
// With TanStack Query (React Query)
const es = new EventSource('/events?topics=orders');
es.addEventListener('invalidate', (e) => {
const { resource } = JSON.parse(e.data);
queryClient.invalidateQueries({ queryKey: [resource] });
});
// With SWR
es.addEventListener('invalidate', (e) => {
const { resource } = JSON.parse(e.data);
mutate(`/api/${resource}`);
});

// Backend: in your import worker
for i, row := range rows {
processRow(row)
hub.Progress("import", importID, tenantID, i+1, len(rows))
// Fires 1000 times but client receives ~10 updates (coalesced!)
}
hub.Complete("import", importID, tenantID, true, nil)

// Frontend
es.addEventListener('progress', (e) => {
const { pct } = JSON.parse(e.data);
setProgressBar(pct); // Smooth updates, no polling
});
es.addEventListener('complete', (e) => {
showToast("Import complete!");
queryClient.invalidateQueries({ queryKey: ['products'] });
});

// Backend: after ANY mutation that affects the dashboard
hub.SignalForTenant(tenantID, "dashboard") // coalesced, won't flood
// Or with hints:
hub.InvalidateWithHint("orders", orderID, "created", map[string]any{
"total": 149.99,
"customer": "John Doe",
})

| Metric | Before (Polling) | After (SSE) |
|---|---|---|
| API calls per user/minute | ~12 (6 pages × 30s) | ~0-2 (only when data changes) |
| Time to see new data | 0-30 seconds | < 200ms |
| Server load | Constant (even idle users poll) | Proportional to actual changes |
| Battery drain (mobile) | High (constant network) | Minimal (idle connection) |
Three priority lanes control how events reach clients:
// P0: INSTANT β bypasses all buffering, sent immediately
// Use for: notifications, errors, chat messages, auth revocations
hub.Publish(fibersse.Event{
Type: "notification",
Data: map[string]string{"title": "New order!"},
Topics: []string{"notifications"},
Priority: fibersse.PriorityInstant,
})
// P1: BATCHED β collected in a time window, all sent together
// Use for: status updates, media processing
hub.Publish(fibersse.Event{
Type: "media_status",
Data: map[string]string{"id": "m_1", "status": "ready"},
Topics: []string{"media"},
Priority: fibersse.PriorityBatched,
})
// P2: COALESCED β last-writer-wins per key
// If progress goes 5% → 6% → 7% → 8% in 2 seconds, client receives only 8%
hub.Publish(fibersse.Event{
Type: "progress",
Data: map[string]int{"pct": 8},
Topics: []string{"tasks"},
Priority: fibersse.PriorityCoalesced,
CoalesceKey: "task:abc123",
})

Subscribe to topic patterns using `*` (one segment) and `>` (one or more trailing segments):
// Client subscribes to "analytics.*"
conn.Topics = []string{"analytics.*"}
// These events all match:
hub.Publish(fibersse.Event{Topics: []string{"analytics.live"}}) // matched by *
hub.Publish(fibersse.Event{Topics: []string{"analytics.revenue"}}) // matched by *
// Subscribe to everything under analytics:
conn.Topics = []string{"analytics.>"}
// Now these also match:
hub.Publish(fibersse.Event{Topics: []string{"analytics.live.visitors"}}) // matched by >
hub.Publish(fibersse.Event{Topics: []string{"analytics.funnel.checkout"}}) // matched by >

Publish to connections by metadata instead of topics – perfect for multi-tenant SaaS:
// During OnConnect, set metadata:
conn.Metadata["tenant_id"] = "t_123"
conn.Metadata["plan"] = "pro"
// Publish to ALL connections for a specific tenant:
hub.Publish(fibersse.Event{
Type: "tenant_update",
Data: map[string]string{"message": "Plan upgraded"},
Group: map[string]string{"tenant_id": "t_123"},
})
// Publish to all pro-plan users:
hub.Publish(fibersse.Event{
Type: "feature_announcement",
Data: "New feature available!",
Group: map[string]string{"plan": "pro"},
})

The hub automatically adjusts flush intervals per connection based on buffer saturation:
| Buffer Saturation | Effective Interval | Behavior |
|---|---|---|
| < 10% (healthy) | FlushInterval / 4 | Fast delivery |
| 10-50% (normal) | FlushInterval | Default cadence |
| 50-80% (warning) | FlushInterval × 2 | Slowing down |
| > 80% (critical) | FlushInterval × 4 | Backpressure relief |
Mobile users on slow connections automatically get fewer updates. Desktop users on fast connections get near-real-time delivery. Zero configuration needed.
Pause non-critical events for hidden browser tabs:
// Server-side: pause/resume a connection
hub.SetPaused(connID, true)  // tab hidden: skip P1/P2 events
hub.SetPaused(connID, false) // tab visible: resume all events

P0 (instant) events are always delivered regardless of pause state – critical messages like errors and auth revocations never get dropped.
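For intuition about the P2 lane mentioned above: coalescing boils down to a last-writer-wins buffer keyed by `CoalesceKey`. A self-contained sketch with simplified types (not fibersse's internals):

```go
package main

import "fmt"

// Event is a simplified stand-in for fibersse's event type.
type Event struct {
	Type        string
	Data        any
	CoalesceKey string
}

// Coalescer keeps only the newest event per key between flushes
// (last-writer-wins), which is what P2 buffering does conceptually.
type Coalescer struct {
	latest map[string]Event
	order  []string // first-seen key order for deterministic flushes
}

func NewCoalescer() *Coalescer {
	return &Coalescer{latest: make(map[string]Event)}
}

func (c *Coalescer) Add(e Event) {
	if _, seen := c.latest[e.CoalesceKey]; !seen {
		c.order = append(c.order, e.CoalesceKey)
	}
	c.latest[e.CoalesceKey] = e // overwrite: the older value is dropped
}

// Flush returns the surviving events and resets the buffer.
func (c *Coalescer) Flush() []Event {
	out := make([]Event, 0, len(c.order))
	for _, k := range c.order {
		out = append(out, c.latest[k])
	}
	c.latest = make(map[string]Event)
	c.order = nil
	return out
}

func main() {
	c := NewCoalescer()
	for _, pct := range []int{5, 6, 7, 8} {
		c.Add(Event{Type: "progress", Data: pct, CoalesceKey: "task:abc123"})
	}
	fmt.Println(c.Flush()) // only the 8% update survives the flush window
}
```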
JWT Auth – validate Bearer tokens or query parameters:
hub := fibersse.New(fibersse.HubConfig{
OnConnect: fibersse.JWTAuth(func(token string) (map[string]string, error) {
claims, err := myJWTValidator(token)
if err != nil {
return nil, err
}
return map[string]string{
"tenant_id": claims.TenantID,
"user_id": claims.UserID,
}, nil
}),
})

Ticket Auth – one-time tickets for EventSource (which can't send headers):
store := fibersse.NewMemoryTicketStore() // or implement TicketStore with Redis
// Issue ticket (in your authenticated POST endpoint):
ticket, _ := fibersse.IssueTicket(store, `{"tenant":"t1","topics":"notifications,live"}`, 30*time.Second)
// Use ticket auth in hub:
hub := fibersse.New(fibersse.HubConfig{
OnConnect: fibersse.TicketAuth(store, func(value string) (map[string]string, []string, error) {
var data struct{ Tenant, Topics string }
json.Unmarshal([]byte(value), &data)
return map[string]string{"tenant_id": data.Tenant},
strings.Split(data.Topics, ","), nil
}),
})

Bridge external pub/sub to SSE with one line:
// Redis pub/sub β SSE (implement PubSubSubscriber interface)
cancel := hub.FanOut(fibersse.FanOutConfig{
Subscriber: myRedisSubscriber,
Channel: "notifications:tenant_123",
Topic: "notifications",
EventType: "notification",
Priority: fibersse.PriorityInstant,
})
defer cancel()
// Multiple channels at once:
cancel := hub.FanOutMulti(
fibersse.FanOutConfig{Subscriber: redis, Channel: "notifications:*", Topic: "notifications", EventType: "notification", Priority: fibersse.PriorityInstant},
fibersse.FanOutConfig{Subscriber: redis, Channel: "media:*", Topic: "media", EventType: "media_status", Priority: fibersse.PriorityBatched},
fibersse.FanOutConfig{Subscriber: redis, Channel: "import:*", Topic: "import", EventType: "progress", Priority: fibersse.PriorityCoalesced},
)
defer cancel()

Implement the PubSubSubscriber interface for your broker:
type PubSubSubscriber interface {
Subscribe(ctx context.Context, channel string, onMessage func(payload string)) error
}

Drop stale events instead of delivering outdated data:
hub.Publish(fibersse.Event{
Type: "live_count",
Data: map[string]int{"visitors": 42},
Topics: []string{"live"},
TTL: 5 * time.Second, // useless after 5 seconds
})

Built-in monitoring endpoints:
// JSON metrics (for dashboards)
app.Get("/admin/sse/metrics", hub.MetricsHandler())
// Prometheus format (for Grafana/Datadog)
app.Get("/metrics/sse", hub.PrometheusHandler())

Exposed metrics:
- `fibersse_connections_active` – current open connections
- `fibersse_connections_paused` – hidden-tab connections
- `fibersse_events_published_total` – lifetime events published
- `fibersse_events_dropped_total` – events dropped (backpressure/TTL)
- `fibersse_pending_events` – events buffered in coalescers
- `fibersse_buffer_saturation_avg` – average send buffer usage
- `fibersse_buffer_saturation_max` – worst-case buffer usage
- `fibersse_connections_by_topic{topic="..."}` – per-topic breakdown
- `fibersse_events_by_type_total{type="..."}` – per-event-type breakdown (invalidate, progress, signal, batch, etc.)
Pluggable replay for reconnecting clients:
hub := fibersse.New(fibersse.HubConfig{
Replayer: fibersse.NewMemoryReplayer(fibersse.MemoryReplayerConfig{
MaxEvents: 1000,
TTL: 5 * time.Minute,
}),
})

Implement the Replayer interface for Redis Streams or any durable store:
type Replayer interface {
Store(event MarshaledEvent, topics []string) error
Replay(lastEventID string, topics []string) ([]MarshaledEvent, error)
}

On shutdown, the hub:
- Enters drain mode (rejects new connections with `503` + `Retry-After: 5`)
- Sends a `server-shutdown` event to all connected clients
- Waits for the context deadline to let clients reconnect elsewhere
- Closes all connections and stops the run loop
// In your shutdown handler:
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
hub.Shutdown(ctx)

Each connection has a bounded send buffer (default: 256 events). If a client can't keep up:
- New events are dropped (not queued infinitely)
- The `MessagesDropped` counter increments
- Monitor via `hub.Metrics()` to identify slow clients
- The client's EventSource auto-reconnects and gets current state
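The drop-instead-of-block behavior described above is the classic non-blocking channel send. A minimal illustrative sketch (not fibersse's actual code):

```go
package main

import "fmt"

// trySend attempts a non-blocking send into a bounded buffer. When the
// buffer is full, the event is dropped and a counter is incremented
// instead of blocking the publisher.
func trySend(ch chan string, ev string, dropped *int) bool {
	select {
	case ch <- ev:
		return true
	default:
		*dropped++ // slow client: drop rather than stall the hub
		return false
	}
}

func main() {
	sendBuf := make(chan string, 2) // tiny buffer for demonstration
	dropped := 0
	for i := 0; i < 5; i++ {
		trySend(sendBuf, fmt.Sprintf("event-%d", i), &dropped)
	}
	fmt.Println("buffered:", len(sendBuf), "dropped:", dropped)
}
```

With the real 256-event buffer the same logic only kicks in for genuinely stalled clients, which then recover state on reconnect.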
Run on Apple M4 Max, Go 1.25, -benchmem:
| Operation | ns/op | B/op | allocs/op |
|---|---|---|---|
| Publish (1 conn) | 477 | 72 | 2 |
| Publish (1,000 conns) | 81,976 | 101,572 | 22 |
| Coalesce same key | 21 | 0 | 0 |
| Topic match (exact) | 8 | 0 | 0 |
| Topic match (wildcard `*`) | 51 | 64 | 2 |
| Topic match (wildcard `>`) | 60 | 96 | 2 |
| Marshal event (string) | 3 | 0 | 0 |
| Marshal event (struct) | 89 | 96 | 2 |
| Connection send | 14 | 0 | 0 |
| Backpressure drop | 2 | 0 | 0 |
| Throttle decision | 19 | 0 | 0 |
| Group match (single key) | 27 | 0 | 0 |
| Replayer store | 140 | 687 | 4 |
Key takeaway: Publishing to 1,000 connections takes 82µs. Zero-alloc on all hot paths (topic match, send, backpressure, throttle).
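For reference, the topic-match operation benchmarked above has simple semantics: `*` matches exactly one segment, `>` matches one or more trailing segments. This illustrative version allocates (unlike fibersse's zero-alloc matcher) but shows the matching rules:

```go
package main

import (
	"fmt"
	"strings"
)

// matchTopic reports whether a dot-separated topic matches a
// NATS-style pattern: "*" matches exactly one segment, ">" matches
// one or more trailing segments.
func matchTopic(pattern, topic string) bool {
	p := strings.Split(pattern, ".")
	t := strings.Split(topic, ".")
	for i, seg := range p {
		if seg == ">" {
			return i < len(t) // ">" needs at least one remaining segment
		}
		if i >= len(t) {
			return false // topic is shorter than the pattern
		}
		if seg != "*" && seg != t[i] {
			return false // literal segment mismatch
		}
	}
	return len(p) == len(t) // no trailing unmatched topic segments
}

func main() {
	fmt.Println(matchTopic("analytics.*", "analytics.live"))          // true
	fmt.Println(matchTopic("analytics.*", "analytics.live.visitors")) // false
	fmt.Println(matchTopic("analytics.>", "analytics.live.visitors")) // true
}
```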
go test -bench=. -benchmem ./...

Full configuration reference:

fibersse.HubConfig{
FlushInterval: 2 * time.Second, // P1/P2 coalescing window
SendBufferSize: 256, // per-connection buffer capacity
HeartbeatInterval: 30 * time.Second, // keepalive for disconnect detection
MaxLifetime: 30 * time.Minute, // max connection duration (0 = unlimited)
RetryMS: 3000, // client reconnection hint (ms)
Replayer: nil, // Last-Event-ID replay (nil = disabled)
Logger: slog.Default(), // structured logging (nil = disabled)
OnConnect: nil, // auth + topic selection callback
OnDisconnect: nil, // cleanup callback
OnPause: nil, // called when client tab goes hidden
OnResume: nil, // called when client tab becomes visible
}

           Publish()
               │
               ▼
┌──────────────────────────────────────────┐
│ Hub Run Loop (single goroutine)          │
│                                          │
│  register   ◀── new connections          │
│  unregister ◀── disconnects              │
│  events     ◀── published events         │
│                                          │
│  For each event:                         │
│   1. Match topics (exact + wildcard)     │
│   2. Match groups (metadata k-v)         │
│   3. Skip paused connections (P1/P2)     │
│   4. Route by priority:                  │
│      P0 → send channel (immediate)       │
│      P1 → batch buffer                   │
│      P2 → coalesce buffer (LWW)          │
│                                          │
│  Flush ticker (every FlushInterval):     │
│    Adaptive throttle per connection      │
│    Drain batch + coalesce → send chan    │
│                                          │
│  Heartbeat ticker:                       │
│    Send comment to idle connections      │
└──────────────────────────────────────────┘
               │
               ▼ (per-connection send channel)
┌──────────────────────────────────────────┐
│ Connection Writer (in SendStreamWriter)  │
│                                          │
│  for event := range sendChan:            │
│    write SSE format → bufio.Writer       │
│    w.Flush() → detect disconnect         │
└──────────────────────────────────────────┘
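The "write SSE format" step in the connection writer follows the standard SSE framing: optional `id:` and `event:` lines, one `data:` line per payload line, and a blank line terminating the frame. A sketch of that format (fibersse's own marshaling may differ in details):

```go
package main

import (
	"fmt"
	"strings"
)

// formatFrame renders one event in the SSE wire format: optional "id:"
// and "event:" fields, a "data:" line per payload line, and a blank
// line that terminates the frame.
func formatFrame(id, eventType, data string) string {
	var b strings.Builder
	if id != "" {
		fmt.Fprintf(&b, "id: %s\n", id)
	}
	if eventType != "" {
		fmt.Fprintf(&b, "event: %s\n", eventType)
	}
	for _, line := range strings.Split(data, "\n") {
		fmt.Fprintf(&b, "data: %s\n", line) // multi-line payloads get one data: line each
	}
	b.WriteString("\n") // blank line ends the frame
	return b.String()
}

func main() {
	fmt.Print(formatFrame("42", "progress", `{"pct":8}`))
	// id: 42
	// event: progress
	// data: {"pct":8}
}
```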
fibersse/
├── hub.go               Core hub – New(), Publish(), Handler(), Shutdown()
├── invalidation.go      Kill polling – Invalidate(), Signal(), InvalidateForTenant()
├── domain_event.go      One-line publish – DomainEvent(), Progress(), Complete()
├── event.go             Event struct, Priority constants, SSE wire format
├── connection.go        Per-client connection, write loop, backpressure
├── coalescer.go         Batch + last-writer-wins buffers
├── topic.go             NATS-style wildcard topic matching (* and >)
├── throttle.go          Adaptive per-connection flush interval (AIMD)
├── auth.go              JWTAuth, TicketAuth, TicketStore helpers
├── fanout.go            PubSubSubscriber, FanOut(), FanOutMulti()
├── replayer.go          Last-Event-ID replay (pluggable MemoryReplayer)
├── metrics.go           PrometheusHandler, MetricsHandler
├── stats.go             HubStats struct
├── CLAUDE.md            Instructions for AI agents (Claude, Codex, Copilot)
├── hub_test.go          29 unit tests
├── integration_test.go  11 integration tests (real Fiber HTTP server)
└── benchmark_test.go    42 benchmarks (publish, coalesce, topic match, etc.)
The canonical pattern for bridging fibersse events to your React data layer:
import { useQueryClient } from '@tanstack/react-query';
import { useEffect } from 'react';
function useSSEInvalidation(topics: string[]) {
const queryClient = useQueryClient();
useEffect(() => {
const es = new EventSource(`/events?topics=${topics.join(',')}`);
// Single resource invalidation
es.addEventListener('invalidate', (e) => {
const { resource, resource_id, action, hint } = JSON.parse(e.data);
// Invalidate the collection
queryClient.invalidateQueries({ queryKey: [resource] });
// Invalidate the specific item
if (resource_id) {
queryClient.invalidateQueries({ queryKey: [resource, resource_id] });
}
// Optional: update cache directly from hint (skip refetch)
if (hint && resource_id) {
queryClient.setQueryData([resource, resource_id], (old) =>
old ? { ...old, ...hint } : old
);
}
});
// Batch invalidation (multiple resources in one event)
es.addEventListener('batch', (e) => {
const events = JSON.parse(e.data);
const resources = new Set(events.map(e => e.resource));
resources.forEach(resource => {
queryClient.invalidateQueries({ queryKey: [resource] });
});
});
// Progress tracking
es.addEventListener('progress', (e) => {
const { resource_id, pct } = JSON.parse(e.data);
// Update local state for progress bars
});
// Completion
es.addEventListener('complete', (e) => {
const { resource_id, status } = JSON.parse(e.data);
if (status === 'completed') {
queryClient.invalidateQueries(); // refetch everything
}
});
return () => es.close();
}, [topics, queryClient]);
}
// Usage in any page:
function OrdersPage() {
useSSEInvalidation(['orders', 'dashboard']);
const { data } = useQuery({ queryKey: ['orders'], queryFn: fetchOrders });
// ← Automatically refetches when server publishes hub.Invalidate("orders", ...)
}

import { useSWRConfig } from 'swr';
import { useEffect } from 'react';
function useSSEInvalidation(topics: string[]) {
const { mutate } = useSWRConfig();
useEffect(() => {
const es = new EventSource(`/events?topics=${topics.join(',')}`);
es.addEventListener('invalidate', (e) => {
const { resource, resource_id } = JSON.parse(e.data);
mutate(`/api/${resource}`);
if (resource_id) mutate(`/api/${resource}/${resource_id}`);
});
return () => es.close();
}, [topics, mutate]);
}

This project follows Semantic Versioning:
- v0.x.y – Pre-1.0 development. API may change between minor versions.
- v1.0.0 – Stable API. Breaking changes only in major versions.
Current: v0.5.0.
- Redis Streams Replayer (durable replay across server restarts)
- React SDK (`fibersse-react`) – `useSSE()` and `useSSEInvalidation()` hooks
- Admin Dashboard (web UI for live connection monitoring)
- WebSocket fallback transport
- Load testing CLI (`fibersse-bench`)
- OpenTelemetry tracing integration
- TanStack Query integration example
Runnable examples in the examples/ directory:
| Example | What it demonstrates | Run |
|---|---|---|
| basic | Minimal hub setup, periodic publisher, browser client | cd examples/basic && go run main.go |
| chat | Multi-room chat with topic wildcards and metadata | cd examples/chat && go run main.go |
| polling-replacement | Side-by-side polling vs SSE comparison | cd examples/polling-replacement && go run main.go |
Contributions are welcome! See CONTRIBUTING.md for development workflow, code style, and PR process.
MIT - Vinod Morya
Vinod Morya – @vinod-morya

Built at PersonaCart – the creator commerce platform. fibersse powers all real-time features in PersonaCart: notifications, live analytics, media processing, curriculum generation progress, and more.