🚀 What I built
A production-ready cache stampede prevention library in Go with 3 strategies,
full Prometheus metrics, and a live Grafana dashboard.
🔥 The problem
When a hot cache key expires, thousands of concurrent requests all miss at the same
time and hammer the database. This is a cache stampede, and it can take down your DB.
🛡️ 3 strategies implemented
1. Hard Lock + Stale-While-Revalidate
Only one process fetches the new value (Redis SETNX + Lua atomic script)
Everyone else gets served the stale value immediately: zero wait time
2. XFetch (probabilistic early refresh)
Formula: P = exp(-β × timeRemaining / δ)
Starts refreshing in the background before the key expires
No thundering herd because refreshes are spread out probabilistically
3. Request Coalescing (singleflight)
In-process deduplication: 50 concurrent goroutines → 1 real DB call
~13:1 dedup ratio visible in Grafana
📊 Monitoring stack
Prometheus metrics for every decision point (hit/miss/stale/lock_contention)
Grafana dashboard with 9 panels auto-provisioned via docker-compose
Circuit Breaker state transitions (Closed → Open → Half-Open) tracked as gauges
Load test drives 1,000 req/s of sustained traffic
🧱 Tech stack
Go 1.2x · Redis 7 · Prometheus · Grafana · Docker Compose
If this project is useful to you, please give the repository a star. Thanks!
🔗 GitHub: https://github.com/0x48core/go-latency/tree/main/cache-stampede