A few weeks ago I wired a visitor counter on my personal site: a tiny API route, a browser-generated id, and one integer stored in Redis, hosted by Upstash, not on a VM I have to patch at midnight.
That small feature turned into a good excuse to write down what Upstash actually is, how it differs from “Redis on a box”, and where Redis earns its place when systems grow past a single database and a few API routes.
First: What Is Redis?
Redis is an in-memory data store. People call it a database, a cache, or a message broker because it can wear all of those hats depending on how you use it.
At its core you get:
- Very fast reads and writes (memory-first, optional persistence)
- Simple, flexible data structures: strings, hashes, lists, sets, sorted sets, streams…
- Atomic operations (`INCR`, `SETNX`, compare-and-set), handy when many clients touch the same key
So far, that sounds like “fast storage.” The interesting part is where Redis runs and how your app talks to it.
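Atomicity is the property the rest of this post leans on, so here is a toy TypeScript sketch of the lost-update problem that `INCR` avoids. The `store` map is an in-memory stand-in for Redis, not the real client; the `await` simulates the network round-trip between a read and a write.

```typescript
// Toy store standing in for Redis. Illustrative only.
const store = new Map<string, number>();

// Non-atomic read-modify-write: two callers can read the same value
// and both write back value + 1, losing one increment.
async function naiveIncr(key: string): Promise<void> {
  const current = store.get(key) ?? 0;
  await Promise.resolve(); // simulate the gap between GET and SET
  store.set(key, current + 1);
}

// Atomic increment: the guarantee Redis INCR gives you server-side,
// because the read and write happen as one operation.
function atomicIncr(key: string): number {
  const next = (store.get(key) ?? 0) + 1;
  store.set(key, next);
  return next;
}

async function demo() {
  store.set("naive", 0);
  await Promise.all([naiveIncr("naive"), naiveIncr("naive")]);
  console.log(store.get("naive")); // 1, not 2: one increment was lost

  store.set("atomic", 0);
  atomicIncr("atomic");
  atomicIncr("atomic");
  console.log(store.get("atomic")); // 2
}

demo();
```

With real Redis the race spans two network calls, so the window is far larger than this single-process simulation suggests; that is why `INCR` exists as a single command.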
What Upstash Adds
Upstash hosts managed Redis (and other products) with a model that fits serverless and edge-style deployments:
- You don’t run the server: no provisioning, no Redis version upgrades on your weekend.
- You talk to it over HTTPS (REST), ideal when your code runs in short-lived functions (e.g. Vercel, AWS Lambda) that don’t hold long-lived TCP connections.
- Pricing and limits that map to per-request usage rather than “always-on instance size.”
- Global / regional options so data can live close to users or close to your API, depending on product.
So: Redis semantics, managed operations, HTTP-friendly access. That’s the bundle people mean when they say “Upstash Redis” in a Next.js or serverless context.
How It Works on My Site (Concrete, Not Hand-Wavy)
On anandthakkar.com, the flow looks like this:
- The browser generates (and stores) an anonymous visitor id in `localStorage`.
- On load, the client calls `POST /api/visitors` with that id.
- The route handler uses Upstash’s Redis client to `SET` a dedupe key once per id (so refreshes don’t inflate the count) and `INCR` a global counter when that id is new.
- The UI shows one number, same idea in the hero and footer.
No Postgres table for “visits” and no cron jobs: just one durable integer and dedupe keys. That’s Redis doing what it’s good at: state that must be fast and consistent at small granularity.
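The handler logic boils down to a few lines. The sketch below runs against an in-memory stand-in for the two commands involved, with illustrative key names (`seen:{id}`, `visitors:total`), not the site’s actual code; with the real client, the dedupe step maps to a SET with the NX option.

```typescript
// In-memory stand-in for Redis. Illustrative only.
const kv = new Map<string, string | number>();

// SET key value NX: write only if the key doesn't exist yet,
// and report whether the write happened.
function setNX(key: string, value: string): boolean {
  if (kv.has(key)) return false;
  kv.set(key, value);
  return true;
}

// INCR: atomic increment, treating a missing key as 0.
function incr(key: string): number {
  const next = (Number(kv.get(key)) || 0) + 1;
  kv.set(key, next);
  return next;
}

// What the POST /api/visitors handler boils down to:
// dedupe per visitor id, count only first-time ids.
function recordVisit(visitorId: string): number {
  const isNew = setNX(`seen:${visitorId}`, "1");
  if (isNew) incr("visitors:total");
  return Number(kv.get("visitors:total")) || 0;
}
```

Calling `recordVisit` twice with the same id returns the same total both times; only a new id moves the counter.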
Why This Pattern Scales Up (Complex Projects)
In larger systems, you rarely stop at a counter. You start needing cross-request state that would be awkward or slow to put only in a relational DB. Redis, often via Upstash or a self-managed cluster, shows up for things like:
1) Caching
Cache computed responses, database query results, or HTML fragments. TTLs and eviction policies keep memory under control. This is the bread-and-butter use case: take load off Postgres and cut latency for hot paths.
2) Rate limiting
Track requests per IP, per user, or per API key using sliding windows or fixed windows in Redis. Atomic increments make it hard to overshoot limits under concurrency, important for public APIs and login endpoints.
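A fixed-window limiter is the simplest variant. In Redis it is an `INCR` plus an expiry on a key like `rl:{ip}:{window}`; the sketch below uses a map instead, and the key format and parameters are illustrative:

```typescript
// Per-window counters; in Redis, old windows would disappear via TTL.
const windows = new Map<string, number>();

// Allow up to `limit` requests per `windowMs` window per ip.
function allow(ip: string, nowMs: number, limit: number, windowMs: number): boolean {
  const windowId = Math.floor(nowMs / windowMs);
  const key = `rl:${ip}:${windowId}`;
  // In Redis this is a single atomic INCR, so concurrent requests
  // can't slip past the limit by racing each other.
  const count = (windows.get(key) ?? 0) + 1;
  windows.set(key, count);
  return count <= limit;
}
```

Sliding windows smooth out the burst allowed at a window boundary, at the cost of slightly more bookkeeping per request.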
3) Sessions and short-lived tokens
Store session blobs, refresh token rotation, or OAuth state with expiry (EX). Faster than hitting the primary DB on every request when designed carefully.
4) Leaderboards and rankings
Sorted sets are a natural fit for scores, rankings, and “top N” queries without heavy SQL.
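To show the shape of the queries, here is a sketch of a leaderboard over a plain map, sorting on read; a Redis sorted set keeps members ordered by score server-side so the top-N read is cheap rather than a full sort:

```typescript
// member -> score; a Redis sorted set maintains this ordered by score.
const scores = new Map<string, number>();

// Equivalent in spirit to ZADD: set or update a member's score.
function setScore(member: string, score: number): void {
  scores.set(member, score);
}

// "Top N" query: highest scores first. With a sorted set this is a
// range read, not a sort over the whole data set.
function topN(n: number): Array<[string, number]> {
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
}
```

The in-memory sort is O(n log n) per read; the point of the sorted set is that Redis pays that cost incrementally on write instead.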
5) Job queues and backpressure
Lists or streams back simple work queues: enqueue jobs, workers BLPOP or consume streams. For massive scale you might graduate to dedicated queue systems, but Redis gets you far.
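The list-backed queue is small enough to sketch in full, with an array standing in for the Redis list (real consumers would use a blocking pop rather than polling):

```typescript
// An array standing in for a Redis list used as a work queue.
const queue: string[] = [];

// LPUSH-style enqueue: add to the head of the list.
function enqueue(job: string): void {
  queue.unshift(job);
}

// RPOP-style dequeue: take from the tail, so jobs come out FIFO.
// Returns undefined when the queue is empty (a blocking pop would wait).
function dequeue(): string | undefined {
  return queue.pop();
}
```

Streams add consumer groups and acknowledgement on top of this, which is what you want once losing a popped-but-unprocessed job starts to matter.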
6) Pub/Sub and real-time fan-out
Notify many subscribers when something changes, useful for live dashboards, game lobbies, or collaborative features (often paired with WebSockets elsewhere).
7) Distributed locks
Coordinate who runs a cron-like job or who migrates a shard using Redlock-style patterns (with care, distributed locks have edge cases, but Redis is the usual hammer).
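The core of the single-instance lock pattern is a SET with NX and an expiry, plus a token check on release so an expired holder can’t free someone else’s lock. A simplified single-process sketch (real Redlock spans multiple instances and has more edge cases than this shows):

```typescript
// A lock is a token plus an expiry; the TTL guarantees a crashed
// holder can't wedge the lock forever.
type Lock = { token: string; expiresAt: number };
const locks = new Map<string, Lock>();

// SET key token NX EX: succeed only if nobody holds a live lock.
function acquire(key: string, token: string, ttlMs: number, now: number): boolean {
  const held = locks.get(key);
  if (held && held.expiresAt > now) return false; // someone else holds it
  locks.set(key, { token, expiresAt: now + ttlMs });
  return true;
}

// Release only if the token still matches: a holder whose lock expired
// (and was taken over) must not delete the new owner's lock.
function release(key: string, token: string): boolean {
  const held = locks.get(key);
  if (!held || held.token !== token) return false;
  locks.delete(key);
  return true;
}
```

In real Redis the compare-and-delete in `release` has to be a Lua script or transaction so the check and the delete are atomic.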
8) Feature flags and experiments
Fast reads for “is this flag on for this user segment?” with low latency at the edge of the request path.
Upstash vs “Just Use Postgres”
Postgres is durable, relational, and correct for your source of truth. Redis is usually not where you store the only copy of financial ledger data.
The split people aim for:
- Postgres (or another OLTP store): canonical data, transactions, reporting
- Redis: speed layer, ephemeral state, coordination, or data that can be rebuilt from Postgres if lost
Upstash makes Redis easy to add next to a serverless front door. That’s why it pairs naturally with Next.js API routes, Edge Functions, and Vercel-style deploys.
Trade-Offs to Respect
- Memory is finite: design keys, TTLs, and eviction so you don’t silently lose data you still need.
- Consistency models: understand single-region vs multi-region and what your provider guarantees.
- Cost at scale: many small requests add up; batch or pipeline when possible.
- Secrets: REST tokens belong in environment variables, never in client-side code.
Closing Thought
My site’s counter is intentionally tiny, but it sits on the same building blocks teams use for caching, rate limits, and coordination in much bigger systems. Upstash + Redis doesn’t replace solid architecture; it removes friction for the kind of state that should be fast, atomic, and boring to operate.
If you’re already on serverless and need one more tool between “static site” and “full distributed system,” it’s worth a serious look.
If you want to compare notes or talk about how you’re using Redis in production:
- Email: anand.thakkar@outlook.com
- Site: anandthakkar.com
