Redis Cache Beginner Backend In-Memory

Redis for Beginners: What It Is, When to Use It, and Client Examples in Java, Python, Node.js, and Go

พลากร วรมงคล
April 15, 2026 · 14 min read

“A ground-up introduction to Redis — what it actually is, the handful of data structures that matter, the use cases it's genuinely good at, and working client snippets in four languages so you can run your first cache, queue, and pub/sub in under ten minutes.”

Redis gets described as “a cache” so often that people miss how much else it can do. It is a cache — and it’s the best cache most of us will ever use — but it’s also a queue, a leaderboard, a session store, a rate limiter, a pub/sub bus, a geospatial index, and a stream. That breadth is the actual value of Redis, and it’s why “just add Redis” is often the correct answer.

This post is the minimum you need to go from “never used Redis” to “comfortable reaching for Redis when a problem looks like it fits.” We’ll end with working client code in four languages — cache, counter, queue, and pub/sub — so you can try it instead of just reading.

TL;DR

  • Redis is an in-memory data-structure server — not a key/value cache with bells on, but a server that understands strings, hashes, lists, sets, sorted sets, streams, and more.
  • Typical latency is sub-millisecond on localhost; expect single-digit ms across a VPC.
  • Core data types you actually need to know: String, Hash, List, Set, Sorted Set, Stream, plus pub/sub.
  • Great fit for: caching, session storage, rate limiting, counters, leaderboards, lightweight queues, pub/sub, locks.
  • Not a fit for: primary relational storage, anything bigger than your RAM budget, strict ACID transactions across many keys.
  • Quickest way to learn it: run docker run -p 6379:6379 redis:7 and poke around with redis-cli — we’ll do that below.

What Redis Actually Is

The one-line description: Redis is a single-threaded, in-memory, data-structure server with pluggable persistence.

Break that down:

  • In-memory. The dataset lives in RAM. That’s why it’s fast. Disk is used only for durability (snapshots + optional append-only log).
  • Data-structure server. The API isn’t just GET/SET on opaque blobs. Redis understands what you’re storing. You push onto a list, atomically increment an integer, add to a sorted set, and the server does the work — not the client.
  • Single-threaded command execution. One command runs at a time, in order, per instance. That sounds limiting but it’s the thing that makes Redis operations atomic by default and trivial to reason about.
  • Pluggable persistence. RDB snapshots (periodic dumps), AOF (append-only log of every write), or both. Or neither — pure cache mode.

You communicate over a simple text protocol (RESP), so there are clients for every language.

The Data Types You Actually Need

Redis has ~10 data types. You only need to know a handful to get 90% of the value.

1. Strings (SET / GET / INCR)

The workhorse. Despite the name, a “string” in Redis is just a byte blob — text, JSON, binary, or an integer.

SET user:42:name "Alice"
GET user:42:name
SET counter 0
INCR counter          # returns 1
INCRBY counter 10     # returns 11
SET cache:foo "bar" EX 60   # expire in 60 seconds

The EX / PX / EXAT options make it a cache in one line.
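In application code the one-line cache usually shows up as a cache-aside helper. Here is a minimal sketch in redis-py style (`get`, `set(..., ex=...)`); the key name and the `compute` callback are illustrative, not part of any API:

```python
import json

def get_cached(r, key, ttl_s, compute):
    """Cache-aside: return the cached value, computing and storing on a miss."""
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)               # hit: skip the expensive work
    value = compute()                        # miss: do the real work once
    r.set(key, json.dumps(value), ex=ttl_s)  # store with a TTL (SET ... EX)
    return value
```

Because the TTL is on the key itself, stale entries evict themselves; you write no invalidation code for data that tolerates `ttl_s` of staleness.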

2. Hashes (HSET / HGET)

A key that contains a dictionary of fields. Great for storing objects without JSON-encoding.

HSET user:42 name "Alice" age 30 plan "pro"
HGET user:42 plan         # "pro"
HGETALL user:42           # {name, age, plan}
HINCRBY user:42 age 1     # atomic increment one field

Memory-efficient for small objects — much cheaper than many top-level string keys.

3. Lists (LPUSH / RPOP / BLPOP)

Linked lists. Push to either end, pop from either end. LPUSH plus a blocking BRPOP gives you a working FIFO job queue in two commands.

LPUSH jobs:email "send-welcome-to-42"
LPUSH jobs:email "send-reminder-to-99"
BRPOP jobs:email 0     # blocks a worker until a job arrives

4. Sets (SADD / SISMEMBER / SINTER)

Unordered unique-member collections. Perfect for tags, permissions, and dedup.

SADD user:42:roles "admin" "billing"
SISMEMBER user:42:roles "admin"   # 1
SINTER user:42:roles user:99:roles # roles both share

5. Sorted Sets (ZADD / ZRANGE / ZINCRBY)

A set with a score. Members stay sorted by score. This is the Swiss army knife — leaderboards, priority queues, time-based indexes, rate limiters.

ZADD leaderboard 1200 "alice" 950 "bob" 1450 "carol"
ZRANGE leaderboard 0 9 REV WITHSCORES    # top 10 (ZREVRANGE on pre-6.2 servers)
ZINCRBY leaderboard 50 "bob"             # bob gains 50 points

6. Streams (XADD / XREADGROUP)

An append-only log with consumer groups — Redis’s answer to a lightweight Kafka. Keep this in mind for event processing inside a single app; reach for Kafka when cross-team or huge scale is involved.

XGROUP CREATE events workers $ MKSTREAM   # create the consumer group once
XADD events * type "signup" user "42"
XREADGROUP GROUP workers worker-1 COUNT 10 BLOCK 5000 STREAMS events >

7. Pub/Sub (PUBLISH / SUBSCRIBE)

Fire-and-forget messaging — no persistence, no history. Great for cache-invalidation broadcasts, realtime notifications, and presence. Streams are what you want if you need durability.

SUBSCRIBE room:general
PUBLISH room:general "hello"

Things to know about everything above

  • Keys expire. EXPIRE, PEXPIRE, EXPIREAT on any key. TTL is first-class. A cache with automatic expiry is the default, not a feature.
  • All commands are atomic. Because Redis is single-threaded per instance, INCR, HSET, SADD, etc. never race. For multi-command atomicity, use MULTI/EXEC or a Lua script.
  • No schema. You pick key naming conventions (user:42:name is common). Keys are just bytes.
  • Namespacing by database number (SELECT 0..15) exists but is discouraged. Use key prefixes or separate instances.
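The MULTI/EXEC point above maps to `pipeline(transaction=True)` in redis-py. A sketch, assuming a redis-py style client; the counter names are made up:

```python
def transfer_points(r, src, dst, amount):
    """Move points between two counters atomically (MULTI ... EXEC)."""
    with r.pipeline(transaction=True) as p:  # MULTI: queue commands, don't run yet
        p.decrby(src, amount)
        p.incrby(dst, amount)
        return p.execute()                   # EXEC: both apply back-to-back
```

Other clients can interleave between two separate commands, but nothing runs between the queued commands of a single EXEC.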

Features Worth Knowing (But Not on Day One)

  • Persistence: RDB (snapshots) vs AOF (append-only file) vs both. Default is reasonable; tune for your durability needs.
  • Replication: Primary/replica with async replication. Good for read scaling and failover.
  • Sentinel: Automated failover for a single-primary deployment.
  • Cluster: Horizontal sharding across nodes by hash slot. Use when RAM per node is the bottleneck, not before.
  • Lua scripting: EVAL runs a Lua script atomically on the server. Handy for complex compound ops.
  • Modules: RedisJSON (JSONPath queries on JSON docs), RediSearch (full-text + vector search), RedisBloom, RedisTimeSeries. Powerful, but pick them only when the built-in types can’t do the job.
  • ACL: Per-user permission lists — production deployments should use ACLs rather than the single requirepass password.

Skip all of these on your first project. Strings + hashes + lists + sorted sets get you surprisingly far.

When to Use Redis — and When Not To

Good fits

  • Caching anything expensive. Database query results, rendered HTML, API responses, computed derivations. Set a TTL and forget it.
  • Session storage. Fast reads, automatic expiry, easy to share across app instances.
  • Rate limiting. INCR + EXPIRE gives you a fixed-window limiter in two commands. Sorted sets give you sliding-window.
  • Counters and leaderboards. Atomic increment; sorted sets for rankings.
  • Simple queues. List-based job queues with BLPOP are fine for modest throughput. Streams for more robust patterns.
  • Pub/sub. Cache invalidation broadcasts, realtime notifications, presence.
  • Distributed locks. SET lock:name <unique-token> NX EX 10 is the building block — the unique token lets only the holder release the lock (see Redlock for the multi-node pattern).
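The rate-limiting bullet above is short enough to write out in full. A fixed-window sketch in redis-py style (`incr`, `expire`); the key format is a naming convention, not an API:

```python
def allow(r, user_id, limit, window_s):
    """Fixed-window rate limit: at most `limit` calls per `window_s` seconds."""
    key = f"ratelimit:{user_id}"
    count = r.incr(key)          # atomic, so concurrent callers cannot race
    if count == 1:
        r.expire(key, window_s)  # the first hit starts the window clock
    return count <= limit
```

The trade-off: the window resets abruptly when the key expires, so a burst can straddle the boundary. A sorted-set sliding window smooths that out.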

Bad fits

  • Your primary database. Redis is durable enough for many use cases, but it’s not what you want as the canonical record for money, users, or anything regulated. Use Postgres; cache reads in Redis.
  • Datasets larger than RAM. You can push to disk with options, but the design target is “fits in memory.” If your working set is terabytes, look elsewhere.
  • Cross-key transactions at scale. MULTI/EXEC works within one instance but gets awkward in a cluster where keys are sharded.
  • Complex queries. Redis is not a relational engine. If you need joins, use a database.

A Mental Model: Redis as a Shared Memory

Imagine every process in your system shares one big in-memory namespace — not just strings, but lists, sets, counters, and dictionaries — and every operation on that shared state happens atomically in a deterministic order.

That’s Redis. Most Redis designs become obvious once you think “if this state lived in a single Python dict for the whole cluster, how would I do this?” — and then translate.

flowchart LR
  A[App Server 1] --> R[(Redis)]
  B[App Server 2] --> R
  C[App Server 3] --> R
  R --> D[Cache]
  R --> Q[Queue]
  R --> L[Leaderboard]
  R --> P[Pub/Sub]
  D -. TTL expiry .-> R

One Redis; many shared use cases served by different data-structure patterns.

Getting Redis Running in 30 Seconds

docker run -d --name redis -p 6379:6379 redis:7
docker exec -it redis redis-cli

In the redis-cli prompt:

127.0.0.1:6379> SET greeting "hello world" EX 30
OK
127.0.0.1:6379> GET greeting
"hello world"
127.0.0.1:6379> TTL greeting
(integer) 27

You’re done. Any client below points at localhost:6379.

Four Languages — One Pattern Each

Each snippet below does four things: (1) cache a value with TTL, (2) increment a counter, (3) push/pop a job from a list queue, and (4) publish a pub/sub message. Different shapes; same patterns.

Java (Jedis)

<!-- pom.xml -->
<dependency>
  <groupId>redis.clients</groupId>
  <artifactId>jedis</artifactId>
  <version>5.2.0</version>
</dependency>
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisDemo {
  public static void main(String[] args) {
    try (Jedis r = new Jedis("localhost", 6379)) {
      // 1. Cache with 60s TTL
      r.set("cache:user:42", "{\"name\":\"Alice\"}", SetParams.setParams().ex(60));
      System.out.println("cached: " + r.get("cache:user:42"));

      // 2. Counter
      long hits = r.incr("page:home:hits");
      System.out.println("hits: " + hits);

      // 3. Queue (producer)
      r.lpush("jobs:email", "send-welcome-42");

      // 3. Queue (consumer, blocking 2s)
      var job = r.brpop(2, "jobs:email");
      if (job != null) System.out.println("processed: " + job.get(1));

      // 4. Pub/sub publish (subscribe in a separate thread/app)
      r.publish("notifications", "user 42 signed up");
    }
  }
}

Python (redis-py)

pip install redis
import redis
import threading
import time

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# 1. Cache with 60s TTL
r.set("cache:user:42", '{"name":"Alice"}', ex=60)
print("cached:", r.get("cache:user:42"))

# 2. Counter
print("hits:", r.incr("page:home:hits"))

# 3. Queue
r.lpush("jobs:email", "send-welcome-42")
job = r.brpop("jobs:email", timeout=2)
if job:
    print("processed:", job[1])

# 4. Pub/sub — subscriber in a background thread
def listen():
    pubsub = r.pubsub()
    pubsub.subscribe("notifications")
    for msg in pubsub.listen():
        if msg["type"] == "message":
            print("received:", msg["data"])
            break

t = threading.Thread(target=listen, daemon=True)
t.start()
time.sleep(0.2)  # let the subscriber connect
r.publish("notifications", "user 42 signed up")
t.join(timeout=2)

Node.js (ioredis)

npm install ioredis
import Redis from "ioredis";

const r = new Redis("redis://localhost:6379");
const sub = new Redis("redis://localhost:6379"); // pub/sub needs a dedicated conn

// 4. Set up subscriber first so we don't miss the publish
await sub.subscribe("notifications");
sub.on("message", (channel, msg) => {
  console.log(`received on ${channel}:`, msg);
  sub.disconnect();
});

// 1. Cache with 60s TTL
await r.set("cache:user:42", JSON.stringify({ name: "Alice" }), "EX", 60);
console.log("cached:", await r.get("cache:user:42"));

// 2. Counter
console.log("hits:", await r.incr("page:home:hits"));

// 3. Queue
await r.lpush("jobs:email", "send-welcome-42");
const job = await r.brpop("jobs:email", 2);
if (job) console.log("processed:", job[1]);

// 4. Publish
await r.publish("notifications", "user 42 signed up");

setTimeout(() => r.disconnect(), 500);

Go (go-redis/v9)

go get github.com/redis/go-redis/v9
package main

import (
  "context"
  "encoding/json"
  "fmt"
  "time"

  "github.com/redis/go-redis/v9"
)

func main() {
  ctx := context.Background()
  r := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
  defer r.Close()

  // 1. Cache with 60s TTL
  body, _ := json.Marshal(map[string]string{"name": "Alice"})
  r.Set(ctx, "cache:user:42", body, 60*time.Second)
  val, _ := r.Get(ctx, "cache:user:42").Result()
  fmt.Println("cached:", val)

  // 2. Counter
  hits, _ := r.Incr(ctx, "page:home:hits").Result()
  fmt.Println("hits:", hits)

  // 3. Queue
  r.LPush(ctx, "jobs:email", "send-welcome-42")
  job, _ := r.BRPop(ctx, 2*time.Second, "jobs:email").Result()
  if len(job) == 2 {
    fmt.Println("processed:", job[1])
  }

  // 4. Pub/sub (subscribe in a goroutine; publish from main)
  sub := r.Subscribe(ctx, "notifications")
  defer sub.Close()
  go func() {
    msg, err := sub.ReceiveMessage(ctx)
    if err == nil {
      fmt.Println("received:", msg.Payload)
    }
  }()
  time.Sleep(100 * time.Millisecond)
  r.Publish(ctx, "notifications", "user 42 signed up")
  time.Sleep(300 * time.Millisecond)
}

Common Mistakes Beginners Make

  • Treating Redis as persistent by default. It can be, but configure it consciously — know whether RDB, AOF, or both match your durability needs.
  • Huge keys. A HASH with a million fields or a LIST with millions of entries becomes a performance cliff. Keep collections bounded; shard if needed.
  • KEYS * in production. It blocks the server. Use SCAN.
  • Storing JSON blobs when a HASH would do. HGET user:42 name is cheaper than GET user:42 → parse JSON → pluck name.
  • Using pub/sub as a durable queue. Subscribers that weren’t connected at publish time never see the message. Use a list or stream for durable patterns.
  • Assuming pipelines are transactions. MULTI/EXEC is atomic; pipelines are just batched round-trips. Know which you need.
  • Sharing connections carelessly. Subscriptions, blocking reads, and long-running operations need dedicated connections — the pool design matters.
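The pipeline-vs-transaction distinction above comes down to a single flag in redis-py: `pipeline(transaction=True)` wraps the batch in MULTI/EXEC, while `transaction=False` only saves round-trips. A sketch with made-up key names:

```python
def bump_counters(r, keys, atomic=False):
    """INCR several keys in one network round-trip.

    atomic=False -> plain pipeline: batching only, other clients may interleave.
    atomic=True  -> MULTI/EXEC: the queued INCRs run back-to-back.
    """
    with r.pipeline(transaction=atomic) as p:
        for k in keys:
            p.incr(k)
        return p.execute()  # one result per queued command, in order
```

If you only need fewer round-trips, skip the transaction; if a reader must never observe a half-applied batch, you need MULTI/EXEC (or Lua).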

What to Learn Next

  1. TTL patterns. Cache-aside, write-through, and write-behind — which fits your app?
  2. Rate limiting. Fixed window with INCR+EXPIRE, sliding window with sorted sets, token bucket with Lua.
  3. Locks. SET NX EX, its failure modes, and why fencing tokens matter; Redlock for multi-node.
  4. Streams. XADD, consumer groups, acknowledgement, XAUTOCLAIM for stuck messages.
  5. Lua scripts. When atomicity across several commands matters.
  6. Persistence and replication. RDB vs AOF trade-offs; replica promotion; Sentinel basics.
  7. Cluster and sharding. Hash slots, and hash tags (a {...} segment in the key name) to keep related keys in the same slot.
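Item 2 above, sketched with a sorted set: each request becomes a member scored by its timestamp (redis-py style; in production you would wrap the steps in a Lua script so the check-then-add is atomic, and `time.time()` would come from the caller's clock):

```python
import time
import uuid

def allow_sliding(r, key, limit, window_s, now=None):
    """Sliding-window limiter: at most `limit` events in any `window_s` span."""
    now = time.time() if now is None else now
    r.zremrangebyscore(key, 0, now - window_s)  # evict events older than the window
    if r.zcard(key) >= limit:                   # count what remains in the window
        return False
    r.zadd(key, {uuid.uuid4().hex: now})        # unique member, timestamp as score
    r.expire(key, int(window_s) + 1)            # idle keys clean themselves up
    return True
```

Unlike the fixed window, there is no boundary burst: any 60-second span sees at most `limit` events.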

Further Reading

The same rule as Kafka applies: you’ll learn Redis faster by running one locally and writing a dumb cache/queue/counter than by reading another “what is Redis” post. Do that, then come back.



Written by พลากร วรมงคล

Software Engineer Specialist with over 20 years of experience. Writes about architecture, performance, and building production systems.

