RedCouch
A Redis module that bridges memcached protocol clients to Redis 8+
RedCouch provides a memcached-compatible TCP endpoint backed by Redis data structures. It allows existing memcached clients — including Couchbase SDK clients that speak the memcached binary protocol — to connect to a Redis 8+ server and use it as a drop-in data store, without any application-level code changes.
Who Is This For?
RedCouch is designed for teams in one of these situations:
- Migrating from Couchbase to Redis — Your applications use Couchbase SDKs or memcached clients. RedCouch lets you point them at a Redis 8+ server and keep running while you plan and execute a gradual migration to native Redis clients.
- Consolidating data infrastructure — You want to reduce operational overhead by replacing a standalone memcached or Couchbase deployment with Redis, which you may already be running for other workloads.
- Evaluating Redis as a memcached replacement — You want to test Redis with your existing memcached workload before committing to a full client migration.
RedCouch is not a general-purpose memcached server. It is a protocol-translation bridge: it speaks memcached on the wire but stores data in Redis. The intended end state is migrating your clients to speak Redis natively, at which point RedCouch is no longer needed.
What RedCouch Does
RedCouch runs inside Redis as a loaded module. On startup, it opens a TCP listener (default 127.0.0.1:11210) that accepts memcached protocol connections. Each incoming request is parsed, translated into Redis operations, and the response is sent back in the memcached protocol format the client expects.
┌─────────────────────┐ TCP :11210 ┌──────────────────────────────────┐
│ Memcached Client │ ◄───────────────► │ RedCouch Module (in Redis) │
│ (binary or ASCII) │ │ parse → Redis ops → respond │
└─────────────────────┘ └──────────────────────────────────┘
The translation is transparent: clients see standard memcached responses, and Redis sees standard hash operations. Data stored through RedCouch is visible via `redis-cli` under the `rc:` key prefix.
The Migration Path
RedCouch supports a three-phase migration:
1. Bridge phase — Deploy RedCouch on your Redis 8+ server. Point your memcached/Couchbase clients at port 11210. Your data now lives in Redis, accessible through both the memcached protocol (via RedCouch) and native Redis commands.
2. Dual-access phase — Begin migrating application code from memcached clients to native Redis clients. Both access paths work simultaneously against the same data. You can migrate one service at a time.
3. Native phase — Once all clients speak Redis natively, unload the RedCouch module. The memcached protocol endpoint shuts down; Redis continues serving your data directly with no translation overhead.
This migration path is validated by benchmark comparisons showing that RedCouch imposes modest overhead (~18% on GET hits) compared to native Redis, and performs competitively with Couchbase under identical conditions. See Benchmarks & Performance for measured data.
Supported Protocols
RedCouch automatically detects the protocol from the first byte of each connection:
| Protocol | Detection | Coverage |
|---|---|---|
| Binary (Couchbase memcached) | First byte 0x80 | All 34 opcodes — GET, SET, DELETE, INCR/DECR, APPEND, TOUCH, FLUSH, SASL, STAT, and more |
| ASCII text | Printable ASCII | All 19 standard commands — set, get, delete, incr, cas, flush_all, etc. |
| Meta | mg/ms/md/ma/mn/me prefixes | Flag-based meta get/set/delete/arithmetic/noop/debug |
Protocol is detected once per connection and fixed for its lifetime. A single RedCouch listener serves all three protocols concurrently.
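The detection rule can be sketched as a tiny classifier. This is an illustrative model of the logic described above, not the module's actual Rust code:

```python
def detect_protocol(first_byte: int) -> str:
    """Classify a connection by its first byte (illustrative sketch)."""
    if first_byte == 0x80:            # binary request magic byte
        return "binary"
    if first_byte in (0x0D, 0x0A):    # \r or \n: skip, re-evaluate next byte
        return "skip"
    if 0x20 <= first_byte <= 0x7E:    # printable ASCII -> text (ASCII or meta)
        return "ascii"
    return "unknown"
```

Meta commands are not a separate wire protocol here: they are recognized by their `mg`/`ms`/`md`/`ma`/`mn`/`me` prefixes inside the ASCII path.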
Key Design Points
- Hash-per-item storage — Each memcached item is stored as a Redis hash with fields for value (`v`), flags (`f`), and CAS (`c`). This makes items inspectable via standard Redis tools.
- Namespaced keys — Client keys are prefixed with `rc:` to avoid collisions with other Redis data. System keys live under `redcouch:sys:*`.
- Atomic mutations — All CAS-sensitive operations use server-side Lua scripts, ensuring atomicity without client-side check-then-set races.
- Binary-safe values — Full binary round-trip via Lua hex encode/decode. Non-UTF-8 payloads are preserved exactly.
- Safe defaults — Loopback-only bind, 1024 connection limit, read/write timeouts, and a 20 MiB frame cap protect against accidental exposure and resource exhaustion.
Platform Support
| Target | OS | Architecture | Artifact |
|---|---|---|---|
| `x86_64-unknown-linux-gnu` | Linux | x86_64 | `libred_couch.so` |
| `aarch64-unknown-linux-gnu` | Linux | ARM64 | `libred_couch.so` |
| `x86_64-apple-darwin` | macOS | x86_64 | `libred_couch.dylib` |
| `aarch64-apple-darwin` | macOS | ARM64 | `libred_couch.dylib` |
Windows is not supported. Redis modules require a Unix-like environment.
How This Book Is Organized
- Getting Started — Install RedCouch, load it into Redis, and run your first commands.
- User Guide — Walkthrough examples for each protocol: ASCII, meta, and binary.
- Tutorials & Examples — Step-by-step tutorials, multi-language client examples, migration guide, and real-world use cases.
- Reference — Complete protocol compatibility tables, architecture details, configuration reference, and known limitations.
- Operations — Performance benchmarks, release process, and operational guidance.
- Development — How to contribute, test architecture, and coding standards.
- API Reference — Rust API documentation generated from source.
License
MIT — see LICENSE for details.
Source Code
RedCouch is open source: https://github.com/fcenedes/RedCouch
Installation
Prerequisites
- Redis 8.x (Open Source). Verified on Redis 8.4.0.
- Rust 1.85+ (stable) — only needed if building from source.
Option 1: Install from GitHub Release
Pre-built artifacts are attached to GitHub Releases as .tar.gz archives with SHA-256 checksums.
# Download the release for your platform (example: Linux x86_64, version v0.1.0)
curl -LO https://github.com/fcenedes/RedCouch/releases/download/v0.1.0/redcouch-v0.1.0-x86_64-unknown-linux-gnu.tar.gz
curl -LO https://github.com/fcenedes/RedCouch/releases/download/v0.1.0/redcouch-v0.1.0-x86_64-unknown-linux-gnu.tar.gz.sha256
# Verify checksum
sha256sum -c redcouch-v0.1.0-x86_64-unknown-linux-gnu.tar.gz.sha256
# Extract
tar xzf redcouch-v0.1.0-x86_64-unknown-linux-gnu.tar.gz
This extracts libred_couch.so (Linux) or libred_couch.dylib (macOS).
Option 2: Build from Source
git clone https://github.com/fcenedes/RedCouch.git
cd RedCouch
cargo build --release
The compiled module is at:
- macOS: `target/release/libred_couch.dylib`
- Linux: `target/release/libred_couch.so`
Loading the Module
Command Line
redis-server --loadmodule /path/to/libred_couch.so # Linux
redis-server --loadmodule /path/to/libred_couch.dylib # macOS
redis.conf
Add to your Redis configuration file:
loadmodule /path/to/libred_couch.so
Runtime (MODULE LOAD)
redis-cli MODULE LOAD /absolute/path/to/libred_couch.so
Verify
After loading, check that the module is active:
redis-cli MODULE LIST
# Should show "redcouch" in the list
# Verify the memcached endpoint is listening
nc -z 127.0.0.1 11210 && echo "RedCouch listening" || echo "Not listening"
Unloading the Module
redis-cli MODULE UNLOAD redcouch
Note: RedCouch does not implement a module unload/deinit handler, and the background TCP listener thread has no graceful shutdown path. Unloading via `MODULE UNLOAD` is unverified and may leave the listener thread orphaned. The recommended approach is to restart the Redis process to fully stop the module.
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| `FATAL: cannot bind 127.0.0.1:11210` | Port already in use | Stop the other process using port 11210 |
| Module loads but port 11210 not reachable | Bind failed silently | Check Redis logs for the bind error message |
| `MODULE LOAD` returns error | Wrong platform artifact | Use `.so` for Linux, `.dylib` for macOS |
| Connection refused after 1024 clients | Connection limit reached | Reduce concurrent connections or wait for existing ones to close |
Quick Start
This page walks you through building RedCouch, loading it into Redis, and running your first commands — in about five minutes.
Prerequisites
- Redis 8.x installed and available as `redis-server` (verified on 8.4.0)
- Rust 1.85+ toolchain (for building from source)
If you don't have Redis 8, see Installation for download options.
1. Build the Module
git clone https://github.com/fcenedes/RedCouch.git
cd RedCouch
cargo build --release
This produces the module library:
- macOS: `target/release/libred_couch.dylib`
- Linux: `target/release/libred_couch.so`
2. Start Redis with RedCouch
# macOS
redis-server --loadmodule ./target/release/libred_couch.dylib
# Linux
redis-server --loadmodule ./target/release/libred_couch.so
You should see Redis start with a log line indicating the module loaded. RedCouch opens a TCP listener on 127.0.0.1:11210.
Verify it's running:
# Check the module is loaded
redis-cli MODULE LIST
# Should show "redcouch"
# Check the memcached endpoint is listening
nc -z 127.0.0.1 11210 && echo "RedCouch is ready"
3. Your First Commands
Connect using telnet (ASCII protocol):
telnet 127.0.0.1 11210
Store and retrieve a value
set greeting 0 0 12
Hello World!
STORED
get greeting
VALUE greeting 0 12
Hello World!
END
The `set` command syntax is `set <key> <flags> <exptime> <bytes>`, followed by the value data on the next line. Here we're setting key `greeting` with flags=0, no expiry (0), and 12 bytes of data.
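The framing above can be sketched in a few lines of Python. `build_set` is a hypothetical helper for illustration, not part of any client library:

```python
def build_set(key: str, value: bytes, flags: int = 0, exptime: int = 0,
              noreply: bool = False) -> bytes:
    """Frame an ASCII `set` request: a header line, then the data bytes,
    each terminated by CRLF. (Hypothetical helper for illustration.)"""
    suffix = " noreply" if noreply else ""
    header = f"set {key} {flags} {exptime} {len(value)}{suffix}\r\n"
    return header.encode() + value + b"\r\n"
```

Note that the byte count in the header must exactly match the length of the data line that follows; RedCouch, like memcached, rejects mismatched frames.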
Use CAS for safe updates
gets greeting
VALUE greeting 0 12 1
Hello World!
END
cas greeting 0 0 8 1
Hey you!
STORED
The `gets` command returns the CAS token (the `1` after the byte count). The `cas` command uses this token to ensure no one else modified the value between the read and write.
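The check-and-set rule can be modeled with a plain dictionary standing in for the store. This is a toy model of the semantics only; in RedCouch the check runs atomically in a server-side Lua script:

```python
def cas_update(store: dict, key: str, new_value: bytes, token: int) -> str:
    """Toy model of compare-and-swap: the write succeeds only if the
    stored CAS token still equals the one returned by `gets`."""
    item = store.get(key)
    if item is None:
        return "NOT_FOUND"
    value, cas = item
    if cas != token:
        return "EXISTS"                    # someone else modified it first
    store[key] = (new_value, cas + 1)      # every mutation bumps the CAS
    return "STORED"
```
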
Counters
set visits 0 0 1
0
STORED
incr visits 1
1
incr visits 1
2
Delete and verify
delete greeting
DELETED
get greeting
END
Check version and stats
version
VERSION RedCouch 0.1.0
stats
STAT pid 12345
STAT uptime 42
STAT version RedCouch 0.1.0
...
END
quit
4. Inspect Data via Redis
While RedCouch is running, open another terminal and use redis-cli to see the data:
# See all RedCouch keys
redis-cli KEYS 'rc:*'
# Inspect a specific item
redis-cli HGETALL rc:visits
# Returns: v (hex-encoded value), f (flags), c (CAS token)
This dual-access is one of RedCouch's key features: data is simultaneously accessible through the memcached protocol and native Redis commands.
5. Run the Tests
# Unit and protocol tests (221 tests — no Redis required)
cargo test
# Integration tests (requires Redis 8+ with module loaded)
cd tests/integration && bash run_e2e.sh
Next Steps
- Python Client Tutorial — Step-by-step Python walkthrough with pymemcache
- Multi-Language Examples — Node.js, Go, PHP, and CLI examples
- Migration Guide — Three-phase migration from memcached to native Redis
- ASCII Protocol — Full walkthrough of all 19 ASCII commands
- Meta Protocol — Flag-based meta commands with fine-grained control
- Binary Protocol — Machine-oriented protocol for SDK clients
- Architecture — How RedCouch works under the hood
- Configuration — Runtime defaults and tuning guidance
ASCII Protocol
The ASCII text protocol is the simplest way to interact with RedCouch. It uses human-readable commands over a plain TCP connection, making it easy to test and debug with standard tools like telnet or nc.
RedCouch supports all 19 standard memcached ASCII commands. For the complete compatibility table with syntax details, see Protocol Compatibility Reference.
Connecting
telnet 127.0.0.1 11210
# or
nc 127.0.0.1 11210
RedCouch auto-detects ASCII protocol when the first byte is a printable ASCII character (not 0x80, which routes to binary protocol).
Basic Key-Value Operations
# Store a value
set mykey 0 0 5
hello
STORED
# Retrieve it
get mykey
VALUE mykey 0 5
hello
END
# Store with flags and TTL (60 seconds)
set session:abc 42 60 11
session_data
STORED
# Retrieve with CAS token
gets mykey
VALUE mykey 0 5 1
hello
END
# Compare-and-swap (CAS) update
cas mykey 0 0 5 1
world
STORED
# Delete
delete mykey
DELETED
Counters
# Create a counter via set (store numeric string)
set counter 0 0 1
0
STORED
# Increment
incr counter 1
1
# Increment by 10
incr counter 10
11
# Decrement
decr counter 3
8
Note: In the ASCII protocol, `incr`/`decr` return `NOT_FOUND` for missing keys. Use `set` to initialize counters first.
Append and Prepend
set log 0 0 6
line-1
STORED
append log 7
,line-2
STORED
get log
VALUE log 0 13
line-1,line-2
END
prepend log 8
header:
STORED
Touch and Get-and-Touch
# Update TTL without fetching value
touch mykey 120
TOUCHED
# Get value and update TTL simultaneously
gat 300 mykey
VALUE mykey 0 5
world
END
Stats and Version
version
VERSION RedCouch 0.1.0
stats
STAT pid 12345
STAT uptime 42
STAT version RedCouch 0.1.0
STAT cmd_get 5
STAT cmd_set 3
STAT curr_items 2
...
END
Flush
# Flush all RedCouch items (only rc:* keys, not entire Redis DB)
flush_all
OK
Noreply Mode
Most commands accept a noreply suffix that suppresses the server response. This is useful for fire-and-forget writes:
set background-job 0 60 4 noreply
data
No STORED response is sent. If the command is malformed, a CLIENT_ERROR may still be emitted because noreply cannot always be reliably parsed before the error is detected.
Next Steps
- Meta Protocol — More powerful flag-based commands with richer control
- Binary Protocol — Machine-oriented protocol for high-throughput clients
- Protocol Compatibility Reference — Complete syntax and status tables for all commands
- Known Limitations — Counter precision, append growth, and other caveats
Quick Reference
| Command | Syntax |
|---|---|
| `set` | `set <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n` |
| `add` | `add <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n` |
| `replace` | `replace <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n` |
| `cas` | `cas <key> <flags> <exptime> <bytes> <cas_unique> [noreply]\r\n<data>\r\n` |
| `append` | `append <key> <bytes> [noreply]\r\n<data>\r\n` |
| `prepend` | `prepend <key> <bytes> [noreply]\r\n<data>\r\n` |
| `get` | `get <key> [<key> ...]` |
| `gets` | `gets <key> [<key> ...]` |
| `gat` | `gat <exptime> <key> [<key> ...]` |
| `gats` | `gats <exptime> <key> [<key> ...]` |
| `delete` | `delete <key> [noreply]` |
| `incr` | `incr <key> <value> [noreply]` |
| `decr` | `decr <key> <value> [noreply]` |
| `touch` | `touch <key> <exptime> [noreply]` |
| `flush_all` | `flush_all [delay] [noreply]` |
| `version` | `version` |
| `stats` | `stats [group]` |
| `verbosity` | `verbosity <level> [noreply]` |
| `quit` | `quit` |
Meta Protocol
The meta protocol is an extension of the ASCII text protocol that provides more control over individual operations through a flag-based system. It uses two-letter command prefixes (mg, ms, md, ma, mn, me) instead of full command words, and flags to select exactly which response fields you want.
Meta commands are routed through the ASCII text-protocol path — they are detected by prefix after ASCII protocol detection. You use the same TCP connection and can mix standard ASCII and meta commands.
For the complete compatibility table, see Protocol Compatibility Reference.
Connecting
telnet 127.0.0.1 11210
Meta Set and Get
# Set a value (ms = meta set, 5 = data length)
ms mykey 5
hello
HD
# Get with value and CAS (mg = meta get, v = value, c = CAS)
mg mykey v c
VA 5 c1
hello
# Get with key echo, flags, size
mg mykey k f s
HD kmykey f0 s5
Meta Set Modes
The M flag controls the set mode:
# Add (only if not exists): ME
ms newkey 3 ME
foo
HD
# Replace (only if exists): MR
ms mykey 3 MR
bar
HD
# Append: MA
ms mykey 4 MA
_end
HD
# Prepend: MP
ms mykey 6 MP
start_
HD
| Mode | Meaning |
|---|---|
| `S` | Set (default) |
| `E` | Add (set if not exists) |
| `A` | Append |
| `P` | Prepend |
| `R` | Replace |
Meta Delete
md mykey
HD
Meta Arithmetic
# Create counter with initial value (J = initial, N = TTL for auto-create)
ma counter J0 N0
HD
# Increment by 5 (D = delta, v = return value)
ma counter D5 v
VA 1
5
# Decrement (MD = decrement mode)
ma counter MD D2 v
VA 1
3
Meta Noop (Pipeline Terminator)
mn
MN
Opaque Token (Request Correlation)
The O flag echoes an opaque token in the response:
mg mykey v Oreq-42
VA 5 Oreq-42
hello
mn Oping
MN Oping
Supported Flags by Command
| Command | Supported Flags |
|---|---|
| `mg` (meta get) | `v` (value), `c` (CAS), `f` (flags), `k` (key), `s` (size), `O` (opaque), `q` (quiet), `t` (TTL remaining), `T` (TTL update) |
| `ms` (meta set) | `F` (flags), `T` (TTL), `C` (CAS), `q` (quiet), `O` (opaque), `k` (key), `M` (mode) |
| `md` (meta delete) | `C` (CAS), `q` (quiet), `O` (opaque), `k` (key) |
| `ma` (meta arithmetic) | `D` (delta), `J` (initial), `N` (auto-create TTL), `q` (quiet), `O` (opaque), `k` (key), `v` (value), `c` (CAS), `M` (mode: I/D) |
| `mn` (meta noop) | `O` (opaque) |
| `me` (meta debug) | `O`, `k`, `q` (stub: always returns `EN`) |
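As a rough sketch, a meta command line splits into a command, a key, and trailing flag tokens. `parse_meta` is a hypothetical helper, far simpler than the real parser, which validates each flag strictly:

```python
def parse_meta(line: str):
    """Illustrative tokenizer for meta command lines.
    Returns (command, key, remaining tokens)."""
    parts = line.split()
    cmd = parts[0]            # one of: mg ms md ma mn me
    if cmd == "mn":           # meta noop takes no key
        return cmd, None, parts[1:]
    # For ms, the first token after the key is the data length;
    # this sketch simply leaves it at the front of the token list.
    return cmd, parts[1], parts[2:]
```
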
Unsupported Meta Features
- Stale items (`N`/vivify on `mg`, `I`/invalidate on `md`)
- Recache (`R` flag on `mg`)
- Win/lose/stale flags (`W`, `X`, `Z`)
- Base64 keys (`b` flag)
- `me` debug data (stub only — always returns `EN`)
Any unsupported flag is rejected with `CLIENT_ERROR unsupported meta flag '<flag>'`.
Proxy hint flags `P` and `L` are silently accepted and ignored on all meta commands.
Next Steps
- ASCII Protocol — Standard text commands for simpler interactions
- Binary Protocol — Machine-oriented protocol for SDK clients
- Protocol Compatibility Reference — Complete flag tables and unsupported behaviors
- Known Limitations — Deferred meta features (stale items, base64 keys)
Binary Protocol
The binary protocol is the machine-oriented protocol used by Couchbase SDKs and some memcached client libraries. It uses a fixed-size 24-byte header followed by variable-length extras, key, and value fields.
Binary protocol clients connect to the same port (11210) as ASCII clients. RedCouch auto-detects binary protocol when the first byte is 0x80 (the binary request magic byte).
For the complete opcode table and compatibility details, see Protocol Compatibility Reference.
Overview
The binary protocol is based on the Couchbase memcached binary protocol. All 34 opcodes (0x00–0x22, excluding 0x1F) are parsed and dispatched. This includes quiet variants (which suppress success responses for pipelining) and key-returning variants.
Example: Raw Socket Binary Protocol (Python)
RedCouch's verified binary protocol test suite (tests/integration/test_binary_protocol.py) uses raw socket framing to exercise all 34 binary opcodes. Below is a simplified example:
import socket, struct
MAGIC_REQ, MAGIC_RES, HDR = 0x80, 0x81, 24
OP_SET, OP_GET = 0x01, 0x00
def build_req(opcode, extras=b"", key=b"", value=b"", cas=0):
    bl = len(extras) + len(key) + len(value)
    hdr = struct.pack(">BBHBBHIIQ", MAGIC_REQ, opcode, len(key),
                      len(extras), 0, 0, bl, 0, cas)
    return hdr + extras + key + value
def read_resp(sock):
    # Read exactly 24 header bytes (a single recv may return fewer)
    hdr = b""
    while len(hdr) < HDR:
        chunk = sock.recv(HDR - len(hdr))
        if not chunk:
            raise ConnectionError("socket closed mid-header")
        hdr += chunk
    magic, op, kl, el, dt, st, bl, opq, cas = struct.unpack(">BBHBBHIIQ", hdr)
    body = b""
    while len(body) < bl:
        body += sock.recv(bl - len(body))
    return st, cas, body[el + kl:]  # strip extras and key, keep the value
sock = socket.create_connection(("127.0.0.1", 11210), timeout=3)
# SET key1 = b"hello" with flags=0, expiry=0
extras = struct.pack(">II", 0, 0) # flags (4 bytes) + expiry (4 bytes)
sock.sendall(build_req(OP_SET, extras=extras, key=b"key1", value=b"hello"))
status, cas, _ = read_resp(sock)
assert status == 0 # success
# GET key1
sock.sendall(build_req(OP_GET, key=b"key1"))
status, cas, value = read_resp(sock)
assert status == 0 and value == b"hello"
sock.close()
Supported Binary Operations
| Opcode Family | Opcodes | Notes |
|---|---|---|
| GET | GET, GETQ, GETK, GETKQ | Quiet variants suppress success. Key variants echo key. |
| SET/ADD/REPLACE | SET, SETQ, ADD, ADDQ, REPLACE, REPLACEQ | CAS-checked. Flags, expiry, binary-safe values preserved. |
| DELETE | DELETE, DELETEQ | CAS-checked. |
| INCREMENT/DECREMENT | INCR, INCRQ, DECR, DECRQ | Unsigned 64-bit with initial-value and miss rules. |
| APPEND/PREPEND | APPEND, APPENDQ, PREPEND, PREPENDQ | Requires existing item. |
| TOUCH | TOUCH | Updates TTL on existing items. |
| GAT/GATQ | GAT, GATQ | Get-and-touch with TTL update. |
| FLUSH | FLUSH, FLUSHQ | Namespace-isolated: only rc:* keys. |
| NOOP | NOOP | Pipeline terminator. |
| QUIT | QUIT, QUITQ | Graceful close. |
| VERSION | VERSION | Returns RedCouch 0.1.0. |
| STAT | STAT | General stats. |
| VERBOSITY | VERBOSITY | Accepted, no effect. |
| SASL AUTH | SASL_LIST_MECHS, SASL_AUTH, SASL_STEP | Stub: auth always succeeds. |
SASL Authentication
Binary protocol clients (especially Couchbase SDKs) often require SASL authentication before sending data commands. RedCouch supports the SASL handshake but with stub-only authentication — all credentials are accepted:
- `SASL_LIST_MECHS` → Returns `PLAIN`
- `SASL_AUTH` → Always succeeds regardless of username/password
- `SASL_STEP` → Always succeeds
This allows SASL-requiring clients to connect without code changes. See Known Limitations for security implications.
Quiet Commands and Pipelining
Quiet variants (e.g., GETQ, SETQ, DELETEQ) suppress success responses, enabling efficient pipelining. Send a batch of quiet operations followed by a NOOP — the NOOP response signals that all preceding quiet operations have been processed.
RedCouch batches all responses from a read cycle into a single write_all() call for efficiency.
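The pipelining pattern can be illustrated by framing three quiet SETs followed by a NOOP, using the same 24-byte header layout as the raw-socket example earlier on this page. The `frame` helper here is hypothetical:

```python
import struct

def frame(opcode: int, key: bytes = b"", value: bytes = b"",
          extras: bytes = b"") -> bytes:
    """Build one binary request: 24-byte header + extras + key + value."""
    body_len = len(extras) + len(key) + len(value)
    hdr = struct.pack(">BBHBBHIIQ", 0x80, opcode, len(key),
                      len(extras), 0, 0, body_len, 0, 0)
    return hdr + extras + key + value

OP_SETQ, OP_NOOP = 0x11, 0x0A

# Three quiet SETs (no success responses), then a NOOP whose
# response signals that the whole batch has been processed.
batch = b"".join(
    frame(OP_SETQ, key=f"k{i}".encode(), value=b"v",
          extras=struct.pack(">II", 0, 0))   # flags=0, expiry=0
    for i in range(3)
) + frame(OP_NOOP)
```

The client writes `batch` in one send and then waits for a single NOOP response (plus any error responses the quiet operations produced).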
Next Steps
- ASCII Protocol — Human-readable protocol for debugging and simple clients
- Meta Protocol — Flag-based meta commands with fine-grained control
- Protocol Compatibility Reference — Complete specification-level command tables
Protocol Compatibility Reference
This is the definitive reference for RedCouch's protocol support. For tutorial-style examples, see the User Guide.
RedCouch implements three memcached protocol surfaces over a single TCP listener (port 11210). Protocol detection is automatic: the first byte of each connection determines the protocol.
| First Byte | Protocol |
|---|---|
| `0x80` | Binary (Couchbase memcached) |
| Printable ASCII | Text (ASCII or meta commands) |
| `\r` or `\n` | Skipped; next byte determines protocol |
Binary Protocol
Based on the Couchbase memcached binary protocol. All 34 opcodes (0x00–0x22, excluding 0x1F) are parsed and dispatched.
Supported Operations
| Opcode Family | Opcodes | Status | Notes |
|---|---|---|---|
| GET | GET (0x00), GETQ (0x09), GETK (0x0C), GETKQ (0x0D) | ✅ Supported | Returns value, flags, CAS. Quiet variants suppress success responses. Key-inclusive variants echo key. |
| SET/ADD/REPLACE | SET (0x01), SETQ (0x11), ADD (0x02), ADDQ (0x12), REPLACE (0x03), REPLACEQ (0x13) | ✅ Supported | CAS-checked mutations. Flags, expiry, and binary-safe values preserved. |
| DELETE | DELETE (0x04), DELETEQ (0x14) | ✅ Supported | CAS-checked. Returns item CAS on success. |
| INCREMENT/DECREMENT | INCR (0x05), INCRQ (0x15), DECR (0x06), DECRQ (0x16) | ✅ Supported | Unsigned 64-bit semantics with initial-value and miss rules. See Limitations. |
| APPEND/PREPEND | APPEND (0x0E), APPENDQ (0x19), PREPEND (0x0F), PREPENDQ (0x1A) | ✅ Supported | Requires existing item. |
| TOUCH | TOUCH (0x1C) | ✅ Supported | Updates TTL on existing items. |
| GAT/GATQ | GAT (0x1D), GATQ (0x1E) | ✅ Supported | Get-and-touch with TTL update. Key included in response. |
| FLUSH | FLUSH (0x08), FLUSHQ (0x18) | ✅ Supported | Namespace-isolated: flushes only RedCouch keys (rc:*), never FLUSHDB. |
| NOOP | NOOP (0x0A) | ✅ Supported | Pipeline terminator. |
| QUIT | QUIT (0x07), QUITQ (0x17) | ✅ Supported | Graceful connection close. |
| VERSION | VERSION (0x0B) | ✅ Supported | Returns RedCouch 0.1.0. |
| STAT | STAT (0x10) | ✅ Supported | General stats (pid, uptime, version, cmd_get, cmd_set, curr_items). |
| VERBOSITY | VERBOSITY (0x1B) | ✅ Supported | Accepted and acknowledged; no runtime effect. |
| SASL AUTH | SASL_LIST_MECHS (0x20), SASL_AUTH (0x21), SASL_STEP (0x22) | ⚠️ Stub | Lists "PLAIN". Auth always succeeds — no credential enforcement. |
| Unknown | Any unrecognized opcode | ✅ Handled | Returns Unknown command (status 0x0081). |
Unsupported Binary Behaviors
| Feature | Status | Reason |
|---|---|---|
| STAT groups (settings, items, slabs, conns) | ❌ Not supported | Returns empty terminator for sub-groups |
| Dynamic SASL credential enforcement | ❌ Not implemented | Stub-only: auth always succeeds |
| UDP transport | ❌ Not supported | TCP only |
| Couchbase bucket/vbucket management | ❌ Not supported | Outside bridge scope |
ASCII Text Protocol
Based on the memcached ASCII text protocol. All 19 standard commands are implemented.
Supported Commands
| Command | Syntax | Status |
|---|---|---|
| `set` | `set <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n` | ✅ |
| `add` | `add <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n` | ✅ |
| `replace` | `replace <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n` | ✅ |
| `cas` | `cas <key> <flags> <exptime> <bytes> <cas_unique> [noreply]\r\n<data>\r\n` | ✅ |
| `append` | `append <key> <bytes> [noreply]\r\n<data>\r\n` | ✅ |
| `prepend` | `prepend <key> <bytes> [noreply]\r\n<data>\r\n` | ✅ |
| `get` | `get <key> [<key> ...]` | ✅ |
| `gets` | `gets <key> [<key> ...]` | ✅ |
| `gat` | `gat <exptime> <key> [<key> ...]` | ✅ |
| `gats` | `gats <exptime> <key> [<key> ...]` | ✅ |
| `delete` | `delete <key> [noreply]` | ✅ |
| `incr` | `incr <key> <value> [noreply]` | ✅ |
| `decr` | `decr <key> <value> [noreply]` | ✅ |
| `touch` | `touch <key> <exptime> [noreply]` | ✅ |
| `flush_all` | `flush_all [delay] [noreply]` | ✅ |
| `version` | `version` | ✅ |
| `stats` | `stats [group]` | ✅ |
| `verbosity` | `verbosity <level> [noreply]` | ✅ |
| `quit` | `quit` | ✅ |
Unsupported ASCII Behaviors
| Feature | Status | Reason |
|---|---|---|
| Authentication | ❌ Not supported | No SASL/auth in ASCII text mode (per memcached spec) |
| `flush_all` delay | ⚠️ Accepted, not honored | Delay parameter parsed but flush is immediate |
| `noreply` on malformed input | ⚠️ Partial | `CLIENT_ERROR` may still be emitted if `noreply` cannot be parsed before the error |
Meta Protocol
Meta commands use two-letter prefixes and a flag-based system, routed through the ASCII text-protocol path.
Supported Meta Commands
| Command | Syntax | Status | Supported Flags |
|---|---|---|---|
| `mg` (meta get) | `mg <key> [flags]` | ✅ | v, c, f, k, s, O, q, t, T |
| `ms` (meta set) | `ms <key> <datalen> [flags]\r\n<data>\r\n` | ✅ | F, T, C, q, O, k, M (mode: S/E/A/P/R) |
| `md` (meta delete) | `md <key> [flags]` | ✅ | C, q, O, k |
| `ma` (meta arithmetic) | `ma <key> [flags]` | ✅ | D, J, N, q, O, k, v, c, M (mode: I/D) |
| `mn` (meta noop) | `mn [flags]` | ✅ | O |
| `me` (meta debug) | `me <key> [flags]` | ⚠️ Stub | Returns `EN`. Flags O, k, q accepted. |
Unsupported Meta Behaviors
| Feature | Status | Reason |
|---|---|---|
| Stale items (vivify/invalidate) | ❌ | Requires stale item concept not in item model |
| Recache (`R` flag on `mg`) | ❌ | Requires stale item concept |
| Win/lose/stale flags (`W`, `X`, `Z`) | ❌ | Requires stale item concept |
| Base64 keys (`b` flag) | ❌ | Not implemented |
| `me` debug data | ❌ Stub | Always returns `EN` (not found) |
Item Model
All three protocols share the same underlying item model stored in Redis:
| Property | Implementation |
|---|---|
| Storage shape | Hash-per-item: `HSET <redis_key> v <value> f <flags> c <cas>` |
| Key namespace | Client key `foo` → Redis key `rc:foo` |
| Reserved keys | System keys under `redcouch:sys:*` (e.g., `redcouch:sys:cas_counter`) |
| CAS | Monotonic counter via `INCR redcouch:sys:cas_counter` |
| Binary-safe values | Full binary round-trip via Lua hex encode/decode |
| Flags | 32-bit unsigned, stored as decimal string |
| Expiry | 0 = no expiry, ≤2592000 = relative seconds, >2592000 = absolute Unix timestamp |
| Atomic mutations | All CAS-sensitive operations use server-side Lua scripts |
| Flush scope | `FLUSH` operates only on `rc:*` keys, never `FLUSHDB` |
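The expiry convention in the item model can be expressed as a small function. This is an illustrative sketch of the rule, not the module's code:

```python
THIRTY_DAYS = 2_592_000  # memcached's relative/absolute expiry cutoff

def expiry_to_ttl(exptime, now):
    """Interpret a memcached exptime: 0 = no expiry, values up to
    30 days are relative seconds, larger values are absolute Unix
    timestamps. Returns remaining TTL in seconds, or None for
    "no expiry". (Illustrative sketch.)"""
    if exptime == 0:
        return None
    if exptime <= THIRTY_DAYS:
        return exptime
    return max(exptime - now, 0)
```
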
Architecture
RedCouch is a Redis module (cdylib) that exposes a memcached-compatible TCP endpoint backed by Redis data structures. It runs inside the Redis process as a loaded module, sharing the same address space and data access as Redis itself.
This chapter explains the architectural decisions behind RedCouch, how requests flow through the system, and why the design makes the trade-offs it does.
High-Level Data Flow
┌─────────────────────┐ TCP :11210 ┌──────────────────────────────────┐
│ Memcached Client │ ◄───────────────► │ RedCouch Module (in Redis) │
│ (binary or ASCII) │ │ │
└─────────────────────┘ │ ┌───────────────────────────┐ │
│ │ TCP Listener Thread │ │
│ │ → accept() │ │
│ │ → spawn handler thread │ │
│ └───────────────────────────┘ │
│ │
│ ┌───────────────────────────┐ │
│ │ Connection Handler Thread │ │
│ │ → protocol detection │ │
│ │ → parse request │ │
│ │ → execute via Redis API │ │
│ │ → encode response │ │
│ └───────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────────┐ │
│ │ Redis Data (hashes) │ │
│ │ rc:<key> → {v, f, c} │ │
│ │ redcouch:sys:* │ │
│ └───────────────────────────┘ │
└──────────────────────────────────┘
Request Lifecycle
A typical SET request follows this path:
1. Accept — The listener thread accepts the TCP connection and spawns a handler thread.
2. Detect — The handler reads the first byte: `0x80` routes to binary, printable ASCII routes to text/meta.
3. Parse — The protocol-specific parser decodes the request into an internal representation (opcode, key, value, flags, extras).
4. Namespace — The client key is prefixed with `rc:` to form the Redis key.
5. Execute — A Lua script runs atomically on the Redis side: it increments the CAS counter, hex-encodes the value, and stores the hash fields (`v`, `f`, `c`). If the request has a CAS token, the script checks it before mutating.
6. Respond — The handler builds a protocol-specific response (with the new CAS token) and writes it to the socket.
7. Batch — For the binary protocol, multiple responses from a single read cycle are buffered and flushed in one `write_all()` call.
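The Namespace and Execute steps can be modeled in miniature: prefix the key, bump the global CAS counter, and write the three hash fields. This toy Python model (`model_set` is hypothetical) mirrors the effect of the Lua script, not its implementation:

```python
import binascii

def model_set(db: dict, client_key: str, value: bytes, flags: int) -> int:
    """Toy model of the SET path. A plain dict stands in for Redis;
    the real module performs these steps atomically in one Lua script."""
    # Monotonic CAS counter, as held at redcouch:sys:cas_counter
    db["redcouch:sys:cas_counter"] = db.get("redcouch:sys:cas_counter", 0) + 1
    cas = db["redcouch:sys:cas_counter"]
    db[f"rc:{client_key}"] = {
        "v": binascii.hexlify(value).decode(),  # hex for binary safety
        "f": str(flags),
        "c": str(cas),
    }
    return cas
```
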
Module Structure
| File | Purpose |
|---|---|
| `src/lib.rs` | Module entry point, TCP listener, connection handler, Redis command dispatch, Lua scripts, stats tracking |
| `src/protocol.rs` | Binary protocol types: opcode enum, request parser, response encoder, frame builder |
| `src/ascii.rs` | ASCII text protocol parser and command types (19 commands), meta prefix routing |
| `src/meta.rs` | Meta protocol parser, flag validation, command types (mg/ms/md/ma/mn/me) |
The crate compiles as a `cdylib` — a C-compatible dynamic library that Redis loads at runtime. The `#[redis_module]` macro registers the module with Redis and triggers `redcouch_init()`, which spawns the TCP listener.
Threading Model
RedCouch uses a thread-per-connection model:
- Main thread — The Redis server thread. Module init registers the module and spawns the listener. RedCouch does not block or interfere with the main Redis event loop.
- Listener thread — A single background thread that calls `accept()` on `127.0.0.1:11210` in a loop. Each accepted connection is handed off to a new thread.
- Connection threads — One OS thread per accepted connection (up to `MAX_CONNECTIONS = 1024`). Each thread owns its socket and processes requests sequentially — there is no async I/O or event multiplexing within a connection.
Why thread-per-connection?
The thread-per-connection model was chosen for simplicity and correctness:
- Simple ownership — Each thread owns its socket, read buffer, and write buffer. No shared mutable state between connections.
- Sequential request processing — Memcached protocol requests on a single connection are processed in order, which matches the protocol's expectation.
- Bounded resource usage — The 1,024 connection limit caps thread count. Connections beyond this limit are immediately dropped with no response.
The trade-off is that each connection consumes an OS thread's stack (~8 MiB default on Linux). At the 1,024 connection limit, this is ~8 GiB of virtual memory (though actual resident memory is much lower). For RedCouch's intended use case as a migration bridge, this is acceptable.
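A minimal sketch of the thread-per-connection shape, in Python rather than the module's Rust (a byte echo stands in for the real parse/execute/respond logic):

```python
import socket
import threading

def handle(conn):
    """One thread per connection: the thread owns its socket and
    processes requests strictly in order (echo stands in for the
    protocol logic)."""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

def serve_forever(listener):
    """Accept loop: each accepted connection gets a fresh thread."""
    while True:
        conn, _addr = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

In the real module the accept loop also enforces the 1,024-connection cap, immediately dropping connections beyond the limit.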
Redis access serialization
Each connection thread acquires a ThreadSafeContext lock to execute Redis commands. This lock serializes access to the Redis data structures across all connection threads — only one thread can execute a Redis command at a time. This is the primary concurrency bottleneck: benchmark data shows throughput plateaus around 4 concurrent clients (~60k ops/s) and reaches a ceiling of ~35k ops/s at 16+ clients for contended workloads.
Protocol Detection
On each new connection, the first byte determines the protocol:
- 0x80 → Binary protocol path (handle_binary_conn)
- Printable ASCII → Text protocol path (handle_ascii_conn), which internally routes meta commands (mg/ms/md/ma/mn/me prefixes) to the meta handler
- \r/\n → Skipped; next byte re-evaluated
Protocol is fixed for the lifetime of the connection. A single connection cannot switch between binary and ASCII mode.
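The dispatch reduces to a classification of the first byte. A hedged Python sketch (the function name is illustrative; RedCouch performs this in Rust):

```python
def detect_protocol(first_byte: int) -> str:
    """Classify a connection's first byte, per RedCouch's dispatch rules."""
    if first_byte == 0x80:            # binary protocol magic byte
        return "binary"
    if first_byte in (0x0D, 0x0A):    # \r or \n: skip, re-evaluate next byte
        return "skip"
    if 0x20 <= first_byte <= 0x7E:    # printable ASCII: text/meta path
        return "ascii"
    return "invalid"                  # bad magic byte: connection is closed

print(detect_protocol(0x80))      # binary
print(detect_protocol(ord("g")))  # ascii  (e.g. "get ...")
```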
Storage Model
Each memcached item is stored as a Redis hash with three fields:
| Field | Content | Example |
|---|---|---|
| v | Item value (hex-encoded for binary safety) | 48656c6c6f ("Hello") |
| f | Flags (32-bit unsigned, decimal string) | 0 |
| c | CAS token (monotonic counter value) | 42 |
Key mapping: Client key foo → Redis key rc:foo. This prefix-based namespace prevents collisions with other Redis data. You can inspect RedCouch items directly:
redis-cli HGETALL rc:foo
# Returns: v, <hex-encoded value>, f, <flags>, c, <cas>
System keys: The monotonic CAS counter lives at redcouch:sys:cas_counter. Flush operations scan only rc:* keys, leaving system keys and all non-RedCouch data untouched.
Why hashes instead of strings?
A memcached item has three properties: value, flags, and CAS token. Using a Redis hash stores all three atomically under one key. The alternative — separate keys for each property — would require multi-key transactions and complicate expiry handling. The hash approach also makes items self-describing and inspectable via standard Redis tools.
Why hex encoding?
The redis-module crate's RedisString type requires valid UTF-8 for string operations. Memcached values are arbitrary bytes — a JPEG, a Protocol Buffer, or a compressed payload may contain any byte sequence. Hex encoding guarantees the value stored in Redis is valid ASCII, avoiding panics on non-UTF-8 data. The cost is 2× storage for values and CPU time for encode/decode, but it ensures correctness for all payloads.
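The trade-off is easy to see from Python, whose bytes.hex()/bytes.fromhex() mirror the encoding RedCouch performs:

```python
value = b"\xff\xd8\xff\xe0Hello"      # arbitrary bytes (not valid UTF-8)
stored = value.hex()                   # what goes into the hash's v field
print(stored)                          # 'ffd8ffe048656c6c6f'
print(len(stored) == 2 * len(value))   # True -- the 2x storage cost
print(bytes.fromhex(stored) == value)  # True -- lossless round trip
```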
Atomicity and Lua Scripts
All CAS-sensitive and read-modify-write operations use server-side Lua scripts executed via redis.call(). This includes:
- Store with CAS check — SET/ADD/REPLACE with a CAS token verify the current CAS before mutating
- Counter operations — INCREMENT/DECREMENT read the current value, compute the new value, and store it atomically
- Append/Prepend — Read the existing value, concatenate, and store back in one script
- Delete with CAS — Verify CAS before removing the key
Each Lua script executes atomically on the Redis side — no other command can interleave. This eliminates the class of check-then-set race conditions that would arise from multi-step operations using separate Redis commands.
The CAS counter itself is a simple INCR redcouch:sys:cas_counter within each Lua script. Every mutation generates a new, globally unique CAS value.
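The check-then-set logic can be modeled in Python — a dict and a lock stand in for Redis and the script's atomicity; all names here are illustrative, not RedCouch's actual Lua:

```python
import threading

store = {}               # stands in for the rc:* hashes
cas_counter = 0          # stands in for redcouch:sys:cas_counter
lock = threading.Lock()  # stands in for a Lua script's atomic execution

def set_item(key, value):
    """Unconditional store: every mutation mints a fresh CAS token."""
    global cas_counter
    with lock:
        cas_counter += 1                  # INCR the global counter
        store[key] = {"v": value, "c": cas_counter}
        return cas_counter

def cas_set(key, value, expected_cas):
    """Store only if the current CAS token matches. The check and the
    write happen under one lock, mirroring the script's atomicity."""
    global cas_counter
    with lock:
        item = store.get(key)
        if item is None or item["c"] != expected_cas:
            return None                   # CAS mismatch: rejected
        cas_counter += 1
        store[key] = {"v": value, "c": cas_counter}
        return cas_counter                # new, globally unique token

tok = set_item("foo", b"one")
print(cas_set("foo", b"two", tok))    # a fresh token -- write accepted
print(cas_set("foo", b"three", tok))  # None -- stale token rejected
```

Because the lookup and the mutation sit inside one critical section, no interleaving can observe the key between the check and the set — the same property the Lua scripts provide on the Redis side.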
Response Batching
Binary protocol clients often send multiple requests before reading responses (pipelining). RedCouch collects all responses from a single read cycle into a write buffer and flushes them in a single write_all() call. This reduces syscall overhead from O(responses) to O(1) per batch, which is measurable at high throughput.
ASCII protocol responses are written individually since ASCII clients typically send one command at a time.
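The batching idea reduces to joining the encoded replies and issuing one write. A minimal Python sketch (RedCouch itself does this in Rust with write_all()):

```python
import socket

def flush_batch(sock, responses):
    """Send all responses from one read cycle in a single write.

    `responses` is a list of already-encoded reply frames; joining them and
    writing once cuts syscalls from O(len(responses)) to O(1) per batch.
    """
    if responses:
        sock.sendall(b"".join(responses))

# Demo over a local socket pair
a, b = socket.socketpair()
flush_batch(a, [b"STORED\r\n", b"STORED\r\n"])
print(b.recv(64))  # b'STORED\r\nSTORED\r\n'
a.close()
b.close()
```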
Dependencies
| Crate | Version | Purpose |
|---|---|---|
| redis-module | 2.0.7 | Redis module API bindings — provides Context, ThreadSafeContext, module registration macros, and the RedisString type |
| bytes | 1 | Byte buffer management for protocol parsing — used for zero-copy request body handling |
| byteorder | 1 | Big-endian integer parsing for binary protocol header fields |
| thiserror | 2.0.12 | Derive macro for error type definitions |
The dependency set is intentionally minimal. No async runtime (tokio, async-std) is used — the thread-per-connection model with blocking I/O keeps the dependency tree small and the build fast.
Configuration
All runtime parameters in this release are compile-time constants defined in src/lib.rs. There are no dynamic configuration options, no config file, and no Redis module arguments. To change a parameter, modify the constant in the source code and rebuild.
This is a deliberate simplification for the initial release. Future versions may introduce MODULE LOAD arguments or Redis config directives.
Runtime Defaults
| Parameter | Value | Constant | Rationale |
|---|---|---|---|
| Bind address | 127.0.0.1:11210 | DEFAULT_BIND_ADDR | Loopback-only prevents accidental network exposure |
| Max connections | 1,024 | MAX_CONNECTIONS | Caps thread count; each connection uses one OS thread |
| Read timeout | 30 seconds | SOCKET_READ_TIMEOUT | Prevents idle connections from consuming threads indefinitely |
| Write timeout | 10 seconds | SOCKET_WRITE_TIMEOUT | Detects unresponsive clients |
| Max frame body | 20 MiB | MAX_BODY_LEN | Prevents memory exhaustion from oversized requests |
| Max key length | 250 bytes | MAX_KEY_LEN | Matches memcached specification limit |
| Max command line (ASCII) | 2,048 bytes | — | Prevents unbounded line reads |
| Key prefix | rc: | KEY_PREFIX | Namespaces RedCouch data within Redis |
| CAS counter key | redcouch:sys:cas_counter | CAS_COUNTER_KEY | Global monotonic counter for CAS tokens |
Changing defaults
To change a parameter, edit the corresponding constant in src/lib.rs and rebuild:
# Example: change bind address to all interfaces
# Edit src/lib.rs: const DEFAULT_BIND_ADDR: &str = "0.0.0.0:11210";
cargo build --release
Warning: Binding to 0.0.0.0 exposes the memcached endpoint to the network. RedCouch has no authentication enforcement (SASL is stub-only). Use Redis ACLs, firewall rules, or a reverse proxy if you need network-accessible memcached protocol access.
Storage Keys
| Key Pattern | Purpose | Example |
|---|---|---|
| rc:<key> | User data items (hash with v, f, c fields) | rc:session:abc |
| redcouch:sys:cas_counter | Monotonic CAS counter | Value: 42 |
| redcouch:sys:* | Reserved system namespace | — |
Inspecting data via redis-cli
RedCouch data is standard Redis data. You can inspect it directly:
# List all RedCouch keys
redis-cli KEYS 'rc:*'
# Inspect a specific item
redis-cli HGETALL rc:mykey
# Returns: v <hex-value> f <flags> c <cas>
# Check the CAS counter
redis-cli GET redcouch:sys:cas_counter
# Count RedCouch items
redis-cli EVAL "return #redis.call('KEYS', 'rc:*')" 0
Note: The value field (v) is hex-encoded. To see the actual value, decode it: a value of 48656c6c6f is the hex encoding of Hello.
Security Considerations
RedCouch's security model relies on network-level access control, not application-level authentication:
| Layer | Status | Recommendation |
|---|---|---|
| Bind address | Loopback only by default | Safe for single-host deployments. Change only if you have network-level controls. |
| SASL authentication | Stub — always succeeds | Do not rely on SASL for access control. Any client that can reach port 11210 can read and write data. |
| TLS/SSL | Not supported | Use a TLS-terminating proxy (e.g., stunnel) if you need encrypted transport. |
| Redis ACLs | Not applicable | RedCouch uses ThreadSafeContext to execute commands, bypassing Redis ACLs. |
| Connection limit | 1,024 max | Protects against resource exhaustion but is not rate limiting. |
Production deployment recommendations
- Keep the loopback bind unless your clients run on separate hosts.
- Use firewall rules (iptables, security groups) to restrict access to port 11210 if binding to 0.0.0.0.
- Monitor connection count — approaching 1,024 connections indicates you may need to scale out or optimize client connection pooling.
- Use Redis persistence (RDB/AOF) if RedCouch data needs to survive restarts. RedCouch data is standard Redis data and is included in Redis persistence snapshots.
Tuning Guidance
Connection limits
The 1,024 connection limit is per-module-instance. If your workload exceeds this, consider:
- Client-side connection pooling — Most memcached client libraries support connection pools. A pool of 10-50 connections per application instance is typical.
- Reducing idle connections — The 30-second read timeout automatically closes idle connections. Clients should reconnect transparently.
Timeout tuning
| Scenario | Recommendation |
|---|---|
| Low-latency workloads | Default timeouts (30s/10s) are conservative. Most operations complete in <1ms. |
| Long-running bulk loads | Default timeouts are fine — they apply per-read, not per-connection. |
| Unreliable network | Consider increasing read timeout if clients have intermittent connectivity. |
Memory considerations
RedCouch's memory usage has two components:
- Redis data — Hash-per-item storage with hex-encoded values. Hex encoding doubles the storage cost of values compared to raw bytes. A 1 KB value consumes ~2 KB of Redis memory.
- Thread stacks — Each connection thread uses ~8 MiB of virtual memory (OS default). At 1,024 connections, this is ~8 GiB virtual but typically <100 MiB resident.
Use Redis's maxmemory setting to bound data storage. RedCouch respects Redis's eviction policies for rc:* keys.
Known Limitations
This chapter documents the known limitations, behavioral differences from standard memcached, and areas intentionally deferred from the current release. Each section explains the cause, the impact on your workload, and any available workarounds.
Counter Precision (post-2^53)
Impact: Counter values are exact only for the range [0, 2^53). Beyond 2^53 (9,007,199,254,740,992), the behavior is precision loss / rounding rather than reliable wraparound.
Cause: Redis Lua scripts use IEEE 754 double-precision floats for numeric operations. Double-precision floats can represent integers exactly up to 2^53, but above that threshold, consecutive integers are no longer representable. The memcached binary protocol specifies unsigned 64-bit counter semantics with wraparound at 2^64; RedCouch cannot match that behavior exactly above 2^53.
Who is affected: Only workloads that increment counters past ~9 quadrillion. Typical use cases (rate limiters, hit counters, sequence numbers) will never reach this threshold.
Workaround: If you need exact 64-bit counter semantics, migrate the counter to a native Redis INCR key (which uses native 64-bit integers) and access it via a Redis client instead of through the memcached protocol.
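Because Python floats are the same IEEE 754 doubles Lua uses, the threshold can be demonstrated directly:

```python
EXACT_LIMIT = 2**53  # 9,007,199,254,740,992

print(float(EXACT_LIMIT) == float(EXACT_LIMIT + 1))    # True  -- 2^53+1 is not representable
print(float(EXACT_LIMIT - 1) != float(EXACT_LIMIT))    # True  -- still exact below 2^53
print(int(float(2**64 - 1)))  # rounded to 2^64 -- no reliable wraparound at 2^64-1
```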
Append/Prepend Value Growth
Impact: APPEND and PREPEND operations have cost proportional to the existing value size. Each operation reads the entire existing value (hex-encoded), concatenates the new data, and writes the result back.
Cause: The hex-encoding storage model means an append to a 100 KB value requires reading ~200 KB of hex data, concatenating, and writing ~200 KB + new data. This is inherent to the hash-per-item storage model.
Measured behavior: In stress testing, 10 keys reached ~61 KB each after ~950 appends of 64-byte chunks, with no errors or instability.
Workaround: For append-heavy workloads with large accumulated values, consider:
- Periodic key rotation — Start writing to a new key periodically and merge when needed.
- Size monitoring — Track value sizes and set alerts if they grow beyond expected bounds.
- Native Redis migration — Use Redis's native APPEND command (which operates on raw bytes without hex encoding) by migrating the relevant keys to direct Redis access.
Performance Hot Paths
The following are identified performance costs that remain in the current release. They are documented here so operators and contributors can understand where time is spent:
- Lua hex encode/decode — Every GET and every binary-value mutation passes through Lua string.format('%02x') for encoding and a manual hex decode in Rust for decoding. This is the largest single overhead compared to native Redis operations. Benchmark data shows ~18% throughput reduction on GET hits compared to native Redis GET. This is the correctness-first design: it avoids redis-module UTF-8 panics on arbitrary binary payloads.
- Per-request ThreadSafeContext lock — Each Redis command acquires the ThreadSafeContext lock, which serializes Redis access across all connection threads. This is the primary concurrency bottleneck. Benchmark data shows throughput plateaus at ~4 clients (~60k ops/s) and hits a ceiling of ~35k ops/s at 16+ clients for contended workloads. See Benchmarks & Performance for measured data.
- Per-request allocations — Vec allocations for key namespacing (the rc: prefix), hex conversion buffers, and response assembly. These are small compared to the Lua and lock overhead but contribute to the total per-request cost.
Startup / Bind Caveat
Impact: The background TCP listener thread may log readiness before the bind has definitively succeeded.
Cause: The listener thread starts, logs its intent to bind, and then calls bind(). If another process holds port 11210, the bind fails and the listener thread exits — but Redis itself continues running normally without the memcached endpoint.
Detection:
# After loading the module, verify the port is reachable
nc -z 127.0.0.1 11210 && echo "OK" || echo "FAILED"
# Check Redis logs for bind errors
redis-cli INFO ALL | grep -i redcouch
Workaround: Ensure no other process is using port 11210 before loading the module. If you need to change the port, modify DEFAULT_BIND_ADDR in src/lib.rs and rebuild.
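The detection step can also be scripted. A small Python health-check sketch (the function name is illustrative):

```python
import socket

def endpoint_listening(host="127.0.0.1", port=11210, timeout=1.0):
    """Return True if something accepts TCP connections on host:port.

    Useful as a post-load health check, since a failed bind leaves Redis
    running normally but without the memcached endpoint.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```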
SASL Authentication
Impact: SASL auth is stub-only. Any credentials are accepted. There is no access control on the memcached protocol endpoint.
Cause: The SASL stub exists solely to allow Couchbase SDK clients (which require a SASL handshake) to connect. Implementing real credential enforcement would require a credential store and a policy for credential management, which is outside the scope of a protocol bridge.
Who is affected: Anyone exposing port 11210 to untrusted networks.
Workaround: Rely on network-level access control (firewall rules, security groups, loopback-only bind) rather than application-level authentication. See Configuration — Security Considerations.
Malformed Traffic Behavior
RedCouch handles malformed requests with clean disconnects or error responses — not crashes or hangs:
| Scenario | Behavior | Connection |
|---|---|---|
| Bad magic byte | Connection closed (EOF) | Closed |
| Truncated header | Read timeout (30s), then close | Closed |
| Body length mismatch | Read timeout, then close | Closed |
| Zero-key GET | Error response (status 0x0001) | Stays open |
| Garbage then valid | Connection closed (EOF) | Closed |
| Oversized key (>250 bytes) | Error response (status 0x0004) | Stays open |
| Oversized frame (>20 MiB body) | Error response | Closed |
This behavior was validated in stress testing with all six malformed scenarios producing the expected clean handling with zero crashes.
Maximum Sizes
| Limit | Value | Source |
|---|---|---|
| Max key length | 250 bytes | Memcached spec limit |
| Max frame body | 20 MiB | MAX_BODY_LEN constant |
| Max command line (ASCII) | 2,048 bytes | Hardcoded parser limit |
| Max concurrent connections | 1,024 | MAX_CONNECTIONS constant |
Module Unload
RedCouch does not implement a module unload/deinit handler. The MODULE UNLOAD redcouch command is unverified and may leave the listener thread orphaned. The recommended approach to fully stop the module is to restart the Redis process.
Deferred Surfaces
The following features are explicitly not in the current release scope. They may be added in future versions:
| Feature | Reason for deferral |
|---|---|
| Meta protocol stale items (N/vivify, I/invalidate, R/recache, W/X/Z flags) | Requires a stale item concept not in the current item model |
| Base64 keys (b flag) | Not implemented; standard ASCII keys cover most use cases |
| UDP transport | TCP only; memcached UDP is rarely used in practice |
| Couchbase bucket/vbucket management | Outside the scope of a protocol bridge |
| Dynamic STAT groups (settings, items, slabs, conns) | Returns empty terminator; general stats are supported |
| Windows support | Redis modules require a Unix-like environment |
| Dynamic configuration | All parameters are compile-time constants |
| TLS/SSL | Use a TLS-terminating proxy for encrypted transport |
Tutorial: Python Client
This tutorial walks through using RedCouch from Python, starting with simple key-value operations and building up to CAS workflows, counters, and pipelining. All examples use the standard pymemcache library.
Prerequisites
- Redis 8+ running with RedCouch loaded (see Installation)
- Python 3.10+
- Install pymemcache:
pip install pymemcache
Note: RedCouch listens on port 11210, not the default memcached port (11211).
Step 1: Connect and Store a Value
from pymemcache.client.base import Client
# Connect to RedCouch (port 11210, not 11211)
client = Client(("127.0.0.1", 11210))
# Store a value
client.set("greeting", "Hello from Python!")
# Retrieve it
value = client.get("greeting")
print(value) # b'Hello from Python!'
The get() method returns bytes by default. To decode to a string, call .decode():
value = client.get("greeting")
print(value.decode("utf-8")) # 'Hello from Python!'
Step 2: Flags and Expiration with a Serde
Memcached flags are a 32-bit integer stored alongside the value. They're commonly used to indicate serialization format. In pymemcache, flags are managed by a serde (serializer/deserializer) object that you pass to the Client constructor.
Expiration is in seconds (up to 30 days) or a Unix timestamp (for longer durations).
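The 30-day cutoff comes from the memcached protocol: a relative expiration above 30 days is interpreted as a Unix timestamp, so longer TTLs must be sent as absolute times. A small helper (name illustrative) handles the conversion:

```python
import time

THIRTY_DAYS = 30 * 24 * 60 * 60  # 2,592,000 seconds

def normalize_expiry(ttl_seconds):
    """Values up to 30 days are sent as relative seconds; anything longer
    must be converted to an absolute Unix timestamp per the memcached protocol."""
    if ttl_seconds <= THIRTY_DAYS:
        return ttl_seconds
    return int(time.time()) + ttl_seconds

print(normalize_expiry(3600))  # 3600 -- one hour, sent as-is
print(normalize_expiry(THIRTY_DAYS + 1) > THIRTY_DAYS)  # True -- now a timestamp
```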
import json
class JSONSerde:
"""Serialize non-string values as JSON, using flags to track the format."""
def serialize(self, key, value):
if isinstance(value, str):
return value.encode("utf-8"), 0 # flag 0 = raw string
return json.dumps(value).encode("utf-8"), 1 # flag 1 = JSON
def deserialize(self, key, value, flags):
if flags == 0:
return value.decode("utf-8")
if flags == 1:
return json.loads(value)
return value # fallback: return raw bytes
# Create a client with the JSON serde
client = Client(("127.0.0.1", 11210), serde=JSONSerde())
# Store a Python dict — the serde serializes it as JSON with flag=1
data = {"user": "alice", "role": "admin"}
client.set("session:abc", data, expire=60)
# Retrieve — the serde deserializes based on the stored flag
result = client.get("session:abc")
print(result["user"]) # 'alice'
# Store a plain string — the serde uses flag=0
client.set("greeting", "hello")
print(client.get("greeting")) # 'hello' (str, not bytes)
You can also pass flags directly to set() to override the serde's flag value, but this is only needed for advanced use cases.
Note: The remaining steps use a plain client without a serde (as created in Step 1), so get() returns raw bytes. If you're using the serde client from this step, the returned values will be deserialized strings/objects instead.
Step 3: Add and Replace (Conditional Stores)
# add() only succeeds if the key does NOT exist
client.add("new-key", "first-write") # True — key created
client.add("new-key", "second-write") # False — key already exists
# replace() only succeeds if the key DOES exist
client.replace("new-key", "updated") # True
client.replace("missing", "value") # False — key not found
Step 4: Compare-and-Swap (CAS)
CAS prevents lost updates when multiple clients write to the same key. The workflow is: read the current CAS token, then write only if the token hasn't changed.
# gets() returns (value, cas_token)
value, cas = client.gets("greeting")
print(f"Value: {value}, CAS: {cas}")
# cas() writes only if the CAS token matches
success = client.cas("greeting", "Updated value", cas)
print(f"CAS update succeeded: {success}") # True
# A second CAS with the old token fails
success = client.cas("greeting", "Stale update", cas)
print(f"Stale CAS update: {success}") # False — token changed
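A common pattern built on gets()/cas() is a retry loop for read-modify-write updates. This sketch assumes a pymemcache-style client (gets() returning a (value, token) pair, cas() returning a truthy value on success):

```python
def cas_update(client, key, update_fn, max_retries=10):
    """Atomically apply update_fn to a value using CAS, retrying on conflict."""
    for _ in range(max_retries):
        value, cas = client.gets(key)
        if value is None:
            return False                 # key missing; nothing to update
        if client.cas(key, update_fn(value), cas):
            return True                  # token matched, write accepted
        # another writer got in first -- re-read and retry
    return False

# Usage (against a real client): cas_update(client, "greeting", lambda v: v + b"!")
```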
Step 5: Counters
# Initialize a counter
client.set("page-views", "0")
# Increment
new_val = client.incr("page-views", 1)
print(f"Views: {new_val}") # 1
# Increment by 10
new_val = client.incr("page-views", 10)
print(f"Views: {new_val}") # 11
# Decrement
new_val = client.decr("page-views", 3)
print(f"Views: {new_val}") # 8
Note: ASCII protocol incr/decr return NOT_FOUND for missing keys. Always initialize counters with set first.
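One way to handle first use is to pair incr() with an atomic add() as the initializer — a hedged sketch assuming pymemcache semantics (incr() returns None for a missing key; create the client with default_noreply=False so add() reports success accurately):

```python
def incr_or_init(client, key, delta=1, expire=60):
    """Increment a counter, initializing it atomically on first use."""
    new_val = client.incr(key, delta)
    if new_val is None:
        # add() only succeeds if the key doesn't exist, so exactly one
        # racing client wins the initialization
        if client.add(key, str(delta), expire=expire):
            return delta
        return client.incr(key, delta)  # lost the race; key now exists
    return new_val
```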
Step 6: Append and Prepend
client.set("log", "entry-1")
client.append("log", ",entry-2")
client.append("log", ",entry-3")
print(client.get("log")) # b'entry-1,entry-2,entry-3'
client.prepend("log", "header:")
print(client.get("log")) # b'header:entry-1,entry-2,entry-3'
Step 7: Multi-Get
# Store several keys
for i in range(5):
client.set(f"item:{i}", f"value-{i}")
# Fetch multiple keys in one round trip
results = client.get_many([f"item:{i}" for i in range(5)])
for key, value in results.items():
print(f"{key}: {value}")
Step 8: Touch and Get-and-Touch
# Extend TTL without fetching the value
client.touch("session:abc", 300) # Reset to 5 minutes
# Get value AND reset TTL in one operation
value = client.gat("session:abc", 600) # Get + set TTL to 10 minutes
Step 9: Verify Data in Redis
Because RedCouch stores data in Redis hashes under the rc: prefix, you can inspect the data directly while it's still live:
# From redis-cli (on the Redis port, not 11210)
redis-cli
# List RedCouch keys created by the steps above
KEYS rc:*
# rc:greeting, rc:session:abc, rc:log, rc:item:0, ...
# Inspect a specific item's internal structure
HGETALL rc:greeting
# 1) "v" ← hex-encoded value
# 2) "48656c6c6f2066726f6d20507974686f6e21"
# 3) "f" ← flags (32-bit integer)
# 4) "0"
# 5) "c" ← CAS token
# 6) "2"
This dual-access capability is the foundation of RedCouch's migration story — see the Migration Guide.
Step 10: Delete and Flush
Once you've finished inspecting the data, clean up:
# Delete a single key
client.delete("greeting")
# Flush all RedCouch keys (only rc:* keys, not entire Redis DB)
client.flush_all()
Runnable Example
A complete, runnable version of this tutorial is available at examples/python/basic_operations.py.
Next Steps
- Multi-Language Examples — Node.js, Go, and CLI examples
- Migration Guide — Step-by-step migration from memcached to Redis
- Use Cases — Real-world scenarios and patterns
- Binary Protocol — Machine-oriented protocol for SDK clients
Multi-Language Examples
RedCouch speaks standard memcached protocol, so any memcached client library works. This chapter shows working examples in several languages, all connecting to RedCouch on port 11210.
Prerequisite: Redis 8+ with RedCouch loaded and listening on 127.0.0.1:11210. See Installation.
Python (pymemcache)
The full Python walkthrough is in the Python Client Tutorial. Here's the quick version:
# pip install pymemcache
from pymemcache.client.base import Client
client = Client(("127.0.0.1", 11210))
client.set("py-key", "hello from python")
print(client.get("py-key")) # b'hello from python'
# CAS workflow
value, cas = client.gets("py-key")
client.cas("py-key", "updated", cas)
# Counters
client.set("hits", "0")
client.incr("hits", 1)
client.close()
Node.js (memjs)
memjs is a popular Node.js memcached client that supports binary protocol with SASL authentication — ideal for RedCouch since it exercises the binary protocol path including SASL.
// npm install memjs
const memjs = require('memjs');
// memjs uses binary protocol with SASL auth by default.
// RedCouch's SASL is stub-only, so any username/password works.
const client = memjs.Client.create('127.0.0.1:11210', {
username: 'any',
password: 'any'
});
async function main() {
// Store a value
await client.set('node-key', 'hello from node.js', { expires: 60 });
// Retrieve it
const { value, flags } = await client.get('node-key');
console.log(value.toString()); // 'hello from node.js'
// Delete
await client.delete('node-key');
client.close();
}
main().catch(console.error);
Note: memjs uses the memcached binary protocol, so this example exercises RedCouch's binary protocol path including SASL authentication handshake.
Go (gomemcache)
gomemcache by Brad Fitzpatrick (original memcached author) is the standard Go memcached client. It uses the ASCII text protocol.
// go get github.com/bradfitz/gomemcache/memcache
package main
import (
"fmt"
"github.com/bradfitz/gomemcache/memcache"
)
func main() {
// Connect to RedCouch
mc := memcache.New("127.0.0.1:11210")
// Store a value with 60-second TTL
mc.Set(&memcache.Item{
Key: "go-key",
Value: []byte("hello from go"),
Expiration: 60,
})
// Retrieve it
item, err := mc.Get("go-key")
if err != nil {
panic(err)
}
fmt.Println(string(item.Value)) // "hello from go"
// CAS workflow
item, _ = mc.Get("go-key")
item.Value = []byte("updated from go")
err = mc.CompareAndSwap(item)
fmt.Printf("CAS update: %v\n", err == nil)
// Counters
mc.Set(&memcache.Item{Key: "go-counter", Value: []byte("0")})
newVal, _ := mc.Increment("go-counter", 5)
fmt.Printf("Counter: %d\n", newVal) // 5
// Delete
mc.Delete("go-key")
}
PHP (ext-memcached)
PHP's Memcached extension uses the binary protocol via libmemcached. It works with RedCouch out of the box:
<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11210);
// Enable binary protocol (recommended for RedCouch)
$mc->setOption(Memcached::OPT_BINARY_PROTOCOL, true);
// Store and retrieve
$mc->set('php-key', 'hello from php', 60);
echo $mc->get('php-key') . "\n"; // 'hello from php'
// CAS workflow
$cas = null;
$value = $mc->get('php-key', null, Memcached::GET_EXTENDED);
$mc->cas($value['cas'], 'php-key', 'updated from php');
// Counters
$mc->set('php-counter', 0);
$mc->increment('php-counter', 5);
echo $mc->get('php-counter') . "\n"; // 5
CLI Tools (telnet / netcat)
The fastest way to test RedCouch is with telnet or nc. These connect via ASCII protocol:
# Connect
telnet 127.0.0.1 11210
# Store a value (set <key> <flags> <exptime> <bytes>)
set cli-key 0 0 9
cli-value
STORED
# Retrieve
get cli-key
VALUE cli-key 0 9
cli-value
END
# Use meta protocol
ms meta-key 5
hello
HD
mg meta-key v c
VA 5 c1
hello
# Quit
quit
See the ASCII Protocol and Meta Protocol guides for comprehensive command references.
Choosing a Client Library
| Language | Library | Protocol | Auth | Notes |
|---|---|---|---|---|
| Python | pymemcache | ASCII | No | Simple, well-maintained, pure Python |
| Node.js | memjs | Binary | SASL | Exercises binary+SASL path |
| Go | gomemcache | ASCII | No | By the original memcached author |
| PHP | ext-memcached | Binary | SASL | Via libmemcached; widely deployed |
| CLI | telnet / nc | ASCII | N/A | Quick testing and debugging |
| Any | Raw sockets | Binary | Optional | Full control; see Binary Protocol |
Tip: All these libraries connect to RedCouch exactly as they would to a standard memcached server — just change the port to 11210. No RedCouch-specific client code is needed.
Next Steps
- Python Client Tutorial — Deep-dive Python walkthrough with CAS, counters, pipelining
- Migration Guide — Step-by-step path from memcached to native Redis
- Use Cases — Real-world scenarios and patterns
Tutorial: Migration from Memcached to Redis
RedCouch is designed as a bridge — not a permanent proxy. This tutorial walks through the three-phase migration from a memcached-based architecture to native Redis, with RedCouch providing the zero-downtime transition layer.
The Three Phases
Phase 1: Bridge Phase 2: Dual-Access Phase 3: Native
┌──────────┐ ┌──────────┐ ┌──────────┐
│ App │─memcached──▶│ App │─memcached──▶ │ App │
│ (old) │ protocol │ (mixed) │ + redis │ (new) │─redis──▶ Redis
└──────────┘ └──────────┘ └──────────┘
│ │
▼ ▼
RedCouch ──▶ Redis RedCouch ──▶ Redis
Phase 1: Bridge — Drop-in Replacement
Step 1: Deploy RedCouch
Load RedCouch into your Redis 8+ server:
redis-server --loadmodule /path/to/libred_couch.so
RedCouch opens a memcached-compatible endpoint on port 11210.
Step 2: Repoint Your Clients
Change your memcached client configuration to point at RedCouch instead of your memcached server. The only change needed is the host and port:
Python (before):
client = Client(("memcached-host", 11211))
Python (after):
client = Client(("redis-host", 11210))
Go (before):
mc := memcache.New("memcached-host:11211")
Go (after):
mc := memcache.New("redis-host:11210")
No other code changes needed. Your application continues using its existing memcached client library.
Step 3: Verify
# Check RedCouch is responding
echo "version" | nc redis-host 11210
# VERSION RedCouch 0.1.0
# Check data is flowing through to Redis
redis-cli -h redis-host KEYS 'rc:*'
# Shows your memcached keys with rc: prefix
What You Get in Phase 1
- All memcached operations work transparently
- Data is stored in Redis hashes under the rc: key prefix
- You can inspect data via redis-cli alongside memcached access
- Redis replication can provide high availability
Phase 2: Dual-Access — Gradual Migration
In this phase, you migrate application code service-by-service from memcached clients to native Redis clients. Both access paths work simultaneously against the same data.
Understanding the Storage Model
RedCouch stores each memcached key as a Redis hash:
redis-cli HGETALL rc:session:abc
# 1) "v" ← hex-encoded value
# 2) "68656c6c6f"
# 3) "f" ← flags (32-bit integer)
# 4) "0"
# 5) "c" ← CAS token
# 6) "42"
Reading Data from Redis
To read RedCouch data natively, decode the hex value from the hash:
import redis
r = redis.Redis(host='redis-host', port=6379)
# Read a value stored by a memcached client
hex_value = r.hget("rc:session:abc", "v")
if hex_value:
value = bytes.fromhex(hex_value.decode())
print(value) # b'hello'
Migration Strategy: Service by Service
- Pick a service to migrate (start with read-heavy, non-critical services)
- Add a Redis client alongside the existing memcached client
- Read from Redis (via rc: hashes) while writes still go through memcached
- Switch writes to Redis once reads are verified
- Remove the memcached client from that service
- Repeat for the next service
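Step 3 of this strategy — reading from Redis while writes still flow through memcached — can be sketched as a dual-read wrapper; the client objects and helper name here are illustrative:

```python
def dual_read(redis_client, memcached_client, key):
    """Read a value written through RedCouch, preferring native Redis.

    Tries the rc:* hash first and decodes the hex-encoded v field; falls
    back to the memcached path if the native read misses or fails.
    """
    try:
        hex_value = redis_client.hget(f"rc:{key}", "v")
        if hex_value is not None:
            return bytes.fromhex(hex_value.decode())
    except Exception:
        pass  # fall back to the bridge during cutover
    return memcached_client.get(key)
```

Running both paths side by side for a while lets you compare results and build confidence before switching writes.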
Example: Migrating a Session Store
Before (memcached client):
from pymemcache.client.base import Client
mc = Client(("redis-host", 11210))
def get_session(session_id):
return mc.get(f"session:{session_id}")
def set_session(session_id, data, ttl=3600):
mc.set(f"session:{session_id}", data, expire=ttl)
After (native Redis client):
import redis
r = redis.Redis(host='redis-host', port=6379)
def get_session(session_id):
return r.get(f"session:{session_id}")
def set_session(session_id, data, ttl=3600):
r.setex(f"session:{session_id}", ttl, data)
Note: Once you switch to native Redis, you no longer need the rc: prefix or hex encoding — you're using Redis directly with full access to all Redis data structures.
Phase 3: Native — Remove RedCouch
Once all services have migrated to native Redis clients:
- Verify no memcached protocol traffic on port 11210
- Unload the module: restart Redis without the --loadmodule argument
- Clean up any remaining rc:* keys if desired:
redis-cli --scan --pattern 'rc:*' | xargs redis-cli DEL
What You Gain
- Full access to all Redis data structures (lists, sets, sorted sets, streams, etc.)
- Native Redis performance without translation overhead
- Redis Cluster support
- Redis pub/sub, Lua scripting, modules
- No memcached protocol parsing overhead
Timeline Expectations
| Phase | Typical Duration | Risk Level |
|---|---|---|
| Bridge | Hours to days | Low — transparent swap |
| Dual-Access | Weeks to months | Medium — requires code changes |
| Native | Minutes | Low — config change + cleanup |
Next Steps
- Python Client Tutorial — Detailed Python examples
- Multi-Language Examples — Node.js, Go, PHP, CLI
- Architecture — How RedCouch translates protocols
- Known Limitations — What to watch for during migration
Use Cases
This chapter describes real-world scenarios where RedCouch solves a concrete problem. Each case explains the setup, why RedCouch fits, and links to relevant reference material.
Session Store Migration
Scenario: Your web application stores sessions in memcached. You want to move to Redis for persistence and replication, but can't change all services at once.
Solution: Deploy RedCouch on Redis 8+. Repoint your memcached clients to port 11210. Sessions are now stored in Redis hashes — surviving restarts and replicating to replicas — while your application code stays unchanged.
Key operations used: set (store session), get (retrieve session), touch (extend TTL), delete (logout)
Why RedCouch fits:
- Zero application code changes during Phase 1
- Redis persistence (RDB/AOF) eliminates cold-cache risk after restarts
- Gradual migration to native Redis clients is possible per-service
See: Migration Guide for the step-by-step process.
Couchbase-to-Redis Migration
Scenario: You're migrating from Couchbase Server to Redis. Your applications use Couchbase SDKs that speak the memcached binary protocol for key-value operations.
Solution: RedCouch implements the Couchbase memcached binary protocol (all 34 opcodes, including SASL handshake). Point your Couchbase SDKs at RedCouch on port 11210. The SASL stub accepts any credentials, so no auth configuration changes are needed.
Key operations used: Binary GET/SET/DELETE with CAS, SASL auth handshake, quiet variants for pipelining
Why RedCouch fits:
- Full binary protocol compatibility with Couchbase SDK wire format
- SASL authentication stub lets SDKs connect without credential changes
- CAS tokens are real and atomically enforced via Redis Lua scripts
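To make the CAS semantics concrete, here is a minimal in-memory model of the compare-and-swap flow the bridge enforces. This is purely illustrative: the class and method names are invented for this sketch, and in RedCouch the check-and-store happens atomically inside Redis via a Lua script, not in client code.

```python
# Minimal in-memory model of memcached CAS semantics (illustrative only;
# RedCouch enforces this atomically inside Redis via Lua).
class CasStore:
    def __init__(self):
        self._data = {}     # key -> (value, cas_token)
        self._next_cas = 1

    def set(self, key, value):
        """Unconditional set: always assigns a fresh CAS token."""
        self._data[key] = (value, self._next_cas)
        self._next_cas += 1

    def gets(self, key):
        """Return (value, cas_token), or (None, None) on a miss."""
        return self._data.get(key, (None, None))

    def cas(self, key, value, cas_token):
        """Store only if the caller's token matches the current one."""
        current = self._data.get(key)
        if current is None or current[1] != cas_token:
            return False  # EXISTS / NOT_FOUND in protocol terms
        self._data[key] = (value, self._next_cas)
        self._next_cas += 1
        return True
```

A client that loses the race gets a failed CAS and must re-read the value (obtaining the new token) before retrying, which is exactly what Couchbase SDKs do.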
Caveat: Couchbase-specific features (buckets, vbuckets, views, N1QL) are not supported — only key-value operations. See Known Limitations.
Rate Limiter with Dual Access
Scenario: You have a rate limiter using memcached counters (incr/decr). You want to add Redis-based analytics that reads the same counter data in real time.
Solution: Use RedCouch as the bridge. Your rate limiter writes through memcached protocol (incr/decr on port 11210). Your analytics service reads the same counters via native Redis commands on port 6379.
# Rate limiter (memcached client, unchanged)
from pymemcache.client.base import Client

mc = Client(("redis-host", 11210))

def check_rate(client_ip, limit=100):
    key = f"rate:{client_ip}"
    count = mc.incr(key, 1)
    if count is None:  # key absent: first request in this window
        mc.set(key, "1", expire=60)
        count = 1
    return int(count) <= limit
# Analytics service (native Redis, new)
import redis

r = redis.Redis(host='redis-host', port=6379)

def get_rate_counts():
    counts = {}
    # SCAN iterates incrementally and avoids blocking Redis (unlike KEYS)
    for key in r.scan_iter("rc:rate:*"):
        hex_val = r.hget(key, "v")
        if hex_val:
            counts[key.decode()] = bytes.fromhex(hex_val.decode()).decode()
    return counts
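The hex handling in the analytics snippet can be factored into two small helpers. This is a sketch that assumes the value lives hex-encoded in the hash's v field, as shown in the examples in this chapter:

```python
def encode_rc_value(raw: bytes) -> str:
    """Hex-encode raw bytes the way they appear in an rc:* hash's 'v' field."""
    return raw.hex()

def decode_rc_value(hex_val) -> bytes:
    """Decode a 'v' field read back from Redis (str or bytes) to raw bytes."""
    if isinstance(hex_val, bytes):
        hex_val = hex_val.decode("ascii")
    return bytes.fromhex(hex_val)
```

Centralizing the codec in one place makes the eventual Phase 3 cleanup easier: once clients are native, these helpers are the only code that needs deleting.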
Why RedCouch fits:
- Counters work correctly through the memcached protocol
- Same data is readable via native Redis for analytics
- No changes needed to the rate limiter code
Cache Warm-Up Testing
Scenario: You want to validate that your application handles cache misses correctly after a restart, but your memcached setup doesn't support scripted warm-up.
Solution: Use RedCouch with Redis persistence. Load test data via redis-cli or a script, then verify your application reads it correctly through the memcached protocol.
# Pre-load test data directly into Redis
redis-cli HSET rc:config:feature-flags v "$(echo -n '{"dark_mode":true}' | xxd -p)" f 0 c 1
# Verify through memcached protocol
echo -e "get config:feature-flags\r" | nc 127.0.0.1 11210
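The xxd step can also be scripted from Python when loading larger warm-up datasets. A sketch, assuming the v/f/c hash-field layout shown in the redis-cli example above (the function name is ours, not part of RedCouch):

```python
import json

def build_warmup_fields(payload: dict) -> dict:
    """Build the hash fields for an rc:* key: hex-encoded value, flags, CAS seed."""
    raw = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    return {"v": raw.hex(), "f": "0", "c": "1"}

# With a redis-py client:
#   fields = build_warmup_fields({"dark_mode": True})
#   r.hset("rc:config:feature-flags", mapping=fields)
```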
Why RedCouch fits:
- Redis persistence means cache data survives restarts
- Data can be loaded via Redis commands or memcached protocol
- Useful for integration testing and staging environments
Protocol Debugging and Monitoring
Scenario: You need to debug what your memcached clients are actually sending and receiving, or monitor cache hit rates.
Solution: RedCouch exposes statistics through the memcached stats command. You can also inspect the underlying Redis data directly.
# Check stats via ASCII protocol
echo "stats" | nc 127.0.0.1 11210
# STAT cmd_get 1523
# STAT cmd_set 847
# STAT curr_items 312
# Inspect specific keys via redis-cli
redis-cli HGETALL rc:problematic-key
# Count total cached items
redis-cli --scan --pattern 'rc:*' | wc -l
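The stats reply shown above is line-oriented (STAT name value), so it parses into a dict in a few lines. A sketch for scripted monitoring (the parsing helper is ours, not part of RedCouch):

```python
def parse_stats(raw: str) -> dict:
    """Parse 'STAT <name> <value>' lines from the memcached stats command."""
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats

# Example:
#   reply = send_stats_command()          # however you read the socket
#   stats = parse_stats(reply)
#   print(stats["cmd_get"], stats["cmd_set"], stats["curr_items"])
```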
Why RedCouch fits:
- Memcached stats are available through standard protocol
- Redis gives you additional visibility (key inspection, memory usage, slow log)
- Dual access makes debugging transparent
Summary: When to Use RedCouch
| Scenario | Fit | Key Benefit |
|---|---|---|
| Migrating from memcached to Redis | ✅ Ideal | Zero-downtime, gradual migration |
| Migrating from Couchbase KV to Redis | ✅ Ideal | Binary protocol + SASL compatibility |
| Adding persistence to a memcached cache | ✅ Good | Redis RDB/AOF protects cache data |
| Dual-access (memcached + Redis) during migration | ✅ Good | Both protocols hit the same data |
| Long-term production memcached proxy | ⚠️ Acceptable | Works, but native Redis is faster |
| High-throughput, latency-critical cache | ❌ Not ideal | Translation overhead adds ~18% latency |
Next Steps
- Python Client Tutorial — Hands-on Python walkthrough
- Multi-Language Examples — Node.js, Go, PHP, CLI
- Benchmarks & Performance — Measured throughput data
- Architecture — How the translation layer works
Benchmarks & Performance
This chapter presents RedCouch's measured performance characteristics, explains what the numbers mean for your workload, and documents how to reproduce the benchmarks.
How to Read These Numbers
All benchmarks measure end-to-end operation latency — the time from sending a memcached protocol request to receiving the complete response, including network round-trip, protocol parsing, Lua script execution, and response encoding.
- ops/sec — Operations per second (higher is better)
- p50 µs — Median latency in microseconds (50th percentile)
- p95/p99 µs — Tail latency (95th/99th percentile)
- Errors — Number of failed operations (should be 0)
The benchmark harness uses Python asyncio clients sending individual operations in a tight loop. Throughput numbers reflect single-system capacity, not network-bound scenarios.
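The p50/p95/p99 columns in the tables below are reductions over per-operation latency samples. A minimal sketch of that reduction step, using the common nearest-rank method (this mirrors the general approach, not necessarily the harness's exact code):

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (microseconds)."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    # Nearest-rank: the ceil(p/100 * n)-th sample, 1-indexed.
    rank = max(1, -(-p * len(ordered) // 100))  # ceil without math.ceil
    return ordered[rank - 1]

def summarize(samples):
    """Produce the p50/p95/p99 triple reported in the benchmark tables."""
    return {"p50": percentile(samples, 50),
            "p95": percentile(samples, 95),
            "p99": percentile(samples, 99)}
```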
Throughput Baselines (Single Client)
Source: benchmarks/results/bench_20260402_144029.json (Redis 8.4.0, macOS arm64, local — no Docker).
| Workload | ops/sec | p50 µs | p95 µs | p99 µs | Errors |
|---|---|---|---|---|---|
| SET 64B | 31,694 | 29 | 39 | 77 | 0 |
| SET 1KB | 29,899 | 31 | 41 | 58 | 0 |
| SET 64KB | 8,074 | 118 | 151 | 181 | 0 |
| GET (hit) | 26,814 | 35 | 45 | 57 | 0 |
| GET (miss) | 36,782 | 26 | 33 | 41 | 0 |
| DELETE | 40,038 | 24 | 31 | 40 | 0 |
| INCREMENT | 31,883 | 30 | 39 | 56 | 0 |
| Mixed R/W | 14,635 | 65 | 80 | 95 | 0 |
| APPEND | 19,428 | 51 | 73 | 86 | 0 |
| TOUCH | 33,856 | 28 | 36 | 50 | 0 |
What this tells you: A single client can sustain ~30k ops/s for SET and ~27k ops/s for GET at sub-100µs p99 latency. GET misses are faster than hits because misses skip the Lua hex-decode step. APPEND is the slowest mutation because it reads, concatenates, and re-encodes the existing value.
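The APPEND cost follows directly from the storage layout: because values live hex-encoded in a hash field, an append cannot be a blind concatenation. An illustrative Python model of the decode/concat/re-encode sequence (the real work happens in a Lua script inside Redis):

```python
def hex_append(stored_hex: str, suffix: bytes) -> str:
    """Model of the APPEND path: three passes over the value, which is
    why APPEND benchmarks as the slowest mutation."""
    current = bytes.fromhex(stored_hex)   # 1. decode the existing value
    combined = current + suffix           # 2. concatenate raw bytes
    return combined.hex()                 # 3. re-encode for storage
```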
Concurrency Scaling (4 Clients)
| Workload | ops/sec | p50 µs | p95 µs |
|---|---|---|---|
| SET 64B | 62,574 | 60 | 100 |
| SET 1KB | 60,439 | 63 | 104 |
| GET (hit) | 50,929 | 76 | 120 |
| Mixed R/W | 30,138 | 129 | 184 |
What this tells you: Throughput roughly doubles from 1 to 4 clients, but latency also increases due to ThreadSafeContext lock contention. The optimal concurrency is ~4 clients; beyond that, the lock becomes the bottleneck and throughput plateaus (see Stress/Soak below).
Cross-System Comparison
To answer "how does RedCouch compare to Couchbase and native Redis?", all three systems were benchmarked under identical conditions using Docker containers with the same resource limits (256 MB maxmemory, no persistence).
Source: benchmarks/results/cross_system_20260403_092550.json (symmetric Docker topology).
Single Client (c=1)
| Operation | RedCouch ops/s | Redis OSS ops/s | Couchbase ops/s |
|---|---|---|---|
| SET 64B | 7,873 | 8,039 | 8,666 |
| GET (hit) | 6,776 | 8,294 | 8,360 |
| GET (miss) | 8,264 | 8,265 | 8,249 |
| DELETE | 8,421 | 8,402 | 8,471 |
Four Clients (c=4)
| Operation | RedCouch ops/s | Redis OSS ops/s | Couchbase ops/s |
|---|---|---|---|
| SET 64B | 20,710 | 19,236 | 19,757 |
| GET (hit) | 15,098 | 18,351 | 21,985 |
| GET (miss) | 21,992 | 19,903 | 22,376 |
| DELETE | 22,986 | 21,236 | 22,090 |
Interpretation
- All three systems are closely matched — at c=1, all systems fall within ±10% of each other for SET, DELETE, and GET miss. The Docker networking layer dominates single-client latency.
- GET hit is RedCouch's most expensive operation — ~18% slower than Redis native at c=1 due to the Lua hex-decode overhead. This is the cost of binary-safe value storage.
- RedCouch is a viable bridge — The protocol translation layer does not introduce order-of-magnitude penalties. Migration from memcached protocol to native Redis removes the bridge overhead entirely.
Note: The absolute numbers in the Docker comparison (~8k ops/s at c=1) are lower than the local baselines (~30k ops/s) because Docker networking adds latency. The relative comparisons between systems are what matter.
Stress/Soak Validation
A 7-phase stress suite validated RedCouch's behavior under sustained load:
| Finding | Value | What it means |
|---|---|---|
| Performance sweet spot | 4 clients (~61k ops/s SET) | Optimal concurrency for throughput |
| Post-saturation ceiling | ~35k ops/s at c≥16 | ThreadSafeContext lock serialization caps throughput |
| Soak stability | 175,036 ops / 5s, 0 errors | Stable under sustained mixed workload |
| Memory growth (soak) | 742 KB over 175k ops | No memory leaks detected |
| Connection churn | 169 conn/s, 0 failures | Thread-per-connection model handles rapid connect/disconnect |
Running Benchmarks
Prerequisites
- Redis 8+ with RedCouch module loaded (for single-system benchmarks)
- Python 3.10+ with asyncio support
- Docker (for cross-system comparison only)
Commands
# Single-system benchmark (requires Redis 8+ with module loaded)
cd benchmarks && bash run_benchmarks.sh
# Stress/soak validation
cd benchmarks && bash run_stress_soak.sh
# Cross-system three-way comparison (requires Docker)
bash benchmarks/run_cross_system.sh
Benchmark artifacts
Results are stored as JSON files in benchmarks/results/. The latest results are symlinked:
| Symlink | Points to |
|---|---|
| latest.json | Most recent single-system benchmark |
| stress_latest.json | Most recent stress/soak result |
Reproducing the cross-system comparison
The cross-system benchmark uses Docker Compose to run all three systems:
# Start all containers (Couchbase, Redis OSS, Redis + RedCouch)
cd benchmarks && docker compose up --build --wait
# Run the benchmark
python3 bench_cross_system.py
# Tear down
docker compose down
See benchmarks/docker-compose.yml for container configuration and benchmarks/Dockerfile.redcouch for the RedCouch container build.
Release Process
This chapter covers how RedCouch releases are built, published, and verified. It serves as the maintainer runbook for cutting releases.
Distribution Channels
| Channel | Status | Output |
|---|---|---|
| GitHub Releases | Primary | Pre-built .tar.gz archives with SHA-256 checksums for 4 targets |
| crates.io | Secondary, policy-gated | Source crate (requires explicit opt-in) |
GitHub Releases is the primary distribution channel. Users download pre-built module binaries. crates.io publication is optional and not required.
Release Artifacts
Each release produces artifacts for all supported platforms:
| Target | Artifact | OS | Architecture |
|---|---|---|---|
| x86_64-unknown-linux-gnu | libred_couch.so | Linux | x86_64 |
| aarch64-unknown-linux-gnu | libred_couch.so | Linux | ARM64 |
| x86_64-apple-darwin | libred_couch.dylib | macOS | x86_64 |
| aarch64-apple-darwin | libred_couch.dylib | macOS | ARM64 |
Each artifact is packaged as a .tar.gz archive with a matching .tar.gz.sha256 checksum file.
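Downloaded archives should be verified against their checksum files before loading the module. A hedged sketch, assuming the .sha256 file's first whitespace-separated token is the hex digest (the common shasum-style layout):

```python
import hashlib
from pathlib import Path

def verify_sha256(archive: str, checksum_file: str) -> bool:
    """Compare an archive's SHA-256 digest with the recorded one."""
    expected = Path(checksum_file).read_text().split()[0].lower()
    actual = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
    return actual == expected

# verify_sha256("libred_couch.tar.gz", "libred_couch.tar.gz.sha256")
```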
Cutting a Release (Maintainer Runbook)
Pre-release checklist
- Latest main commit passes CI (check, test, clippy, fmt, doc)
- Version in Cargo.toml is updated to the new version
- cargo check succeeds (updates Cargo.lock)
- CHANGELOG or release notes are prepared (GitHub auto-generates from PR titles)
Steps
1. Verify CI is green on the latest main commit:

       # Check CI status on GitHub, or run locally:
       cargo check --all-targets
       cargo test
       cargo clippy --all-targets -- -D warnings
       cargo fmt --check
       RUSTDOCFLAGS="-D warnings" cargo doc --no-deps

2. Set the version in Cargo.toml and commit:

       # Edit Cargo.toml: version = "0.2.0"
       cargo check   # Updates Cargo.lock
       git add Cargo.toml Cargo.lock
       git commit -m "chore: bump version to 0.2.0"
       git push origin main

3. Create and push the tag (must match the Cargo.toml version):

       git tag v0.2.0
       git push origin v0.2.0

4. The release workflow runs automatically (.github/workflows/release.yml):

   | Job | What it does | Failure behavior |
   |---|---|---|
   | validate-tag | Confirms tag version matches Cargo.toml | Fails fast — prevents mismatched releases |
   | test | Runs cargo test, clippy, fmt | Blocks build and publish |
   | build | Cross-compiles for all 4 targets | Blocks publish |
   | publish-github | Creates GitHub Release with artifacts | — |
   | publish-crate | Publishes to crates.io (if enabled) | Non-blocking — GitHub Release still succeeds |

5. Verify the release at https://github.com/fcenedes/RedCouch/releases:
   - All 4 target archives present
   - SHA-256 checksum files present
   - Release notes auto-generated
What the workflow does NOT do
- It does not deploy the module to any running Redis instance.
- It does not publish to crates.io unless PUBLISH_CRATE is explicitly enabled (see below).
- It does not run integration tests or benchmarks — only unit tests, clippy, and fmt.
crates.io Publication (Optional)
crates.io publication is policy-gated and disabled by default. To enable:
1. Set repository variable PUBLISH_CRATE to true in GitHub Settings → Variables.
2. Add a CARGO_REGISTRY_TOKEN secret with a valid crates.io API token.
3. The publish-crate job runs automatically after a successful GitHub Release.
This gate exists because crates.io publication is irreversible — once a version is published, it cannot be unpublished (only yanked).
CI Pipeline
The CI workflow (.github/workflows/ci.yml) runs on every push to main and on pull requests:
| Check | Platforms | Command |
|---|---|---|
| Build check | Ubuntu + macOS | cargo check --all-targets |
| Tests | Ubuntu + macOS | cargo test |
| Linting | Ubuntu + macOS | cargo clippy --all-targets -- -D warnings |
| Formatting | Ubuntu + macOS | cargo fmt --check |
| Documentation | Ubuntu | RUSTDOCFLAGS="-D warnings" cargo doc --no-deps |
All checks must pass before a PR can be merged.
Verification Commands
# Build check
cargo check --all-targets
# Unit/protocol tests (221 tests — no Redis required)
cargo test
# Integration tests (requires Redis 8+ with module loaded)
cd tests/integration && bash run_e2e.sh
# Benchmark suite
cd benchmarks && bash run_benchmarks.sh
# Stress/soak validation
cd benchmarks && bash run_stress_soak.sh
# Cross-system comparison (requires Docker)
bash benchmarks/run_cross_system.sh
Contributing
Thank you for your interest in contributing to RedCouch!
Prerequisites
- Rust 1.85+ (stable toolchain)
- Redis 8.x (for integration testing; verified on 8.4.0)
- Python 3.10+ (for E2E and benchmark scripts)
- Git
Getting Started
git clone https://github.com/fcenedes/RedCouch.git
cd RedCouch
cargo build --release
cargo test
cargo fmt --check
cargo clippy --all-targets -- -D warnings
Development Workflow
1. Build and Test Locally
cargo check --all-targets # Quick check
cargo test # All 221 tests
cargo clippy --all-targets -- -D warnings
cargo fmt --check
2. Integration Testing (requires Redis 8+)
redis-server --loadmodule ./target/release/libred_couch.dylib # macOS
redis-server --loadmodule ./target/release/libred_couch.so # Linux
# In a separate terminal:
cd tests/integration && bash run_e2e.sh
3. Benchmarks
cd benchmarks && bash run_benchmarks.sh
cd benchmarks && bash run_stress_soak.sh
Code Structure
| File | Responsibility |
|---|---|
| src/lib.rs | Module entry, TCP listener, connection handling, Redis dispatch |
| src/protocol.rs | Binary protocol parser/encoder, types, constants |
| src/ascii.rs | ASCII text protocol parser, meta prefix routing |
| src/meta.rs | Meta protocol parser, flag validation |
See the Architecture chapter for detailed architecture documentation.
Coding Standards
- No unsafe code: The crate uses #![forbid(unsafe_code)].
- Formatting: Run cargo fmt before committing. CI enforces cargo fmt --check.
- Linting: Run cargo clippy --all-targets -- -D warnings. CI enforces zero warnings.
- Documentation: Run cargo doc --no-deps with RUSTDOCFLAGS="-D warnings". CI enforces clean doc builds.
- Tests: Add tests for new protocol commands or behavior changes. Tests should run without a live Redis instance (use #[cfg(not(test))] guards for Redis-dependent code).
Pull Request Process
1. Fork and branch: Create a feature branch from main.
2. Make changes: Keep changes focused and minimal.
3. Test locally: Run cargo test, cargo clippy, and cargo fmt --check.
4. Write tests: Add or update tests to cover your changes.
5. Submit PR: Target the main branch. Describe what changed and why.
6. CI checks: Your PR must pass all CI checks on both Ubuntu and macOS.
What to Contribute
- Bug fixes and protocol conformance improvements
- Test coverage expansion
- Performance improvements (with benchmark evidence)
- Documentation improvements
Please open an issue first for larger changes or new features.
License
By contributing, you agree that your contributions will be licensed under the MIT License.
Test Architecture
RedCouch has a layered test strategy: fast unit tests that run without Redis, integration tests that exercise the full protocol stack against a live Redis instance, and benchmark/stress suites for performance validation.
Test Summary
| Category | Count | Location | Requires Redis? |
|---|---|---|---|
| Binary protocol unit tests | 76 | src/protocol.rs | No |
| ASCII protocol unit tests | 97 | src/ascii.rs | No |
| Meta protocol unit tests | 48 | src/meta.rs | No |
| Integration/E2E tests | Suite | tests/integration/test_binary_protocol.py | Yes (Redis 8+) |
| Benchmark workloads | 10+ profiles | benchmarks/bench_binary_protocol.py | Yes (Redis 8+) |
| Stress/soak workloads | 7 phases | benchmarks/stress_soak_validation.py | Yes (Redis 8+) |
Total unit test count: 221 tests (76 + 97 + 48) — all run via cargo test without Redis.
Host-Process Testing (No Redis Required)
The key design decision in RedCouch's test architecture is that protocol parsing and encoding tests run without Redis. This is achieved through #[cfg(not(test))] guards that exclude Redis allocator and module dependencies during cargo test. The test binary runs as a normal host process, not inside Redis.
# Run all 221 unit tests (no Redis needed)
cargo test
What the unit tests cover
| Area | Examples |
|---|---|
| Parser round-trips | Encode a request → parse it → verify fields match |
| Opcode coverage | Every supported opcode has at least one test |
| Quiet/base mapping | Quiet opcodes map to correct base opcodes |
| Frame building | Response frames have correct header fields, body length, CAS |
| Malformed handling | Truncated headers, oversized bodies, invalid opcodes |
| Binary-safe payloads | Non-UTF-8 bytes preserved through encode/decode |
| CAS preservation | CAS tokens round-trip through request/response |
| Key validation | Empty keys, oversized keys, boundary lengths |
| Meta flag validation | Unsupported flags rejected, proxy hints accepted |
| Property-based sweeps | Exhaustive opcode and size combinations |
Writing new unit tests
New protocol tests follow a consistent pattern:
- Construct a request using the protocol builder functions
- Parse it using the protocol parser
- Assert the parsed fields match expectations
Tests live in #[cfg(test)] mod tests blocks within each protocol module. Example pattern from src/protocol.rs:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_set_request() {
        // Build a SET request with known key, value, flags, expiry
        let request = build_request(Opcode::Set, key, extras, value);
        // Parse it back
        let parsed = parse_request(&request).unwrap();
        // Verify all fields
        assert_eq!(parsed.opcode, Opcode::Set);
        assert_eq!(parsed.key, key);
        // ...
    }
}
Integration Testing (Requires Redis 8+)
Integration tests exercise the full stack: TCP connection → protocol parsing → Redis execution → response encoding.
Setup
# Build the module
cargo build --release
# Start Redis with the module loaded
redis-server --loadmodule ./target/release/libred_couch.dylib # macOS
redis-server --loadmodule ./target/release/libred_couch.so # Linux
# Verify the module is loaded and listening
redis-cli MODULE LIST
nc -z 127.0.0.1 11210 && echo "RedCouch listening"
Running E2E tests
# In a separate terminal (Redis must be running with module)
cd tests/integration && bash run_e2e.sh
The E2E test suite (tests/integration/test_binary_protocol.py) uses raw socket connections to send binary protocol frames and verify responses. It covers all 34 binary opcodes with correct and error cases.
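The frames those tests send follow the standard 24-byte memcached binary request header. A sketch of building a GET request the same way (the field layout and the 0x80 request magic come from the memcached binary protocol; the helper itself is ours, not from the test suite):

```python
import struct

MAGIC_REQUEST = 0x80
OPCODE_GET = 0x00

def build_get_request(key: bytes, opaque: int = 0) -> bytes:
    """Build a binary-protocol GET request frame (24-byte header + key)."""
    header = struct.pack(
        ">BBHBBHIIQ",
        MAGIC_REQUEST,  # magic: request
        OPCODE_GET,     # opcode
        len(key),       # key length
        0,              # extras length (GET has no extras)
        0,              # data type
        0,              # vbucket id
        len(key),       # total body length (extras + key + value)
        opaque,         # opaque, echoed back in the response
        0,              # CAS
    )
    return header + key

# With Redis + RedCouch running, this frame can be written to a raw socket
# connected to 127.0.0.1:11210, then the 24-byte response header read back.
```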
Benchmark and Stress Testing
Benchmarks and stress tests are separate from the correctness test suite. They require Redis 8+ with the module loaded and produce JSON result files.
# Performance benchmark (10 workload profiles, 2 concurrency levels)
cd benchmarks && bash run_benchmarks.sh
# Stress/soak validation (7 phases — concurrency scaling, soak, connection churn)
cd benchmarks && bash run_stress_soak.sh
# Three-way cross-system comparison (requires Docker)
bash benchmarks/run_cross_system.sh
Results are stored in benchmarks/results/ as timestamped JSON files. See Benchmarks & Performance for interpretation and measured data.
CI Integration
The CI pipeline runs unit tests automatically on every push and PR:
# What CI runs (Ubuntu + macOS):
cargo check --all-targets
cargo test
cargo clippy --all-targets -- -D warnings
cargo fmt --check
RUSTDOCFLAGS="-D warnings" cargo doc --no-deps
Integration tests and benchmarks are not run in CI — they require a live Redis instance with the module loaded. Run them locally before submitting performance-sensitive changes.
API Reference
The full Rust API reference is auto-generated from source-code doc comments by rustdoc and published alongside this book. On the published documentation site, navigate directly to the links below.
Browse Online
Key Entry Points
| Module | Description | Link |
|---|---|---|
| red_couch | Crate root — module registration, TCP listener, connection handling | red_couch |
| red_couch::protocol | Binary protocol types, opcode enum, request parser, response encoder | protocol |
| red_couch::ascii | ASCII text protocol parser and command dispatch | ascii |
| red_couch::meta | Meta protocol parser, flag validation, command types | meta |
Commonly Referenced Items
- Opcode — all supported binary-protocol opcodes
- Request — parsed binary request
- Header — parsed request header
- ParseResult — parse outcome (Ok / Incomplete / error)
- try_parse_request — primary request parser
- write_response — binary response encoder
- ResponseMeta — response metadata fields
Generating Locally
# Build and open in your browser
RUSTDOCFLAGS="-D warnings" cargo doc --no-deps --open
The generated docs cover all public types, functions, and modules in the red_couch crate. They are rebuilt from source on every change, so they always reflect the current code.
How It Works
The GitHub Actions Pages workflow builds both the mdBook site and the rustdoc output in a single job:
1. mdbook build produces the narrative documentation in book/output/.
2. cargo doc --no-deps produces the API reference in target/doc/.
3. The workflow copies target/doc/ into book/output/api/, so the final Pages artifact has the structure:

       book/output/
       ├── index.html        ← mdBook site root
       ├── api/
       │   └── red_couch/    ← rustdoc API reference
       │       ├── index.html
       │       ├── protocol/
       │       ├── ascii/
       │       └── meta/
       └── ...               ← other book chapters
The links on this page point into the api/ subtree, so they work on the published site.
CI Validation
The CI pipeline validates that both the API documentation and the book build cleanly on every push to main and on pull requests:
RUSTDOCFLAGS="-D warnings" cargo doc --no-deps
mdbook build
This ensures documentation stays in sync with the code and catches broken links or build errors before merging.