RedCouch

A Redis module that bridges memcached protocol clients to Redis 8+

RedCouch provides a memcached-compatible TCP endpoint backed by Redis data structures. It allows existing memcached clients — including Couchbase SDK clients that speak the memcached binary protocol — to connect to a Redis 8+ server and use it as a drop-in data store, without any application-level code changes.

Who Is This For?

RedCouch is designed for teams in one of these situations:

  • Migrating from Couchbase to Redis — Your applications use Couchbase SDKs or memcached clients. RedCouch lets you point them at a Redis 8+ server and keep running while you plan and execute a gradual migration to native Redis clients.
  • Consolidating data infrastructure — You want to reduce operational overhead by replacing a standalone memcached or Couchbase deployment with Redis, which you may already be running for other workloads.
  • Evaluating Redis as a memcached replacement — You want to test Redis with your existing memcached workload before committing to a full client migration.

RedCouch is not a general-purpose memcached server. It is a protocol-translation bridge: it speaks memcached on the wire but stores data in Redis. The intended end state is migrating your clients to speak Redis natively, at which point RedCouch is no longer needed.

What RedCouch Does

RedCouch runs inside Redis as a loaded module. On startup, it opens a TCP listener (default 127.0.0.1:11210) that accepts memcached protocol connections. Each incoming request is parsed, translated into Redis operations, and the response is sent back in the memcached protocol format the client expects.

┌─────────────────────┐     TCP :11210     ┌──────────────────────────────────┐
│  Memcached Client   │ ◄───────────────► │  RedCouch Module (in Redis)      │
│  (binary or ASCII)  │                    │  parse → Redis ops → respond     │
└─────────────────────┘                    └──────────────────────────────────┘

The translation is transparent: clients see standard memcached responses, and Redis sees standard hash operations. Data stored through RedCouch is visible via redis-cli under the rc: key prefix.

The Migration Path

RedCouch supports a three-phase migration:

  1. Bridge phase — Deploy RedCouch on your Redis 8+ server. Point your memcached/Couchbase clients at port 11210. Your data now lives in Redis, accessible through both the memcached protocol (via RedCouch) and native Redis commands.

  2. Dual-access phase — Begin migrating application code from memcached clients to native Redis clients. Both access paths work simultaneously against the same data. You can migrate one service at a time.

  3. Native phase — Once all clients speak Redis natively, unload the RedCouch module. The memcached protocol endpoint shuts down; Redis continues serving your data directly with no translation overhead.

This migration path is validated by benchmark comparisons showing that RedCouch imposes modest overhead (~18% on GET hits) compared to native Redis, and performs competitively with Couchbase under identical conditions. See Benchmarks & Performance for measured data.

Supported Protocols

RedCouch automatically detects the protocol from the first byte of each connection:

Protocol | Detection | Coverage
Binary (Couchbase memcached) | First byte 0x80 | All 34 opcodes — GET, SET, DELETE, INCR/DECR, APPEND, TOUCH, FLUSH, SASL, STAT, and more
ASCII text | Printable ASCII | All 19 standard commands — set, get, delete, incr, cas, flush_all, etc.
Meta | mg/ms/md/ma/mn/me prefixes | Flag-based meta get/set/delete/arithmetic/noop/debug

Protocol is detected once per connection and fixed for its lifetime. A single RedCouch listener serves all three protocols concurrently.

Key Design Points

  • Hash-per-item storage — Each memcached item is stored as a Redis hash with fields for value (v), flags (f), and CAS (c). This makes items inspectable via standard Redis tools.
  • Namespaced keys — Client keys are prefixed with rc: to avoid collisions with other Redis data. System keys live under redcouch:sys:*.
  • Atomic mutations — All CAS-sensitive operations use server-side Lua scripts, ensuring atomicity without client-side check-then-set races.
  • Binary-safe values — Full binary round-trip via Lua hex encode/decode. Non-UTF-8 payloads are preserved exactly.
  • Safe defaults — Loopback-only bind, 1024 connection limit, read/write timeouts, and a 20 MiB frame cap protect against accidental exposure and resource exhaustion.

Platform Support

Target | OS | Architecture | Artifact
x86_64-unknown-linux-gnu | Linux | x86_64 | libred_couch.so
aarch64-unknown-linux-gnu | Linux | ARM64 | libred_couch.so
x86_64-apple-darwin | macOS | x86_64 | libred_couch.dylib
aarch64-apple-darwin | macOS | ARM64 | libred_couch.dylib

Windows is not supported. Redis modules require a Unix-like environment.

How This Book Is Organized

  • Getting Started — Install RedCouch, load it into Redis, and run your first commands.
  • User Guide — Walkthrough examples for each protocol: ASCII, meta, and binary.
  • Tutorials & Examples — Step-by-step tutorials, multi-language client examples, migration guide, and real-world use cases.
  • Reference — Complete protocol compatibility tables, architecture details, configuration reference, and known limitations.
  • Operations — Performance benchmarks, release process, and operational guidance.
  • Development — How to contribute, test architecture, and coding standards.
  • API Reference — Rust API documentation generated from source.

License

MIT — see LICENSE for details.

Source Code

RedCouch is open source: https://github.com/fcenedes/RedCouch

Installation

Prerequisites

  • Redis 8.x (Open Source). Verified on Redis 8.4.0.
  • Rust 1.85+ (stable) — only needed if building from source.

Option 1: Install from GitHub Release

Pre-built artifacts are attached to GitHub Releases as .tar.gz archives with SHA-256 checksums.

# Download the release for your platform (example: Linux x86_64, version v0.1.0)
curl -LO https://github.com/fcenedes/RedCouch/releases/download/v0.1.0/redcouch-v0.1.0-x86_64-unknown-linux-gnu.tar.gz
curl -LO https://github.com/fcenedes/RedCouch/releases/download/v0.1.0/redcouch-v0.1.0-x86_64-unknown-linux-gnu.tar.gz.sha256

# Verify checksum
sha256sum -c redcouch-v0.1.0-x86_64-unknown-linux-gnu.tar.gz.sha256

# Extract
tar xzf redcouch-v0.1.0-x86_64-unknown-linux-gnu.tar.gz

This extracts libred_couch.so (Linux) or libred_couch.dylib (macOS).

Option 2: Build from Source

git clone https://github.com/fcenedes/RedCouch.git
cd RedCouch
cargo build --release

The compiled module is at:

  • macOS: target/release/libred_couch.dylib
  • Linux: target/release/libred_couch.so

Loading the Module

Command Line

redis-server --loadmodule /path/to/libred_couch.so      # Linux
redis-server --loadmodule /path/to/libred_couch.dylib    # macOS

redis.conf

Add to your Redis configuration file:

loadmodule /path/to/libred_couch.so

Runtime (MODULE LOAD)

redis-cli MODULE LOAD /absolute/path/to/libred_couch.so

Verify

After loading, check that the module is active:

redis-cli MODULE LIST
# Should show "redcouch" in the list

# Verify the memcached endpoint is listening
nc -z 127.0.0.1 11210 && echo "RedCouch listening" || echo "Not listening"

Unloading the Module

redis-cli MODULE UNLOAD redcouch

Note: RedCouch does not implement a module unload/deinit handler. The background TCP listener thread has no graceful shutdown path. Unloading via MODULE UNLOAD is unverified and may leave the listener thread orphaned. The recommended approach is to restart the Redis process to fully stop the module.

Troubleshooting

Symptom | Cause | Fix
FATAL: cannot bind 127.0.0.1:11210 | Port already in use | Stop the other process using port 11210
Module loads but port 11210 not reachable | Bind failed silently | Check Redis logs for the bind error message
MODULE LOAD returns error | Wrong platform artifact | Use .so for Linux, .dylib for macOS
Connection refused after 1024 clients | Connection limit reached | Reduce concurrent connections or wait for existing ones to close

Quick Start

This page walks you through building RedCouch, loading it into Redis, and running your first commands — in about five minutes.

Prerequisites

  • Redis 8.x installed and available as redis-server (verified on 8.4.0)
  • Rust 1.85+ toolchain (for building from source)

If you don't have Redis 8, see Installation for download options.

1. Build the Module

git clone https://github.com/fcenedes/RedCouch.git
cd RedCouch
cargo build --release

This produces the module library:

  • macOS: target/release/libred_couch.dylib
  • Linux: target/release/libred_couch.so

2. Start Redis with RedCouch

# macOS
redis-server --loadmodule ./target/release/libred_couch.dylib

# Linux
redis-server --loadmodule ./target/release/libred_couch.so

You should see Redis start with a log line indicating the module loaded. RedCouch opens a TCP listener on 127.0.0.1:11210.

Verify it's running:

# Check the module is loaded
redis-cli MODULE LIST
# Should show "redcouch"

# Check the memcached endpoint is listening
nc -z 127.0.0.1 11210 && echo "RedCouch is ready"

3. Your First Commands

Connect using telnet (ASCII protocol):

telnet 127.0.0.1 11210

Store and retrieve a value

set greeting 0 0 12
Hello World!
STORED

get greeting
VALUE greeting 0 12
Hello World!
END

The set command syntax is: set <key> <flags> <exptime> <bytes>, followed by the value data on the next line. Here we're setting key greeting with flags=0, no expiry (0), and 12 bytes of data.
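
The same exchange can be driven programmatically over a raw socket. A minimal Python sketch of building the two-line set request (the helper name ascii_set is ours for illustration, not a RedCouch API):

```python
import socket

def ascii_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    # Header line: set <key> <flags> <exptime> <bytes>, then the data block.
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

# Against a running RedCouch listener (uncomment to try):
# s = socket.create_connection(("127.0.0.1", 11210))
# s.sendall(ascii_set("greeting", b"Hello World!"))
# print(s.recv(64))  # expect b"STORED\r\n"
```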

Use CAS for safe updates

gets greeting
VALUE greeting 0 12 1
Hello World!
END

cas greeting 0 0 8 1
Hey you!
STORED

The gets command returns the CAS token (the 1 after the byte count). The cas command uses this token to ensure no one else modified the value between the read and write.
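
The gets-then-cas read-modify-write cycle can be sketched in Python; parse_gets_header and ascii_cas are illustrative helpers of ours, not RedCouch APIs:

```python
def parse_gets_header(line: bytes):
    # b"VALUE greeting 0 12 1" -> ("greeting", flags=0, nbytes=12, cas=1)
    _, key, flags, nbytes, cas = line.split()
    return key.decode(), int(flags), int(nbytes), int(cas)

def ascii_cas(key: str, value: bytes, cas: int, flags: int = 0, exptime: int = 0) -> bytes:
    # cas <key> <flags> <exptime> <bytes> <cas_unique>, then the data block.
    head = f"cas {key} {flags} {exptime} {len(value)} {cas}\r\n".encode()
    return head + value + b"\r\n"
```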

Counters

set visits 0 0 1
0
STORED

incr visits 1
1

incr visits 1
2

Delete and verify

delete greeting
DELETED

get greeting
END

Check version and stats

version
VERSION RedCouch 0.1.0

stats
STAT pid 12345
STAT uptime 42
STAT version RedCouch 0.1.0
...
END

quit

4. Inspect Data via Redis

While RedCouch is running, open another terminal and use redis-cli to see the data:

# See all RedCouch keys
redis-cli KEYS 'rc:*'

# Inspect a specific item
redis-cli HGETALL rc:visits
# Returns: v (hex-encoded value), f (flags), c (CAS token)

This dual-access is one of RedCouch's key features: data is simultaneously accessible through the memcached protocol and native Redis commands.
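
Because the v field is hex-encoded, decoding it back to the original bytes is a one-liner. A sketch, where the stored dict mimics what HGETALL might return for a key set to "Hello World!" (the exact CAS value depends on prior writes):

```python
# Hypothetical HGETALL result for a key stored as "Hello World!" via RedCouch.
stored = {"v": "48656c6c6f20576f726c6421", "f": "0", "c": "1"}

value = bytes.fromhex(stored["v"])        # decode the hex-encoded payload
flags, cas = int(stored["f"]), int(stored["c"])
```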

5. Run the Tests

# Unit and protocol tests (221 tests — no Redis required)
cargo test

# Integration tests (requires Redis 8+ with module loaded)
cd tests/integration && bash run_e2e.sh

Next Steps

ASCII Protocol

The ASCII text protocol is the simplest way to interact with RedCouch. It uses human-readable commands over a plain TCP connection, making it easy to test and debug with standard tools like telnet or nc.

RedCouch supports all 19 standard memcached ASCII commands. For the complete compatibility table with syntax details, see Protocol Compatibility Reference.

Connecting

telnet 127.0.0.1 11210
# or
nc 127.0.0.1 11210

RedCouch auto-detects ASCII protocol when the first byte is a printable ASCII character (not 0x80, which routes to binary protocol).

Basic Key-Value Operations

# Store a value
set mykey 0 0 5
hello
STORED

# Retrieve it
get mykey
VALUE mykey 0 5
hello
END

# Store with flags and TTL (60 seconds)
set session:abc 42 60 11
session_data
STORED

# Retrieve with CAS token
gets mykey
VALUE mykey 0 5 1
hello
END

# Compare-and-swap (CAS) update
cas mykey 0 0 5 1
world
STORED

# Delete
delete mykey
DELETED

Counters

# Create a counter via set (store numeric string)
set counter 0 0 1
0
STORED

# Increment
incr counter 1
1

# Increment by 10
incr counter 10
11

# Decrement
decr counter 3
8

Note: In ASCII protocol, incr/decr return NOT_FOUND for missing keys. Use set to initialize counters first.

Append and Prepend

set log 0 0 6
line-1
STORED

append log 7
,line-2
STORED

get log
VALUE log 0 13
line-1,line-2
END

prepend log 8
header: 
STORED

Touch and Get-and-Touch

# Update TTL without fetching value
touch mykey 120
TOUCHED

# Get value and update TTL simultaneously
gat 300 mykey
VALUE mykey 0 5
world
END

Stats and Version

version
VERSION RedCouch 0.1.0

stats
STAT pid 12345
STAT uptime 42
STAT version RedCouch 0.1.0
STAT cmd_get 5
STAT cmd_set 3
STAT curr_items 2
...
END

Flush

# Flush all RedCouch items (only rc:* keys, not entire Redis DB)
flush_all
OK

Noreply Mode

Most commands accept a noreply suffix that suppresses the server response. This is useful for fire-and-forget writes:

set background-job 0 60 4 noreply
data

No STORED response is sent. If the command is malformed, a CLIENT_ERROR may still be emitted because noreply cannot always be reliably parsed before the error is detected.

Next Steps

Quick Reference

Command | Syntax
set | set <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n
add | add <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n
replace | replace <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n
cas | cas <key> <flags> <exptime> <bytes> <cas_unique> [noreply]\r\n<data>\r\n
append | append <key> <bytes> [noreply]\r\n<data>\r\n
prepend | prepend <key> <bytes> [noreply]\r\n<data>\r\n
get | get <key> [<key> ...]
gets | gets <key> [<key> ...]
gat | gat <exptime> <key> [<key> ...]
gats | gats <exptime> <key> [<key> ...]
delete | delete <key> [noreply]
incr | incr <key> <value> [noreply]
decr | decr <key> <value> [noreply]
touch | touch <key> <exptime> [noreply]
flush_all | flush_all [delay] [noreply]
version | version
stats | stats [group]
verbosity | verbosity <level> [noreply]
quit | quit

Meta Protocol

The meta protocol is an extension of the ASCII text protocol that provides more control over individual operations through a flag-based system. It uses two-letter command prefixes (mg, ms, md, ma, mn, me) instead of full command words, and flags to select exactly which response fields you want.

Meta commands are routed through the ASCII text-protocol path — they are detected by prefix after ASCII protocol detection. You use the same TCP connection and can mix standard ASCII and meta commands.

For the complete compatibility table, see Protocol Compatibility Reference.

Connecting

telnet 127.0.0.1 11210

Meta Set and Get

# Set a value (ms = meta set, 5 = data length)
ms mykey 5
hello
HD

# Get with value and CAS (mg = meta get, v = value, c = CAS)
mg mykey v c
VA 5 c1
hello

# Get with key echo, flags, size
mg mykey k f s
HD kmykey f0 s5
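
These framings are plain text and easy to build by hand. A hedged Python sketch (meta_set and meta_get are illustrative helper names of ours):

```python
def meta_set(key: str, value: bytes, flags: str = "") -> bytes:
    # ms <key> <datalen> [flags], then the data block.
    head = f"ms {key} {len(value)}" + (f" {flags}" if flags else "")
    return head.encode() + b"\r\n" + value + b"\r\n"

def meta_get(key: str, flags: str = "v c") -> bytes:
    # mg <key> [flags]
    return f"mg {key} {flags}\r\n".encode()
```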

Meta Set Modes

The M flag controls the set mode:

# Add (only if not exists): ME
ms newkey 3 ME
foo
HD

# Replace (only if exists): MR
ms mykey 3 MR
bar
HD

# Append: MA
ms mykey 4 MA
_end
HD

# Prepend: MP
ms mykey 6 MP
start_
HD

Mode | Meaning
S | Set (default)
E | Add (set if not exists)
A | Append
P | Prepend
R | Replace

Meta Delete

md mykey
HD

Meta Arithmetic

# Create counter with initial value (J = initial, N = TTL for auto-create)
ma counter J0 N0
HD

# Increment by 5 (D = delta, v = return value)
ma counter D5 v
VA 1
5

# Decrement (MD = decrement mode)
ma counter MD D2 v
VA 1
3

Meta Noop (Pipeline Terminator)

mn
MN

Opaque Token (Request Correlation)

The O flag echoes an opaque token in the response:

mg mykey v Oreq-42
VA 5 Oreq-42
hello

mn Oping
MN Oping

Supported Flags by Command

Command | Supported Flags
mg (meta get) | v (value), c (CAS), f (flags), k (key), s (size), O (opaque), q (quiet), t (TTL remaining), T (TTL update)
ms (meta set) | F (flags), T (TTL), C (CAS), q (quiet), O (opaque), k (key), M (mode)
md (meta delete) | C (CAS), q (quiet), O (opaque), k (key)
ma (meta arithmetic) | D (delta), J (initial), N (auto-create TTL), q (quiet), O (opaque), k (key), v (value), c (CAS), M (mode: I/D)
mn (meta noop) | O (opaque)
me (meta debug) | O, k, q (stub: always returns EN)

Unsupported Meta Features

  • Stale items (N/vivify on mg, I/invalidate on md)
  • Recache (R flag on mg)
  • Win/lose/stale flags (W, X, Z)
  • Base64 keys (b flag)
  • me debug data (stub only — always returns EN)

Any unsupported flag is rejected with CLIENT_ERROR unsupported meta flag '<flag>'.

Proxy hint flags P and L are silently accepted and ignored on all meta commands.

Next Steps

Binary Protocol

The binary protocol is the machine-oriented protocol used by Couchbase SDKs and some memcached client libraries. It uses a fixed-size 24-byte header followed by variable-length extras, key, and value fields.

Binary protocol clients connect to the same port (11210) as ASCII clients. RedCouch auto-detects binary protocol when the first byte is 0x80 (the binary request magic byte).

For the complete opcode table and compatibility details, see Protocol Compatibility Reference.

Overview

The binary protocol is based on the Couchbase memcached binary protocol. All 34 opcodes (0x00–0x22, excluding 0x1F) are parsed and dispatched. This includes quiet variants (which suppress success responses for pipelining) and key-returning variants.

Example: Raw Socket Binary Protocol (Python)

RedCouch's verified binary protocol test suite (tests/integration/test_binary_protocol.py) uses raw socket framing to exercise all 34 binary opcodes. Below is a simplified example:

import socket, struct

MAGIC_REQ, MAGIC_RES, HDR = 0x80, 0x81, 24
OP_SET, OP_GET = 0x01, 0x00

def build_req(opcode, extras=b"", key=b"", value=b"", cas=0):
    bl = len(extras) + len(key) + len(value)
    hdr = struct.pack(">BBHBBHIIQ", MAGIC_REQ, opcode, len(key),
                      len(extras), 0, 0, bl, 0, cas)
    return hdr + extras + key + value

def read_resp(sock):
    def recv_exact(n):  # recv() may return fewer bytes than asked; loop until complete
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-frame")
            buf += chunk
        return buf
    magic, op, kl, el, dt, st, bl, opq, cas = struct.unpack(">BBHBBHIIQ", recv_exact(HDR))
    return st, cas, recv_exact(bl)[el + kl:]  # strip extras + key, keep value

sock = socket.create_connection(("127.0.0.1", 11210), timeout=3)

# SET key1 = b"hello" with flags=0, expiry=0
extras = struct.pack(">II", 0, 0)  # flags (4 bytes) + expiry (4 bytes)
sock.sendall(build_req(OP_SET, extras=extras, key=b"key1", value=b"hello"))
status, cas, _ = read_resp(sock)
assert status == 0  # success

# GET key1
sock.sendall(build_req(OP_GET, key=b"key1"))
status, cas, value = read_resp(sock)
assert status == 0 and value == b"hello"

sock.close()

Supported Binary Operations

Opcode Family | Opcodes | Notes
GET | GET, GETQ, GETK, GETKQ | Quiet variants suppress success. Key variants echo key.
SET/ADD/REPLACE | SET, SETQ, ADD, ADDQ, REPLACE, REPLACEQ | CAS-checked. Flags, expiry, binary-safe values preserved.
DELETE | DELETE, DELETEQ | CAS-checked.
INCREMENT/DECREMENT | INCR, INCRQ, DECR, DECRQ | Unsigned 64-bit with initial-value and miss rules.
APPEND/PREPEND | APPEND, APPENDQ, PREPEND, PREPENDQ | Requires existing item.
TOUCH | TOUCH | Updates TTL on existing items.
GAT/GATQ | GAT, GATQ | Get-and-touch with TTL update.
FLUSH | FLUSH, FLUSHQ | Namespace-isolated: only rc:* keys.
NOOP | NOOP | Pipeline terminator.
QUIT | QUIT, QUITQ | Graceful close.
VERSION | VERSION | Returns RedCouch 0.1.0.
STAT | STAT | General stats.
VERBOSITY | VERBOSITY | Accepted, no effect.
SASL AUTH | SASL_LIST_MECHS, SASL_AUTH, SASL_STEP | Stub: auth always succeeds.

SASL Authentication

Binary protocol clients (especially Couchbase SDKs) often require SASL authentication before sending data commands. RedCouch supports the SASL handshake but with stub-only authentication — all credentials are accepted:

  • SASL_LIST_MECHS → Returns PLAIN
  • SASL_AUTH → Always succeeds regardless of username/password
  • SASL_STEP → Always succeeds

This allows SASL-requiring clients to connect without code changes. See Known Limitations for security implications.
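
For reference, a PLAIN SASL_AUTH frame can be built with the same 24-byte framing shown in the Python example above. This is a sketch based on the standard memcached binary layout (key = mechanism name, value = PLAIN initial response); sasl_auth_plain is our illustrative helper, not a RedCouch API:

```python
import struct

OP_SASL_AUTH = 0x21

def sasl_auth_plain(user: str, password: str) -> bytes:
    # Key carries the mechanism name; value is the PLAIN blob \0<user>\0<password>.
    key = b"PLAIN"
    value = b"\x00" + user.encode() + b"\x00" + password.encode()
    header = struct.pack(">BBHBBHIIQ", 0x80, OP_SASL_AUTH, len(key),
                         0, 0, 0, len(key) + len(value), 0, 0)
    return header + key + value
```

Since RedCouch's SASL handling is stub-only, any username/password sent this way is accepted.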

Quiet Commands and Pipelining

Quiet variants (e.g., GETQ, SETQ, DELETEQ) suppress success responses, enabling efficient pipelining. Send a batch of quiet operations followed by a NOOP — the NOOP response signals that all preceding quiet operations have been processed.

RedCouch batches all responses from a read cycle into a single write_all() call for efficiency.
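
The quiet-plus-NOOP pattern can be sketched with the same 24-byte framing used in the raw-socket example above (frame is an illustrative helper of ours):

```python
import struct

MAGIC_REQ, OP_SETQ, OP_NOOP = 0x80, 0x11, 0x0A

def frame(opcode: int, extras: bytes = b"", key: bytes = b"", value: bytes = b"") -> bytes:
    body = extras + key + value
    return struct.pack(">BBHBBHIIQ", MAGIC_REQ, opcode, len(key),
                       len(extras), 0, 0, len(body), 0, 0) + body

extras = struct.pack(">II", 0, 0)   # flags, expiry
batch = b"".join(frame(OP_SETQ, extras, f"k{i}".encode(), b"v") for i in range(3))
batch += frame(OP_NOOP)             # terminator: its response signals the batch completed
# sock.sendall(batch)  # then read a single NOOP response; SETQ successes are suppressed
```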

Next Steps

Protocol Compatibility Reference

This is the definitive reference for RedCouch's protocol support. For tutorial-style examples, see the User Guide.

RedCouch implements three memcached protocol surfaces over a single TCP listener (port 11210). Protocol detection is automatic: the first byte of each connection determines the protocol.

First Byte | Protocol
0x80 | Binary (Couchbase memcached)
Printable ASCII | Text (ASCII or meta commands)
\r or \n | Skipped; next byte determines protocol

Binary Protocol

Based on the Couchbase memcached binary protocol. All 34 opcodes (0x00–0x22, excluding 0x1F) are parsed and dispatched.

Supported Operations

Opcode Family | Opcodes | Status | Notes
GET | GET (0x00), GETQ (0x09), GETK (0x0C), GETKQ (0x0D) | ✅ Supported | Returns value, flags, CAS. Quiet variants suppress success responses. Key-inclusive variants echo key.
SET/ADD/REPLACE | SET (0x01), SETQ (0x11), ADD (0x02), ADDQ (0x12), REPLACE (0x03), REPLACEQ (0x13) | ✅ Supported | CAS-checked mutations. Flags, expiry, and binary-safe values preserved.
DELETE | DELETE (0x04), DELETEQ (0x14) | ✅ Supported | CAS-checked. Returns item CAS on success.
INCREMENT/DECREMENT | INCR (0x05), INCRQ (0x15), DECR (0x06), DECRQ (0x16) | ✅ Supported | Unsigned 64-bit semantics with initial-value and miss rules. See Limitations.
APPEND/PREPEND | APPEND (0x0E), APPENDQ (0x19), PREPEND (0x0F), PREPENDQ (0x1A) | ✅ Supported | Requires existing item.
TOUCH | TOUCH (0x1C) | ✅ Supported | Updates TTL on existing items.
GAT/GATQ | GAT (0x1D), GATQ (0x1E) | ✅ Supported | Get-and-touch with TTL update. Key included in response.
FLUSH | FLUSH (0x08), FLUSHQ (0x18) | ✅ Supported | Namespace-isolated: flushes only RedCouch keys (rc:*), never FLUSHDB.
NOOP | NOOP (0x0A) | ✅ Supported | Pipeline terminator.
QUIT | QUIT (0x07), QUITQ (0x17) | ✅ Supported | Graceful connection close.
VERSION | VERSION (0x0B) | ✅ Supported | Returns RedCouch 0.1.0.
STAT | STAT (0x10) | ✅ Supported | General stats (pid, uptime, version, cmd_get, cmd_set, curr_items).
VERBOSITY | VERBOSITY (0x1B) | ✅ Supported | Accepted and acknowledged; no runtime effect.
SASL AUTH | SASL_LIST_MECHS (0x20), SASL_AUTH (0x21), SASL_STEP (0x22) | ⚠️ Stub | Lists "PLAIN". Auth always succeeds — no credential enforcement.
Unknown | Any unrecognized opcode | ✅ Handled | Returns Unknown command (status 0x0081).

Unsupported Binary Behaviors

Feature | Status | Reason
STAT groups (settings, items, slabs, conns) | ❌ Not supported | Returns empty terminator for sub-groups
Dynamic SASL credential enforcement | ❌ Not implemented | Stub-only: auth always succeeds
UDP transport | ❌ Not supported | TCP only
Couchbase bucket/vbucket management | ❌ Not supported | Outside bridge scope

ASCII Text Protocol

Based on the memcached ASCII text protocol. All 19 standard commands are implemented.

Supported Commands

Command | Syntax | Status
set | set <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n | ✅ Supported
add | add <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n | ✅ Supported
replace | replace <key> <flags> <exptime> <bytes> [noreply]\r\n<data>\r\n | ✅ Supported
cas | cas <key> <flags> <exptime> <bytes> <cas_unique> [noreply]\r\n<data>\r\n | ✅ Supported
append | append <key> <bytes> [noreply]\r\n<data>\r\n | ✅ Supported
prepend | prepend <key> <bytes> [noreply]\r\n<data>\r\n | ✅ Supported
get | get <key> [<key> ...] | ✅ Supported
gets | gets <key> [<key> ...] | ✅ Supported
gat | gat <exptime> <key> [<key> ...] | ✅ Supported
gats | gats <exptime> <key> [<key> ...] | ✅ Supported
delete | delete <key> [noreply] | ✅ Supported
incr | incr <key> <value> [noreply] | ✅ Supported
decr | decr <key> <value> [noreply] | ✅ Supported
touch | touch <key> <exptime> [noreply] | ✅ Supported
flush_all | flush_all [delay] [noreply] | ✅ Supported
version | version | ✅ Supported
stats | stats [group] | ✅ Supported
verbosity | verbosity <level> [noreply] | ✅ Supported
quit | quit | ✅ Supported

Unsupported ASCII Behaviors

Feature | Status | Reason
Authentication | ❌ Not supported | No SASL/auth in ASCII text mode (per memcached spec)
flush_all delay | ⚠️ Accepted, not honored | Delay parameter parsed but flush is immediate
noreply on malformed input | ⚠️ Partial | CLIENT_ERROR may still be emitted if noreply cannot be parsed before the error

Meta Protocol

Meta commands use two-letter prefixes and a flag-based system, routed through the ASCII text-protocol path.

Supported Meta Commands

Command | Syntax | Status | Supported Flags
mg (meta get) | mg <key> [flags] | ✅ Supported | v, c, f, k, s, O, q, t, T
ms (meta set) | ms <key> <datalen> [flags]\r\n<data>\r\n | ✅ Supported | F, T, C, q, O, k, M (mode: S/E/A/P/R)
md (meta delete) | md <key> [flags] | ✅ Supported | C, q, O, k
ma (meta arithmetic) | ma <key> [flags] | ✅ Supported | D, J, N, q, O, k, v, c, M (mode: I/D)
mn (meta noop) | mn [flags] | ✅ Supported | O
me (meta debug) | me <key> [flags] | ⚠️ Stub | Returns EN. Flags O, k, q accepted.

Unsupported Meta Behaviors

Feature | Status | Reason
Stale items (vivify/invalidate) | ❌ Not supported | Requires stale item concept not in item model
Recache (R flag on mg) | ❌ Not supported | Requires stale item concept
Win/lose/stale flags (W, X, Z) | ❌ Not supported | Requires stale item concept
Base64 keys (b flag) | ❌ Not supported | Not implemented
me debug data | ❌ Stub | Always returns EN (not found)

Item Model

All three protocols share the same underlying item model stored in Redis:

Property | Implementation
Storage shape | Hash-per-item: HSET <redis_key> v <value> f <flags> c <cas>
Key namespace | Client key foo → Redis key rc:foo
Reserved keys | System keys under redcouch:sys:* (e.g., redcouch:sys:cas_counter)
CAS | Monotonic counter via INCR redcouch:sys:cas_counter
Binary-safe values | Full binary round-trip via Lua hex encode/decode
Flags | 32-bit unsigned, stored as decimal string
Expiry | 0 = no expiry, ≤2592000 = relative seconds, >2592000 = absolute Unix timestamp
Atomic mutations | All CAS-sensitive operations use server-side Lua scripts
Flush scope | FLUSH operates only on rc:* keys, never FLUSHDB

Architecture

RedCouch is a Redis module (cdylib) that exposes a memcached-compatible TCP endpoint backed by Redis data structures. It runs inside the Redis process as a loaded module, sharing the same address space and data access as Redis itself.

This chapter explains the architectural decisions behind RedCouch, how requests flow through the system, and why the design makes the trade-offs it does.

High-Level Data Flow

┌─────────────────────┐     TCP :11210     ┌──────────────────────────────────┐
│  Memcached Client   │ ◄───────────────► │  RedCouch Module (in Redis)      │
│  (binary or ASCII)  │                    │                                  │
└─────────────────────┘                    │  ┌───────────────────────────┐   │
                                           │  │ TCP Listener Thread       │   │
                                           │  │  → accept()              │   │
                                           │  │  → spawn handler thread  │   │
                                           │  └───────────────────────────┘   │
                                           │                                  │
                                           │  ┌───────────────────────────┐   │
                                           │  │ Connection Handler Thread │   │
                                           │  │  → protocol detection    │   │
                                           │  │  → parse request         │   │
                                           │  │  → execute via Redis API │   │
                                           │  │  → encode response       │   │
                                           │  └───────────────────────────┘   │
                                           │              │                   │
                                           │              ▼                   │
                                           │  ┌───────────────────────────┐   │
                                           │  │ Redis Data (hashes)      │   │
                                           │  │  rc:<key> → {v, f, c}   │   │
                                           │  │  redcouch:sys:*          │   │
                                           │  └───────────────────────────┘   │
                                           └──────────────────────────────────┘

Request Lifecycle

A typical SET request follows this path:

  1. Accept — The listener thread accepts the TCP connection and spawns a handler thread.
  2. Detect — The handler reads the first byte: 0x80 routes to binary, printable ASCII routes to text/meta.
  3. Parse — The protocol-specific parser decodes the request into an internal representation (opcode, key, value, flags, extras).
  4. Namespace — The client key is prefixed with rc: to form the Redis key.
  5. Execute — A Lua script runs atomically on the Redis side: it increments the CAS counter, hex-encodes the value, and stores the hash fields (v, f, c). If the request has a CAS token, the script checks it before mutating.
  6. Respond — The handler builds a protocol-specific response (with the new CAS token) and writes it to the socket.
  7. Batch — For binary protocol, multiple responses from a single read cycle are buffered and flushed in one write_all() call.

Module Structure

File | Purpose
src/lib.rs | Module entry point, TCP listener, connection handler, Redis command dispatch, Lua scripts, stats tracking
src/protocol.rs | Binary protocol types: opcode enum, request parser, response encoder, frame builder
src/ascii.rs | ASCII text protocol parser and command types (19 commands), meta prefix routing
src/meta.rs | Meta protocol parser, flag validation, command types (mg/ms/md/ma/mn/me)

The crate compiles as a cdylib — a C-compatible dynamic library that Redis loads at runtime. The #[redis_module] macro registers the module with Redis and triggers redcouch_init(), which spawns the TCP listener.

Threading Model

RedCouch uses a thread-per-connection model:

  • Main thread — The Redis server thread. Module init registers the module and spawns the listener. RedCouch does not block or interfere with the main Redis event loop.
  • Listener thread — A single background thread that calls accept() on 127.0.0.1:11210 in a loop. Each accepted connection is handed off to a new thread.
  • Connection threads — One OS thread per accepted connection (up to MAX_CONNECTIONS = 1024). Each thread owns its socket and processes requests sequentially — there is no async I/O or event multiplexing within a connection.

Why thread-per-connection?

The thread-per-connection model was chosen for simplicity and correctness:

  • Simple ownership — Each thread owns its socket, read buffer, and write buffer. No shared mutable state between connections.
  • Sequential request processing — Memcached protocol requests on a single connection are processed in order, which matches the protocol's expectation.
  • Bounded resource usage — The 1,024 connection limit caps thread count. Connections beyond this limit are immediately dropped with no response.

The trade-off is that each connection consumes an OS thread's stack (~8 MiB default on Linux). At the 1,024 connection limit, this is ~8 GiB of virtual memory (though actual resident memory is much lower). For RedCouch's intended use case as a migration bridge, this is acceptable.

Redis access serialization

Each connection thread acquires a ThreadSafeContext lock to execute Redis commands. This lock serializes access to the Redis data structures across all connection threads — only one thread can execute a Redis command at a time. This is the primary concurrency bottleneck: benchmark data shows throughput plateaus around 4 concurrent clients (~60k ops/s) and reaches a ceiling of ~35k ops/s at 16+ clients for contended workloads.

Protocol Detection

On each new connection, the first byte determines the protocol:

  1. 0x80 → Binary protocol path (handle_binary_conn)
  2. Printable ASCII → Text protocol path (handle_ascii_conn), which internally routes meta commands (mg/ms/md/ma/mn/me prefixes) to the meta handler
  3. \r/\n → Skipped; next byte re-evaluated

Protocol is fixed for the lifetime of the connection. A single connection cannot switch between binary and ASCII mode.

Storage Model

Each memcached item is stored as a Redis hash with three fields:

Field | Content | Example
v | Item value (hex-encoded for binary safety) | 48656c6c6f ("Hello")
f | Flags (32-bit unsigned, decimal string) | 0
c | CAS token (monotonic counter value) | 42

Key mapping: Client key foo → Redis key rc:foo. This prefix-based namespace prevents collisions with other Redis data. You can inspect RedCouch items directly:

redis-cli HGETALL rc:foo
# Returns: v, <hex-encoded value>, f, <flags>, c, <cas>

System keys: The monotonic CAS counter lives at redcouch:sys:cas_counter. Flush operations scan only rc:* keys, leaving system keys and all non-RedCouch data untouched.

Why hashes instead of strings?

A memcached item has three properties: value, flags, and CAS token. Using a Redis hash stores all three atomically under one key. The alternative — separate keys for each property — would require multi-key transactions and complicate expiry handling. The hash approach also makes items self-describing and inspectable via standard Redis tools.

Why hex encoding?

The redis-module crate's RedisString type requires valid UTF-8 for string operations. Memcached values are arbitrary bytes — a JPEG, a Protocol Buffer, or a compressed payload may contain any byte sequence. Hex encoding guarantees the value stored in Redis is valid ASCII, avoiding panics on non-UTF-8 data. The cost is 2× storage for values and CPU time for encode/decode, but it ensures correctness for all payloads.
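The scheme itself is plain hexadecimal; in Python terms (illustrative, mirroring the described storage format):

```python
# Round-trip an arbitrary binary payload the way RedCouch stores values.
payload = bytes([0xFF, 0x00, 0x89, 0x50])   # arbitrary bytes, not valid UTF-8
stored = payload.hex()                       # the text kept in the hash's v field
assert stored == "ff008950"
assert bytes.fromhex(stored) == payload      # lossless decode on the way out
assert len(stored) == 2 * len(payload)       # the 2x storage cost
```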

Atomicity and Lua Scripts

All CAS-sensitive and read-modify-write operations use server-side Lua scripts executed via redis.call(). This includes:

  • Store with CAS check — SET/ADD/REPLACE with a CAS token verify the current CAS before mutating
  • Counter operations — INCREMENT/DECREMENT read the current value, compute the new value, and store it atomically
  • Append/Prepend — Read the existing value, concatenate, and store back in one script
  • Delete with CAS — Verify CAS before removing the key

Each Lua script executes atomically on the Redis side — no other command can interleave. This eliminates the class of check-then-set race conditions that would arise from multi-step operations using separate Redis commands.

The CAS counter itself is a simple INCR redcouch:sys:cas_counter within each Lua script. Every mutation generates a new, globally unique CAS value.
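The check-then-set pattern can be modeled in pure Python (a sketch of the logic only; in RedCouch the equivalent runs as a single Lua script inside Redis, which is what makes it atomic — all names below are hypothetical):

```python
def cas_store(db: dict, counter: dict, key: str, value_hex: str, expected_cas: int) -> bool:
    """Model of a CAS-checked store. In RedCouch this is one Lua script,
    so no other command can interleave between the check and the write."""
    item = db.get(key)
    current = item["c"] if item else 0
    if expected_cas != 0 and expected_cas != current:
        return False                        # CAS mismatch: reject the write
    counter["cas"] += 1                     # models INCR redcouch:sys:cas_counter
    db[key] = {"v": value_hex, "f": "0", "c": counter["cas"]}
    return True

db, counter = {}, {"cas": 0}
assert cas_store(db, counter, "rc:foo", "68656c6c6f", 0)          # unconditional store
token = db["rc:foo"]["c"]
assert cas_store(db, counter, "rc:foo", "776f726c64", token)      # matching CAS wins
assert not cas_store(db, counter, "rc:foo", "7374616c65", token)  # stale CAS rejected
```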

Response Batching

Binary protocol clients often send multiple requests before reading responses (pipelining). RedCouch collects all responses from a single read cycle into a write buffer and flushes them in a single write_all() call. This reduces syscall overhead from O(responses) to O(1) per batch, which is measurable at high throughput.

ASCII protocol responses are written individually since ASCII clients typically send one command at a time.

Dependencies

| Crate | Version | Purpose |
|-------|---------|---------|
| redis-module | 2.0.7 | Redis module API bindings — provides Context, ThreadSafeContext, module registration macros, and the RedisString type |
| bytes | 1 | Byte buffer management for protocol parsing — used for zero-copy request body handling |
| byteorder | 1 | Big-endian integer parsing for binary protocol header fields |
| thiserror | 2.0.12 | Derive macro for error type definitions |

The dependency set is intentionally minimal. No async runtime (tokio, async-std) is used — the thread-per-connection model with blocking I/O keeps the dependency tree small and the build fast.

Configuration

All runtime parameters in this release are compile-time constants defined in src/lib.rs. There are no dynamic configuration options, no config file, and no Redis module arguments. To change a parameter, modify the constant in the source code and rebuild.

This is a deliberate simplification for the initial release. Future versions may introduce MODULE LOAD arguments or Redis config directives.

Runtime Defaults

| Parameter | Value | Constant | Rationale |
|-----------|-------|----------|-----------|
| Bind address | 127.0.0.1:11210 | DEFAULT_BIND_ADDR | Loopback-only prevents accidental network exposure |
| Max connections | 1,024 | MAX_CONNECTIONS | Caps thread count; each connection uses one OS thread |
| Read timeout | 30 seconds | SOCKET_READ_TIMEOUT | Prevents idle connections from consuming threads indefinitely |
| Write timeout | 10 seconds | SOCKET_WRITE_TIMEOUT | Detects unresponsive clients |
| Max frame body | 20 MiB | MAX_BODY_LEN | Prevents memory exhaustion from oversized requests |
| Max key length | 250 bytes | MAX_KEY_LEN | Matches memcached specification limit |
| Max command line (ASCII) | 2,048 bytes | | Prevents unbounded line reads |
| Key prefix | rc: | KEY_PREFIX | Namespaces RedCouch data within Redis |
| CAS counter key | redcouch:sys:cas_counter | CAS_COUNTER_KEY | Global monotonic counter for CAS tokens |

Changing defaults

To change a parameter, edit the corresponding constant in src/lib.rs and rebuild:

# Example: change bind address to all interfaces
# Edit src/lib.rs: const DEFAULT_BIND_ADDR: &str = "0.0.0.0:11210";
cargo build --release

Warning: Binding to 0.0.0.0 exposes the memcached endpoint to the network. RedCouch has no authentication enforcement (SASL is stub-only). Use Redis ACLs, firewall rules, or a reverse proxy if you need network-accessible memcached protocol access.

Storage Keys

| Key Pattern | Purpose | Example |
|-------------|---------|---------|
| rc:<key> | User data items (hash with v, f, c fields) | rc:session:abc |
| redcouch:sys:cas_counter | Monotonic CAS counter | Value: 42 |
| redcouch:sys:* | Reserved system namespace | |

Inspecting data via redis-cli

RedCouch data is standard Redis data. You can inspect it directly:

# List all RedCouch keys
redis-cli KEYS 'rc:*'

# Inspect a specific item
redis-cli HGETALL rc:mykey
# Returns: v <hex-value> f <flags> c <cas>

# Check the CAS counter
redis-cli GET redcouch:sys:cas_counter

# Count RedCouch items
redis-cli EVAL "return #redis.call('KEYS', 'rc:*')" 0

Note: The value field (v) is hex-encoded. To see the actual value, decode it: a value of 48656c6c6f is the hex encoding of Hello.

Security Considerations

RedCouch's security model relies on network-level access control, not application-level authentication:

| Layer | Status | Recommendation |
|-------|--------|----------------|
| Bind address | Loopback only by default | Safe for single-host deployments. Change only if you have network-level controls. |
| SASL authentication | Stub — always succeeds | Do not rely on SASL for access control. Any client that can reach port 11210 can read and write data. |
| TLS/SSL | Not supported | Use a TLS-terminating proxy (e.g., stunnel) if you need encrypted transport. |
| Redis ACLs | Not applicable | RedCouch uses ThreadSafeContext to execute commands, bypassing Redis ACLs. |
| Connection limit | 1,024 max | Protects against resource exhaustion but is not rate limiting. |

Production deployment recommendations

  1. Keep the loopback bind unless your clients run on separate hosts.
  2. Use firewall rules (iptables, security groups) to restrict access to port 11210 if binding to 0.0.0.0.
  3. Monitor connection count — approaching 1,024 connections indicates you may need to scale out or optimize client connection pooling.
  4. Use Redis persistence (RDB/AOF) if RedCouch data needs to survive restarts. RedCouch data is standard Redis data and is included in Redis persistence snapshots.

Tuning Guidance

Connection limits

The 1,024 connection limit is per-module-instance. If your workload exceeds this, consider:

  • Client-side connection pooling — Most memcached client libraries support connection pools. A pool of 10-50 connections per application instance is typical.
  • Reducing idle connections — The 30-second read timeout automatically closes idle connections. Clients should reconnect transparently.

Timeout tuning

| Scenario | Recommendation |
|----------|----------------|
| Low-latency workloads | Default timeouts (30s/10s) are conservative. Most operations complete in <1ms. |
| Long-running bulk loads | Default timeouts are fine — they apply per-read, not per-connection. |
| Unreliable network | Consider increasing read timeout if clients have intermittent connectivity. |

Memory considerations

RedCouch's memory usage has two components:

  1. Redis data — Hash-per-item storage with hex-encoded values. Hex encoding doubles the storage cost of values compared to raw bytes. A 1 KB value consumes ~2 KB of Redis memory.
  2. Thread stacks — Each connection thread uses ~8 MiB of virtual memory (OS default). At 1,024 connections, this is ~8 GiB virtual but typically <100 MiB resident.

Use Redis's maxmemory setting to bound data storage. RedCouch respects Redis's eviction policies for rc:* keys.

Known Limitations

This chapter documents the known limitations, behavioral differences from standard memcached, and areas intentionally deferred from the current release. Each section explains the cause, the impact on your workload, and any available workarounds.

Counter Precision (post-2^53)

Impact: Counter values are exact only for the range [0, 2^53). Beyond 2^53 (9,007,199,254,740,992), the behavior is precision loss / rounding rather than reliable wraparound.

Cause: Redis Lua scripts use IEEE 754 double-precision floats for numeric operations. Double-precision floats can represent integers exactly up to 2^53, but above that threshold, consecutive integers are no longer representable. The memcached binary protocol specifies unsigned 64-bit counter semantics with wraparound at 2^64; RedCouch cannot match that behavior exactly above 2^53.

Who is affected: Only workloads that increment counters past ~9 quadrillion. Typical use cases (rate limiters, hit counters, sequence numbers) will never reach this threshold.

Workaround: If you need exact 64-bit counter semantics, migrate the counter to a native Redis INCR key (which uses native 64-bit integers) and access it via a Redis client instead of through the memcached protocol.
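The 2^53 threshold is easy to demonstrate with Python floats, which are the same IEEE 754 doubles that Lua uses for arithmetic:

```python
LIMIT = 2 ** 53   # 9,007,199,254,740,992

# Every integer below 2^53 has an exact double representation,
# so incrementing by 1 behaves normally.
assert float(LIMIT - 1) == LIMIT - 1
assert float(LIMIT - 1) + 1.0 == float(LIMIT)

# At 2^53 the gap between adjacent representable doubles becomes 2,
# so adding 1 is silently lost: the counter stops advancing reliably.
assert float(LIMIT) + 1.0 == float(LIMIT)
```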

Append/Prepend Value Growth

Impact: APPEND and PREPEND operations have cost proportional to the existing value size. Each operation reads the entire existing value (hex-encoded), concatenates the new data, and writes the result back.

Cause: The hex-encoding storage model means an append to a 100 KB value requires reading ~200 KB of hex data, concatenating, and writing ~200 KB + new data. This is inherent to the hash-per-item storage model.

Measured behavior: In stress testing, 10 keys reached ~61 KB each after ~950 appends of 64-byte chunks, with no errors or instability.

Workaround: For append-heavy workloads with large accumulated values, consider:

  • Periodic key rotation — Start writing to a new key periodically and merge when needed.
  • Size monitoring — Track value sizes and set alerts if they grow beyond expected bounds.
  • Native Redis migration — Use Redis's native APPEND command (which operates on raw bytes without hex encoding) by migrating the relevant keys to direct Redis access.

Performance Hot Paths

The following are identified performance costs that remain in the current release. They are documented here so operators and contributors can understand where time is spent:

  1. Lua hex encode/decode — Every GET and every binary-value mutation passes through Lua string.format('%02x') for encoding and manual hex decode in Rust for decoding. This is the largest single overhead compared to native Redis operations. Benchmark data shows ~18% throughput reduction on GET hits compared to native Redis GET. This is the correctness-first design: it avoids redis-module UTF-8 panics on arbitrary binary payloads.

  2. Per-request ThreadSafeContext lock — Each Redis command acquires the ThreadSafeContext lock, which serializes Redis access across all connection threads. This is the primary concurrency bottleneck. Benchmark data shows throughput plateaus at ~4 clients (~60k ops/s) and hits a ceiling of ~35k ops/s at 16+ clients for contended workloads. See Benchmarks & Performance for measured data.

  3. Per-request allocations — Vec allocations for key namespacing (rc: prefix), hex conversion buffers, and response assembly. These are small compared to the Lua and lock overhead but contribute to the total per-request cost.

Startup / Bind Caveat

Impact: The background TCP listener thread may log readiness before the bind has definitively succeeded.

Cause: The listener thread starts, logs its intent to bind, and then calls bind(). If another process holds port 11210, the bind fails and the listener thread exits — but Redis itself continues running normally without the memcached endpoint.

Detection:

# After loading the module, verify the port is reachable
nc -z 127.0.0.1 11210 && echo "OK" || echo "FAILED"

# Check whether the module reports itself (bind errors appear in the Redis server log)
redis-cli INFO ALL | grep -i redcouch

Workaround: Ensure no other process is using port 11210 before loading the module. If you need to change the port, modify DEFAULT_BIND_ADDR in src/lib.rs and rebuild.

SASL Authentication

Impact: SASL auth is stub-only. Any credentials are accepted. There is no access control on the memcached protocol endpoint.

Cause: The SASL stub exists solely to allow Couchbase SDK clients (which require a SASL handshake) to connect. Implementing real credential enforcement would require a credential store and a policy for credential management, which is outside the scope of a protocol bridge.

Who is affected: Anyone exposing port 11210 to untrusted networks.

Workaround: Rely on network-level access control (firewall rules, security groups, loopback-only bind) rather than application-level authentication. See Configuration — Security Considerations.

Malformed Traffic Behavior

RedCouch handles malformed requests with clean disconnects or error responses — not crashes or hangs:

| Scenario | Behavior | Connection |
|----------|----------|------------|
| Bad magic byte | Connection closed (EOF) | Closed |
| Truncated header | Read timeout (30s), then close | Closed |
| Body length mismatch | Read timeout, then close | Closed |
| Zero-key GET | Error response (status 0x0001) | Stays open |
| Garbage then valid | Connection closed (EOF) | Closed |
| Oversized key (>250 bytes) | Error response (status 0x0004) | Stays open |
| Oversized frame (>20 MiB body) | Error response | Closed |

This behavior was validated in stress testing with all six malformed scenarios producing the expected clean handling with zero crashes.

Maximum Sizes

| Limit | Value | Source |
|-------|-------|--------|
| Max key length | 250 bytes | Memcached spec limit |
| Max frame body | 20 MiB | MAX_BODY_LEN constant |
| Max command line (ASCII) | 2,048 bytes | Hardcoded parser limit |
| Max concurrent connections | 1,024 | MAX_CONNECTIONS constant |

Module Unload

RedCouch does not implement a module unload/deinit handler. The MODULE UNLOAD redcouch command is unverified and may leave the listener thread orphaned. The recommended approach to fully stop the module is to restart the Redis process.

Deferred Surfaces

The following features are explicitly not in the current release scope. They may be added in future versions:

| Feature | Reason for deferral |
|---------|---------------------|
| Meta protocol stale items (N/vivify, I/invalidate, R/recache, W/X/Z flags) | Requires a stale item concept not in the current item model |
| Base64 keys (b flag) | Not implemented; standard ASCII keys cover most use cases |
| UDP transport | TCP only; memcached UDP is rarely used in practice |
| Couchbase bucket/vbucket management | Outside the scope of a protocol bridge |
| Dynamic STAT groups (settings, items, slabs, conns) | Returns empty terminator; general stats are supported |
| Windows support | Redis modules require a Unix-like environment |
| Dynamic configuration | All parameters are compile-time constants |
| TLS/SSL | Use a TLS-terminating proxy for encrypted transport |

Tutorial: Python Client

This tutorial walks through using RedCouch from Python, starting with simple key-value operations and building up to CAS workflows, counters, and pipelining. All examples use the standard pymemcache library.

Prerequisites

  • Redis 8+ running with RedCouch loaded (see Installation)
  • Python 3.10+
  • Install pymemcache: pip install pymemcache

Note: RedCouch listens on port 11210, not the default memcached port (11211).

Step 1: Connect and Store a Value

from pymemcache.client.base import Client

# Connect to RedCouch (port 11210, not 11211)
client = Client(("127.0.0.1", 11210))

# Store a value
client.set("greeting", "Hello from Python!")

# Retrieve it
value = client.get("greeting")
print(value)  # b'Hello from Python!'

The get() method returns bytes by default. To decode to a string, call .decode():

value = client.get("greeting")
print(value.decode("utf-8"))  # 'Hello from Python!'

Step 2: Flags and Expiration with a Serde

Memcached flags are a 32-bit integer stored alongside the value. They're commonly used to indicate serialization format. In pymemcache, flags are managed by a serde (serializer/deserializer) object that you pass to the Client constructor.

Expiration is in seconds (up to 30 days) or a Unix timestamp (for longer durations).

import json

class JSONSerde:
    """Serialize non-string values as JSON, using flags to track the format."""
    def serialize(self, key, value):
        if isinstance(value, str):
            return value.encode("utf-8"), 0  # flag 0 = raw string
        return json.dumps(value).encode("utf-8"), 1  # flag 1 = JSON

    def deserialize(self, key, value, flags):
        if flags == 0:
            return value.decode("utf-8")
        if flags == 1:
            return json.loads(value)
        return value  # fallback: return raw bytes

# Create a client with the JSON serde
client = Client(("127.0.0.1", 11210), serde=JSONSerde())

# Store a Python dict — the serde serializes it as JSON with flag=1
data = {"user": "alice", "role": "admin"}
client.set("session:abc", data, expire=60)

# Retrieve — the serde deserializes based on the stored flag
result = client.get("session:abc")
print(result["user"])  # 'alice'

# Store a plain string — the serde uses flag=0
client.set("greeting", "hello")
print(client.get("greeting"))  # 'hello' (str, not bytes)

You can also pass flags directly to set() to override the serde's flag value, but this is only needed for advanced use cases.

Note: The remaining steps use a plain client without a serde (as created in Step 1), so get() returns raw bytes. If you're using the serde client from this step, the returned values will be deserialized strings/objects instead.

Step 3: Add and Replace (Conditional Stores)

# add() only succeeds if the key does NOT exist
client.add("new-key", "first-write")   # True — key created
client.add("new-key", "second-write")  # False — key already exists

# replace() only succeeds if the key DOES exist
client.replace("new-key", "updated")   # True
client.replace("missing", "value")     # False — key not found

Step 4: Compare-and-Swap (CAS)

CAS prevents lost updates when multiple clients write to the same key. The workflow is: read the current CAS token, then write only if the token hasn't changed.

# gets() returns (value, cas_token)
value, cas = client.gets("greeting")
print(f"Value: {value}, CAS: {cas}")

# cas() writes only if the CAS token matches
success = client.cas("greeting", "Updated value", cas)
print(f"CAS update succeeded: {success}")  # True

# A second CAS with the old token fails
success = client.cas("greeting", "Stale update", cas)
print(f"Stale CAS update: {success}")  # False — token changed

Step 5: Counters

# Initialize a counter
client.set("page-views", "0")

# Increment
new_val = client.incr("page-views", 1)
print(f"Views: {new_val}")  # 1

# Increment by 10
new_val = client.incr("page-views", 10)
print(f"Views: {new_val}")  # 11

# Decrement
new_val = client.decr("page-views", 3)
print(f"Views: {new_val}")  # 8

Note: ASCII protocol incr/decr return NOT_FOUND for missing keys. Always initialize counters with set first.

Step 6: Append and Prepend

client.set("log", "entry-1")
client.append("log", ",entry-2")
client.append("log", ",entry-3")

print(client.get("log"))  # b'entry-1,entry-2,entry-3'

client.prepend("log", "header:")
print(client.get("log"))  # b'header:entry-1,entry-2,entry-3'

Step 7: Multi-Get

# Store several keys
for i in range(5):
    client.set(f"item:{i}", f"value-{i}")

# Fetch multiple keys in one round trip
results = client.get_many([f"item:{i}" for i in range(5)])
for key, value in results.items():
    print(f"{key}: {value}")

Step 8: Touch and Get-and-Touch

# Extend TTL without fetching the value
client.touch("session:abc", 300)  # Reset to 5 minutes

# Get value AND reset TTL in one operation
value = client.gat("session:abc", 600)  # Get + set TTL to 10 minutes

Step 9: Verify Data in Redis

Because RedCouch stores data in Redis hashes under the rc: prefix, you can inspect the data directly while it's still live:

# From redis-cli (on the Redis port, not 11210)
redis-cli

# List RedCouch keys created by the steps above
KEYS rc:*
# rc:greeting, rc:session:abc, rc:log, rc:item:0, ...

# Inspect a specific item's internal structure
HGETALL rc:greeting
# 1) "v"    ← hex-encoded value
# 2) "48656c6c6f2066726f6d20507974686f6e21"
# 3) "f"    ← flags (32-bit integer)
# 4) "0"
# 5) "c"    ← CAS token
# 6) "2"

This dual-access capability is the foundation of RedCouch's migration story — see the Migration Guide.

Step 10: Delete and Flush

Once you've finished inspecting the data, clean up:

# Delete a single key
client.delete("greeting")

# Flush all RedCouch keys (only rc:* keys, not entire Redis DB)
client.flush_all()

Runnable Example

A complete, runnable version of this tutorial is available at examples/python/basic_operations.py.

Multi-Language Examples

RedCouch speaks standard memcached protocol, so any memcached client library works. This chapter shows working examples in several languages, all connecting to RedCouch on port 11210.

Prerequisite: Redis 8+ with RedCouch loaded and listening on 127.0.0.1:11210. See Installation.


Python (pymemcache)

The full Python walkthrough is in the Python Client Tutorial. Here's the quick version:

# pip install pymemcache
from pymemcache.client.base import Client

client = Client(("127.0.0.1", 11210))
client.set("py-key", "hello from python")
print(client.get("py-key"))  # b'hello from python'

# CAS workflow
value, cas = client.gets("py-key")
client.cas("py-key", "updated", cas)

# Counters
client.set("hits", "0")
client.incr("hits", 1)

client.close()

Node.js (memjs)

memjs is a popular Node.js memcached client that supports binary protocol with SASL authentication — ideal for RedCouch since it exercises the binary protocol path including SASL.

// npm install memjs

const memjs = require('memjs');

// memjs uses binary protocol with SASL auth by default.
// RedCouch's SASL is stub-only, so any username/password works.
const client = memjs.Client.create('127.0.0.1:11210', {
  username: 'any',
  password: 'any'
});

async function main() {
  // Store a value
  await client.set('node-key', 'hello from node.js', { expires: 60 });

  // Retrieve it
  const { value, flags } = await client.get('node-key');
  console.log(value.toString()); // 'hello from node.js'

  // Delete
  await client.delete('node-key');

  client.close();
}

main().catch(console.error);



Go (gomemcache)

gomemcache by Brad Fitzpatrick (original memcached author) is the standard Go memcached client. It uses the ASCII text protocol.

// go get github.com/bradfitz/gomemcache/memcache
package main

import (
    "fmt"
    "github.com/bradfitz/gomemcache/memcache"
)

func main() {
    // Connect to RedCouch
    mc := memcache.New("127.0.0.1:11210")

    // Store a value with 60-second TTL
    mc.Set(&memcache.Item{
        Key:        "go-key",
        Value:      []byte("hello from go"),
        Expiration: 60,
    })

    // Retrieve it
    item, err := mc.Get("go-key")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(item.Value)) // "hello from go"

    // CAS workflow
    item, _ = mc.Get("go-key")
    item.Value = []byte("updated from go")
    err = mc.CompareAndSwap(item)
    fmt.Printf("CAS update: %v\n", err == nil)

    // Counters
    mc.Set(&memcache.Item{Key: "go-counter", Value: []byte("0")})
    newVal, _ := mc.Increment("go-counter", 5)
    fmt.Printf("Counter: %d\n", newVal) // 5

    // Delete
    mc.Delete("go-key")
}

PHP (ext-memcached)

PHP's Memcached extension uses the binary protocol via libmemcached. It works with RedCouch out of the box:

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11210);

// Enable binary protocol (recommended for RedCouch)
$mc->setOption(Memcached::OPT_BINARY_PROTOCOL, true);

// Store and retrieve
$mc->set('php-key', 'hello from php', 60);
echo $mc->get('php-key') . "\n"; // 'hello from php'

// CAS workflow
$value = $mc->get('php-key', null, Memcached::GET_EXTENDED);
$mc->cas($value['cas'], 'php-key', 'updated from php');

// Counters
$mc->set('php-counter', 0);
$mc->increment('php-counter', 5);
echo $mc->get('php-counter') . "\n"; // 5

CLI Tools (telnet / netcat)

The fastest way to test RedCouch is with telnet or nc. These connect via ASCII protocol:

# Connect
telnet 127.0.0.1 11210

# Store a value (set <key> <flags> <exptime> <bytes>)
set cli-key 0 0 9
cli-value
STORED

# Retrieve
get cli-key
VALUE cli-key 0 9
cli-value
END

# Use meta protocol
ms meta-key 5
hello
HD

mg meta-key v c
VA 5 c1
hello

# Quit
quit

See the ASCII Protocol and Meta Protocol guides for comprehensive command references.


Choosing a Client Library

| Language | Library | Protocol | Auth | Notes |
|----------|---------|----------|------|-------|
| Python | pymemcache | ASCII | No | Simple, well-maintained, pure Python |
| Node.js | memjs | Binary | SASL | Exercises binary+SASL path |
| Go | gomemcache | ASCII | No | By the original memcached author |
| PHP | ext-memcached | Binary | SASL | Via libmemcached; widely deployed |
| CLI | telnet / nc | ASCII | N/A | Quick testing and debugging |
| Any | Raw sockets | Binary | Optional | Full control; see Binary Protocol |

Tip: All these libraries connect to RedCouch exactly as they would to a standard memcached server — just change the port to 11210. No RedCouch-specific client code is needed.

Tutorial: Migration from Memcached to Redis

RedCouch is designed as a bridge — not a permanent proxy. This tutorial walks through the three-phase migration from a memcached-based architecture to native Redis, with RedCouch providing the zero-downtime transition layer.

The Three Phases

Phase 1: Bridge          Phase 2: Dual-Access      Phase 3: Native
┌──────────┐             ┌──────────┐               ┌──────────┐
│  App     │─memcached──▶│  App     │─memcached──▶  │  App     │
│  (old)   │  protocol   │  (mixed) │  + redis      │  (new)   │─redis──▶ Redis
└──────────┘             └──────────┘               └──────────┘
      │                        │
      ▼                        ▼
  RedCouch ──▶ Redis      RedCouch ──▶ Redis

Phase 1: Bridge — Drop-in Replacement

Step 1: Deploy RedCouch

Load RedCouch into your Redis 8+ server:

redis-server --loadmodule /path/to/libred_couch.so

RedCouch opens a memcached-compatible endpoint on port 11210.

Step 2: Repoint Your Clients

Change your memcached client configuration to point at RedCouch instead of your memcached server. The only change needed is the host and port:

Python (before):

client = Client(("memcached-host", 11211))

Python (after):

client = Client(("redis-host", 11210))

Go (before):

mc := memcache.New("memcached-host:11211")

Go (after):

mc := memcache.New("redis-host:11210")

No other code changes needed. Your application continues using its existing memcached client library.

Step 3: Verify

# Check RedCouch is responding
echo "version" | nc redis-host 11210
# VERSION RedCouch 0.1.0

# Check data is flowing through to Redis
redis-cli -h redis-host KEYS 'rc:*'
# Shows your memcached keys with rc: prefix

What You Get in Phase 1

  • All memcached operations work transparently
  • Data is stored in Redis hashes under the rc: key prefix
  • You can inspect data via redis-cli alongside memcached access
  • Redis persistence (RDB/AOF) now protects your cache data
  • Redis replication can provide high availability

Phase 2: Dual-Access — Gradual Migration

In this phase, you migrate application code service-by-service from memcached clients to native Redis clients. Both access paths work simultaneously against the same data.

Understanding the Storage Model

RedCouch stores each memcached key as a Redis hash:

redis-cli HGETALL rc:session:abc
# 1) "v"    ← hex-encoded value
# 2) "68656c6c6f"
# 3) "f"    ← flags (32-bit integer)
# 4) "0"
# 5) "c"    ← CAS token
# 6) "42"

Reading Data from Redis

To read RedCouch data natively, decode the hex value from the hash:

import redis

r = redis.Redis(host='redis-host', port=6379)

# Read a value stored by a memcached client
hex_value = r.hget("rc:session:abc", "v")
if hex_value:
    value = bytes.fromhex(hex_value.decode())
    print(value)  # b'hello'

Migration Strategy: Service by Service

  1. Pick a service to migrate (start with read-heavy, non-critical services)
  2. Add a Redis client alongside the existing memcached client
  3. Read from Redis (via rc: hashes) while writes still go through memcached
  4. Switch writes to Redis once reads are verified
  5. Remove the memcached client from that service
  6. Repeat for the next service

Example: Migrating a Session Store

Before (memcached client):

from pymemcache.client.base import Client
mc = Client(("redis-host", 11210))

def get_session(session_id):
    return mc.get(f"session:{session_id}")

def set_session(session_id, data, ttl=3600):
    mc.set(f"session:{session_id}", data, expire=ttl)

After (native Redis client):

import redis
r = redis.Redis(host='redis-host', port=6379)

def get_session(session_id):
    return r.get(f"session:{session_id}")

def set_session(session_id, data, ttl=3600):
    r.setex(f"session:{session_id}", ttl, data)

Note: Once you switch to native Redis, you no longer need the rc: prefix or hex encoding — you're using Redis directly with full access to all Redis data structures.

Phase 3: Native — Remove RedCouch

Once all services have migrated to native Redis clients:

  1. Verify no memcached protocol traffic on port 11210
  2. Unload the module: restart Redis without the --loadmodule argument
  3. Clean up any remaining rc:* keys if desired:
redis-cli --scan --pattern 'rc:*' | xargs redis-cli DEL

What You Gain

  • Full access to all Redis data structures (lists, sets, sorted sets, streams, etc.)
  • Native Redis performance without translation overhead
  • Redis Cluster support
  • Redis pub/sub, Lua scripting, modules
  • No memcached protocol parsing overhead

Timeline Expectations

| Phase | Typical Duration | Risk Level |
|-------|------------------|------------|
| Bridge | Hours to days | Low — transparent swap |
| Dual-Access | Weeks to months | Medium — requires code changes |
| Native | Minutes | Low — config change + cleanup |

Use Cases

This chapter describes real-world scenarios where RedCouch solves a concrete problem. Each case explains the setup, why RedCouch fits, and links to relevant reference material.

Session Store Migration

Scenario: Your web application stores sessions in memcached. You want to move to Redis for persistence and replication, but can't change all services at once.

Solution: Deploy RedCouch on Redis 8+. Repoint your memcached clients to port 11210. Sessions are now stored in Redis hashes — surviving restarts and replicating to replicas — while your application code stays unchanged.

Key operations used: set (store session), get (retrieve session), touch (extend TTL), delete (logout)

Why RedCouch fits:

  • Zero application code changes during Phase 1
  • Redis persistence (RDB/AOF) eliminates cold-cache risk after restarts
  • Gradual migration to native Redis clients is possible per-service

See: Migration Guide for the step-by-step process.


Couchbase-to-Redis Migration

Scenario: You're migrating from Couchbase Server to Redis. Your applications use Couchbase SDKs that speak the memcached binary protocol for key-value operations.

Solution: RedCouch implements the Couchbase memcached binary protocol (all 34 opcodes, including SASL handshake). Point your Couchbase SDKs at RedCouch on port 11210. The SASL stub accepts any credentials, so no auth configuration changes are needed.

Key operations used: Binary GET/SET/DELETE with CAS, SASL auth handshake, quiet variants for pipelining

Why RedCouch fits:

  • Full binary protocol compatibility with Couchbase SDK wire format
  • SASL authentication stub lets SDKs connect without credential changes
  • CAS tokens are real and atomically enforced via Redis Lua scripts

Caveat: Couchbase-specific features (buckets, vbuckets, views, N1QL) are not supported — only key-value operations. See Known Limitations.


Rate Limiter with Dual Access

Scenario: You have a rate limiter using memcached counters (incr/decr). You want to add Redis-based analytics that reads the same counter data in real time.

Solution: Use RedCouch as the bridge. Your rate limiter writes through memcached protocol (incr/decr on port 11210). Your analytics service reads the same counters via native Redis commands on port 6379.

# Rate limiter (memcached client, unchanged)
from pymemcache.client.base import Client

mc = Client(("redis-host", 11210))

def check_rate(client_ip, limit=100):
    key = f"rate:{client_ip}"
    count = mc.incr(key, 1)  # pymemcache returns None if the key doesn't exist yet
    if count is None:
        # First request in this window; small race window if two clients
        # miss at once, which is acceptable for rate limiting.
        mc.set(key, "1", expire=60)
        count = 1
    return int(count) <= limit

# Analytics service (native Redis, new)
import redis

r = redis.Redis(host="redis-host", port=6379)

def get_rate_counts():
    counts = {}
    for key in r.scan_iter("rc:rate:*"):  # SCAN instead of KEYS to avoid blocking Redis
        hex_val = r.hget(key, "v")        # values are stored hex-encoded in the "v" field
        if hex_val:
            counts[key.decode()] = bytes.fromhex(hex_val.decode()).decode()
    return counts

Why RedCouch fits:

  • Counters work correctly through the memcached protocol
  • Same data is readable via native Redis for analytics
  • No changes needed to the rate limiter code

Cache Warm-Up Testing

Scenario: You want to validate that your application handles cache misses correctly after a restart, but your memcached setup doesn't support scripted warm-up.

Solution: Use RedCouch with Redis persistence. Load test data via redis-cli or a script, then verify your application reads it correctly through the memcached protocol.

# Pre-load test data directly into Redis
redis-cli HSET rc:config:feature-flags v "$(echo -n '{"dark_mode":true}' | xxd -p)" f 0 c 1

# Verify through memcached protocol
echo -e "get config:feature-flags\r" | nc 127.0.0.1 11210
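
RedCouch stores values hex-encoded in the hash's v field, which is why the preload above pipes through xxd -p. The same encoding in Python, should you script the warm-up (a sketch; the key layout follows the redis-cli example above):

```python
def to_rc_value(payload: bytes) -> str:
    """Hex-encode a payload the way `xxd -p` does for the `v` hash field."""
    return payload.hex()

def from_rc_value(hex_val: str) -> bytes:
    """Decode a `v` field read back via redis-cli HGET."""
    return bytes.fromhex(hex_val)

encoded = to_rc_value(b'{"dark_mode":true}')
```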

Why RedCouch fits:

  • Redis persistence means cache data survives restarts
  • Data can be loaded via Redis commands or memcached protocol
  • Useful for integration testing and staging environments

Protocol Debugging and Monitoring

Scenario: You need to debug what your memcached clients are actually sending and receiving, or monitor cache hit rates.

Solution: RedCouch exposes statistics through the memcached stats command. You can also inspect the underlying Redis data directly.

# Check stats via ASCII protocol
echo "stats" | nc 127.0.0.1 11210
# STAT cmd_get 1523
# STAT cmd_set 847
# STAT curr_items 312

# Inspect specific keys via redis-cli
redis-cli HGETALL rc:problematic-key

# Count total cached items
redis-cli --scan --pattern 'rc:*' | wc -l
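
The STAT lines are trivially machine-parseable. A stdlib sketch that turns the output into a dict for monitoring scripts (the counter names shown are from the example above; other stats fields are not assumed):

```python
def parse_stats(raw: str) -> dict:
    """Parse memcached `stats` output lines like `STAT cmd_get 1523`."""
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats

sample = "STAT cmd_get 1523\nSTAT cmd_set 847\nSTAT curr_items 312\nEND"
stats = parse_stats(sample)
```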

Why RedCouch fits:

  • Memcached stats are available through standard protocol
  • Redis gives you additional visibility (key inspection, memory usage, slow log)
  • Dual access makes debugging transparent

Summary: When to Use RedCouch

| Scenario | Fit | Key benefit |
|---|---|---|
| Migrating from memcached to Redis | ✅ Ideal | Zero-downtime, gradual migration |
| Migrating from Couchbase KV to Redis | ✅ Ideal | Binary protocol + SASL compatibility |
| Adding persistence to a memcached cache | ✅ Good | Redis RDB/AOF protects cache data |
| Dual-access (memcached + Redis) during migration | ✅ Good | Both protocols hit the same data |
| Long-term production memcached proxy | ⚠️ Acceptable | Works, but native Redis is faster |
| High-throughput, latency-critical cache | ❌ Not ideal | Translation overhead adds ~18% latency |

Next Steps

Continue to Benchmarks & Performance for measured throughput and latency data.

Benchmarks & Performance

This chapter presents RedCouch's measured performance characteristics, explains what the numbers mean for your workload, and documents how to reproduce the benchmarks.

How to Read These Numbers

All benchmarks measure end-to-end operation latency — the time from sending a memcached protocol request to receiving the complete response, including network round-trip, protocol parsing, Lua script execution, and response encoding.

  • ops/sec — Operations per second (higher is better)
  • p50 µs — Median latency in microseconds (50th percentile)
  • p95/p99 µs — Tail latency (95th/99th percentile)
  • Errors — Number of failed operations (should be 0)

The benchmark harness uses Python asyncio clients sending individual operations in a tight loop. Throughput numbers reflect single-system capacity, not network-bound scenarios.
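
The percentiles in the tables below are computed from raw per-operation latency samples. A simplified sketch of that calculation using the nearest-rank method (the harness's exact method may differ):

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative microsecond samples, not real benchmark data:
latencies_us = [29, 31, 35, 39, 41, 58, 77, 95, 120, 181]
p50 = percentile(latencies_us, 50)  # median
p99 = percentile(latencies_us, 99)  # tail
```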

Throughput Baselines (Single Client)

Source: benchmarks/results/bench_20260402_144029.json (Redis 8.4.0, macOS arm64, local — no Docker).

| Workload | ops/sec | p50 µs | p95 µs | p99 µs | Errors |
|---|---|---|---|---|---|
| SET 64B | 31,694 | 29 | 39 | 77 | 0 |
| SET 1KB | 29,899 | 31 | 41 | 58 | 0 |
| SET 64KB | 8,074 | 118 | 151 | 181 | 0 |
| GET (hit) | 26,814 | 35 | 45 | 57 | 0 |
| GET (miss) | 36,782 | 26 | 33 | 41 | 0 |
| DELETE | 40,038 | 24 | 31 | 40 | 0 |
| INCREMENT | 31,883 | 30 | 39 | 56 | 0 |
| Mixed R/W | 14,635 | 65 | 80 | 95 | 0 |
| APPEND | 19,428 | 51 | 73 | 86 | 0 |
| TOUCH | 33,856 | 28 | 36 | 50 | 0 |

What this tells you: A single client can sustain ~30k ops/s for SET and ~27k ops/s for GET at sub-100µs p99 latency. GET misses are faster than hits because misses skip the Lua hex-decode step. APPEND is the slowest mutation because it reads, concatenates, and re-encodes the existing value.

Concurrency Scaling (4 Clients)

| Workload | ops/sec | p50 µs | p95 µs |
|---|---|---|---|
| SET 64B | 62,574 | 60 | 100 |
| SET 1KB | 60,439 | 63 | 104 |
| GET (hit) | 50,929 | 76 | 120 |
| Mixed R/W | 30,138 | 129 | 184 |

What this tells you: Throughput roughly doubles from 1 to 4 clients, but latency also increases due to ThreadSafeContext lock contention. The optimal concurrency is ~4 clients; beyond that, the lock becomes the bottleneck and throughput plateaus (see Stress/Soak below).

Cross-System Comparison

To answer "how does RedCouch compare to Couchbase and native Redis?", all three systems were benchmarked under identical conditions using Docker containers with the same resource limits (256 MB maxmemory, no persistence).

Source: benchmarks/results/cross_system_20260403_092550.json (symmetric Docker topology).

Single Client (c=1)

| Operation | RedCouch ops/s | Redis OSS ops/s | Couchbase ops/s |
|---|---|---|---|
| SET 64B | 7,873 | 8,039 | 8,666 |
| GET (hit) | 6,776 | 8,294 | 8,360 |
| GET (miss) | 8,264 | 8,265 | 8,249 |
| DELETE | 8,421 | 8,402 | 8,471 |

Four Clients (c=4)

| Operation | RedCouch ops/s | Redis OSS ops/s | Couchbase ops/s |
|---|---|---|---|
| SET 64B | 20,710 | 19,236 | 19,757 |
| GET (hit) | 15,098 | 18,351 | 21,985 |
| GET (miss) | 21,992 | 19,903 | 22,376 |
| DELETE | 22,986 | 21,236 | 22,090 |

Interpretation

  • All three systems are closely matched — at c=1, all systems fall within ±10% of each other for SET, DELETE, and GET miss. The Docker networking layer dominates single-client latency.
  • GET hit is RedCouch's most expensive operation — ~18% slower than Redis native at c=1 due to the Lua hex-decode overhead. This is the cost of binary-safe value storage.
  • RedCouch is a viable bridge — The protocol translation layer does not introduce order-of-magnitude penalties. Migration from memcached protocol to native Redis removes the bridge overhead entirely.

Note: The absolute numbers in the Docker comparison (~8k ops/s at c=1) are lower than the local baselines (~30k ops/s) because Docker networking adds latency. The relative comparisons between systems are what matter.

Stress/Soak Validation

A 7-phase stress suite validated RedCouch's behavior under sustained load:

| Finding | Value | What it means |
|---|---|---|
| Performance sweet spot | 4 clients (~61k ops/s SET) | Optimal concurrency for throughput |
| Post-saturation ceiling | ~35k ops/s at c≥16 | ThreadSafeContext lock serialization caps throughput |
| Soak stability | 175,036 ops / 5s, 0 errors | Stable under sustained mixed workload |
| Memory growth (soak) | 742 KB over 175k ops | No memory leaks detected |
| Connection churn | 169 conn/s, 0 failures | Thread-per-connection model handles rapid connect/disconnect |

Running Benchmarks

Prerequisites

  • Redis 8+ with RedCouch module loaded (for single-system benchmarks)
  • Python 3.10+ with asyncio support
  • Docker (for cross-system comparison only)

Commands

# Single-system benchmark (requires Redis 8+ with module loaded)
cd benchmarks && bash run_benchmarks.sh

# Stress/soak validation
cd benchmarks && bash run_stress_soak.sh

# Cross-system three-way comparison (requires Docker)
bash benchmarks/run_cross_system.sh

Benchmark artifacts

Results are stored as JSON files in benchmarks/results/. The latest results are symlinked:

| Symlink | Points to |
|---|---|
| latest.json | Most recent single-system benchmark |
| stress_latest.json | Most recent stress/soak result |

Reproducing the cross-system comparison

The cross-system benchmark uses Docker Compose to run all three systems:

# Start all containers (Couchbase, Redis OSS, Redis + RedCouch)
cd benchmarks && docker compose up --build --wait

# Run the benchmark
python3 bench_cross_system.py

# Tear down
docker compose down

See benchmarks/docker-compose.yml for container configuration and benchmarks/Dockerfile.redcouch for the RedCouch container build.

Release Process

This chapter covers how RedCouch releases are built, published, and verified. It serves as the maintainer runbook for cutting releases.

Distribution Channels

| Channel | Status | Output |
|---|---|---|
| GitHub Releases | Primary | Pre-built .tar.gz archives with SHA-256 checksums for 4 targets |
| crates.io | Secondary, policy-gated | Source crate (requires explicit opt-in) |

GitHub Releases is the primary distribution channel. Users download pre-built module binaries. crates.io publication is optional and not required.

Release Artifacts

Each release produces artifacts for all supported platforms:

| Target | Artifact | OS | Architecture |
|---|---|---|---|
| x86_64-unknown-linux-gnu | libred_couch.so | Linux | x86_64 |
| aarch64-unknown-linux-gnu | libred_couch.so | Linux | ARM64 |
| x86_64-apple-darwin | libred_couch.dylib | macOS | x86_64 |
| aarch64-apple-darwin | libred_couch.dylib | macOS | ARM64 |

Each artifact is packaged as a .tar.gz archive with a matching .tar.gz.sha256 checksum file.
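
Downloaded archives can be verified against their checksum files before loading the module. A Python sketch of the check (filenames here are illustrative; `shasum -a 256 -c` does the same from the shell):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Compute the SHA-256 digest as the hex string a .sha256 file records."""
    return hashlib.sha256(data).hexdigest()

def verify(archive_bytes: bytes, checksum_line: str) -> bool:
    """Compare an archive against a `<digest>  <filename>` checksum line."""
    expected = checksum_line.split()[0]
    return sha256_hex(archive_bytes) == expected
```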

Cutting a Release (Maintainer Runbook)

Pre-release checklist

  • Latest main commit passes CI (check, test, clippy, fmt, doc)
  • Version in Cargo.toml is updated to the new version
  • cargo check succeeds (updates Cargo.lock)
  • CHANGELOG or release notes are prepared (GitHub auto-generates from PR titles)

Steps

  1. Verify CI is green on the latest main commit:

    # Check CI status on GitHub, or run locally:
    cargo check --all-targets
    cargo test
    cargo clippy --all-targets -- -D warnings
    cargo fmt --check
    RUSTDOCFLAGS="-D warnings" cargo doc --no-deps
    
  2. Set the version in Cargo.toml and commit:

    # Edit Cargo.toml: version = "0.2.0"
    cargo check  # Updates Cargo.lock
    git add Cargo.toml Cargo.lock
    git commit -m "chore: bump version to 0.2.0"
    git push origin main
    
  3. Create and push the tag (must match Cargo.toml version):

    git tag v0.2.0
    git push origin v0.2.0
    
  4. The release workflow runs automatically (.github/workflows/release.yml):

    | Job | What it does | Failure behavior |
    |---|---|---|
    | validate-tag | Confirms tag version matches Cargo.toml | Fails fast — prevents mismatched releases |
    | test | Runs cargo test, clippy, fmt | Blocks build and publish |
    | build | Cross-compiles for all 4 targets | Blocks publish |
    | publish-github | Creates GitHub Release with artifacts | |
    | publish-crate | Publishes to crates.io (if enabled) | Non-blocking — GitHub Release still succeeds |
  5. Verify the release at https://github.com/fcenedes/RedCouch/releases:

    • All 4 target archives present
    • SHA-256 checksum files present
    • Release notes auto-generated

What the workflow does NOT do

  • It does not deploy the module to any running Redis instance.
  • It does not publish to crates.io unless PUBLISH_CRATE is explicitly enabled (see below).
  • It does not run integration tests or benchmarks — only unit tests, clippy, and fmt.

crates.io Publication (Optional)

crates.io publication is policy-gated and disabled by default. To enable:

  1. Set repository variable PUBLISH_CRATE to true in GitHub Settings → Variables.
  2. Add a CARGO_REGISTRY_TOKEN secret with a valid crates.io API token.
  3. The publish-crate job runs automatically after a successful GitHub Release.

This gate exists because crates.io publication is irreversible — once a version is published, it cannot be unpublished (only yanked).

CI Pipeline

The CI workflow (.github/workflows/ci.yml) runs on every push to main and on pull requests:

| Check | Platforms | Command |
|---|---|---|
| Build check | Ubuntu + macOS | cargo check --all-targets |
| Tests | Ubuntu + macOS | cargo test |
| Linting | Ubuntu + macOS | cargo clippy --all-targets -- -D warnings |
| Formatting | Ubuntu + macOS | cargo fmt --check |
| Documentation | Ubuntu | RUSTDOCFLAGS="-D warnings" cargo doc --no-deps |

All checks must pass before a PR can be merged.

Verification Commands

# Build check
cargo check --all-targets

# Unit/protocol tests (221 tests — no Redis required)
cargo test

# Integration tests (requires Redis 8+ with module loaded)
cd tests/integration && bash run_e2e.sh

# Benchmark suite
cd benchmarks && bash run_benchmarks.sh

# Stress/soak validation
cd benchmarks && bash run_stress_soak.sh

# Cross-system comparison (requires Docker)
bash benchmarks/run_cross_system.sh

Contributing

Thank you for your interest in contributing to RedCouch!

Prerequisites

  • Rust 1.85+ (stable toolchain)
  • Redis 8.x (for integration testing; verified on 8.4.0)
  • Python 3.10+ (for E2E and benchmark scripts)
  • Git

Getting Started

git clone https://github.com/fcenedes/RedCouch.git
cd RedCouch
cargo build --release
cargo test
cargo fmt --check
cargo clippy --all-targets -- -D warnings

Development Workflow

1. Build and Test Locally

cargo check --all-targets     # Quick check
cargo test                     # All 221 tests
cargo clippy --all-targets -- -D warnings
cargo fmt --check

2. Integration Testing (requires Redis 8+)

redis-server --loadmodule ./target/release/libred_couch.dylib  # macOS
redis-server --loadmodule ./target/release/libred_couch.so     # Linux

# In a separate terminal:
cd tests/integration && bash run_e2e.sh

3. Benchmarks

cd benchmarks && bash run_benchmarks.sh
cd benchmarks && bash run_stress_soak.sh

Code Structure

| File | Responsibility |
|---|---|
| src/lib.rs | Module entry, TCP listener, connection handling, Redis dispatch |
| src/protocol.rs | Binary protocol parser/encoder, types, constants |
| src/ascii.rs | ASCII text protocol parser, meta prefix routing |
| src/meta.rs | Meta protocol parser, flag validation |

See the Architecture chapter for detailed architecture documentation.

Coding Standards

  • No unsafe code: The crate uses #![forbid(unsafe_code)].
  • Formatting: Run cargo fmt before committing. CI enforces cargo fmt --check.
  • Linting: Run cargo clippy --all-targets -- -D warnings. CI enforces zero warnings.
  • Documentation: Run cargo doc --no-deps with RUSTDOCFLAGS="-D warnings". CI enforces clean doc builds.
  • Tests: Add tests for new protocol commands or behavior changes. Tests should run without a live Redis instance (use #[cfg(not(test))] guards for Redis-dependent code).

Pull Request Process

  1. Fork and branch: Create a feature branch from main.
  2. Make changes: Keep changes focused and minimal.
  3. Test locally: Run cargo test, cargo clippy, and cargo fmt --check.
  4. Write tests: Add or update tests to cover your changes.
  5. Submit PR: Target the main branch. Describe what changed and why.
  6. CI checks: Your PR must pass all CI checks on both Ubuntu and macOS.

What to Contribute

  • Bug fixes and protocol conformance improvements
  • Test coverage expansion
  • Performance improvements (with benchmark evidence)
  • Documentation improvements

Please open an issue first for larger changes or new features.

License

By contributing, you agree that your contributions will be licensed under the MIT License.

Test Architecture

RedCouch has a layered test strategy: fast unit tests that run without Redis, integration tests that exercise the full protocol stack against a live Redis instance, and benchmark/stress suites for performance validation.

Test Summary

| Category | Count | Location | Requires Redis? |
|---|---|---|---|
| Binary protocol unit tests | 76 | src/protocol.rs | No |
| ASCII protocol unit tests | 97 | src/ascii.rs | No |
| Meta protocol unit tests | 48 | src/meta.rs | No |
| Integration/E2E tests | Suite | tests/integration/test_binary_protocol.py | Yes (Redis 8+) |
| Benchmark workloads | 10+ profiles | benchmarks/bench_binary_protocol.py | Yes (Redis 8+) |
| Stress/soak workloads | 7 phases | benchmarks/stress_soak_validation.py | Yes (Redis 8+) |

Total unit test count: 221 tests (76 + 97 + 48) — all run via cargo test without Redis.

Host-Process Testing (No Redis Required)

The key design decision in RedCouch's test architecture is that protocol parsing and encoding tests run without Redis. This is achieved through #[cfg(not(test))] guards that exclude Redis allocator and module dependencies during cargo test. The test binary runs as a normal host process, not inside Redis.

# Run all 221 unit tests (no Redis needed)
cargo test

What the unit tests cover

| Area | Examples |
|---|---|
| Parser round-trips | Encode a request → parse it → verify fields match |
| Opcode coverage | Every supported opcode has at least one test |
| Quiet/base mapping | Quiet opcodes map to correct base opcodes |
| Frame building | Response frames have correct header fields, body length, CAS |
| Malformed handling | Truncated headers, oversized bodies, invalid opcodes |
| Binary-safe payloads | Non-UTF-8 bytes preserved through encode/decode |
| CAS preservation | CAS tokens round-trip through request/response |
| Key validation | Empty keys, oversized keys, boundary lengths |
| Meta flag validation | Unsupported flags rejected, proxy hints accepted |
| Property-based sweeps | Exhaustive opcode and size combinations |

Writing new unit tests

New protocol tests follow a consistent pattern:

  1. Construct a request using the protocol builder functions
  2. Parse it using the protocol parser
  3. Assert the parsed fields match expectations

Tests live in #[cfg(test)] mod tests blocks within each protocol module. Example pattern from src/protocol.rs:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_set_request() {
        // Build a SET request with known key, value, flags, expiry
        let request = build_request(Opcode::Set, key, extras, value);
        // Parse it back
        let parsed = parse_request(&request).unwrap();
        // Verify all fields
        assert_eq!(parsed.opcode, Opcode::Set);
        assert_eq!(parsed.key, key);
        // ...
    }
}

Integration Testing (Requires Redis 8+)

Integration tests exercise the full stack: TCP connection → protocol parsing → Redis execution → response encoding.

Setup

# Build the module
cargo build --release

# Start Redis with the module loaded
redis-server --loadmodule ./target/release/libred_couch.dylib  # macOS
redis-server --loadmodule ./target/release/libred_couch.so     # Linux

# Verify the module is loaded and listening
redis-cli MODULE LIST
nc -z 127.0.0.1 11210 && echo "RedCouch listening"

Running E2E tests

# In a separate terminal (Redis must be running with module)
cd tests/integration && bash run_e2e.sh

The E2E test suite (tests/integration/test_binary_protocol.py) uses raw socket connections to send binary protocol frames and verify responses. It covers all 34 binary opcodes with correct and error cases.
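
Each response frame the suite checks begins with the standard 24-byte binary response header. A stdlib sketch of decoding it (field layout per the memcached binary protocol; illustrative, not the suite's own code):

```python
import struct

def parse_response_header(frame: bytes) -> dict:
    """Decode the 24-byte memcached binary-protocol response header."""
    (magic, opcode, key_len, extras_len, _data_type, status,
     body_len, opaque, cas) = struct.unpack("!BBHBBHIIQ", frame[:24])
    return {"magic": magic, "opcode": opcode, "key_len": key_len,
            "extras_len": extras_len, "status": status,
            "body_len": body_len, "opaque": opaque, "cas": cas}

# A successful GET response header: magic 0x81, status 0x0000
hdr = struct.pack("!BBHBBHIIQ", 0x81, 0x00, 0, 4, 0, 0, 9, 0, 7)
fields = parse_response_header(hdr)
```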

Benchmark and Stress Testing

Benchmarks and stress tests are separate from the correctness test suite. They require Redis 8+ with the module loaded and produce JSON result files.

# Performance benchmark (10 workload profiles, 2 concurrency levels)
cd benchmarks && bash run_benchmarks.sh

# Stress/soak validation (7 phases — concurrency scaling, soak, connection churn)
cd benchmarks && bash run_stress_soak.sh

# Three-way cross-system comparison (requires Docker)
bash benchmarks/run_cross_system.sh

Results are stored in benchmarks/results/ as timestamped JSON files. See Benchmarks & Performance for interpretation and measured data.

CI Integration

The CI pipeline runs unit tests automatically on every push and PR:

# What CI runs (Ubuntu + macOS):
cargo check --all-targets
cargo test
cargo clippy --all-targets -- -D warnings
cargo fmt --check
RUSTDOCFLAGS="-D warnings" cargo doc --no-deps

Integration tests and benchmarks are not run in CI — they require a live Redis instance with the module loaded. Run them locally before submitting performance-sensitive changes.

API Reference

The full Rust API reference is auto-generated from source-code doc comments by rustdoc and published alongside this book. On the published documentation site, navigate directly to the links below.

Browse Online

Open the full API reference →

Key Entry Points

| Module | Description | Link |
|---|---|---|
| red_couch | Crate root — module registration, TCP listener, connection handling | red_couch |
| red_couch::protocol | Binary protocol types, opcode enum, request parser, response encoder | protocol |
| red_couch::ascii | ASCII text protocol parser and command dispatch | ascii |
| red_couch::meta | Meta protocol parser, flag validation, command types | meta |

Generating Locally

# Build and open in your browser
RUSTDOCFLAGS="-D warnings" cargo doc --no-deps --open

The generated docs cover all public types, functions, and modules in the red_couch crate. They are rebuilt from source on every change, so they always reflect the current code.

How It Works

The GitHub Actions Pages workflow builds both the mdBook site and the rustdoc output in a single job:

  1. mdbook build produces the narrative documentation in book/output/.
  2. cargo doc --no-deps produces the API reference in target/doc/.
  3. The workflow copies target/doc/ into book/output/api/, so the final Pages artifact has the structure:
    book/output/
    ├── index.html          ← mdBook site root
    ├── api/
    │   └── red_couch/      ← rustdoc API reference
    │       ├── index.html
    │       ├── protocol/
    │       ├── ascii/
    │       └── meta/
    └── ...                 ← other book chapters
    

The links on this page point into the api/ subtree, so they work on the published site.

CI Validation

The CI pipeline validates that both the API documentation and the book build cleanly on every push to main and on pull requests:

RUSTDOCFLAGS="-D warnings" cargo doc --no-deps
mdbook build

This ensures documentation stays in sync with the code and catches broken links or build errors before merging.