*intrinsic latency* because it's mostly opaque to the programmer. If the Redis instance has high latency regardless of all the obvious potential causes, it's worth checking the best your system can do by running `redis-cli` in this special mode directly on the system where the Redis server runs. By measuring the intrinsic latency, you know the baseline: Redis cannot outdo your system. To run the CLI in this mode, use the `--intrinsic-latency` option. Note that the test time is in seconds and dictates how long the test should run.

```
$ ./redis-cli --intrinsic-latency 5
Max latency so far: 1 microseconds.
Max latency so far: 7 microseconds.
Max latency so far: 9 microseconds.
Max latency so far: 11 microseconds.
Max latency so far: 13 microseconds.
Max latency so far: 15 microseconds.
Max latency so far: 34 microseconds.
Max latency so far: 82 microseconds.
Max latency so far: 586 microseconds.
Max latency so far: 739 microseconds.

65433042 total runs (avg latency: 0.0764 microseconds / 764.14 nanoseconds per run).
Worst run took 9671x longer than the average latency.
```

IMPORTANT: this command must be executed on the computer that runs the Redis server instance, not on a different host. It does not connect to a Redis instance; it performs the test locally.

In the above case, the system cannot do better than 739 microseconds of worst-case latency, so one can expect certain queries to occasionally take close to 1 millisecond.

## Remote backups of RDB files

During a Redis replication's first synchronization, the primary and the replica exchange the whole data set in the form of an RDB file. `redis-cli` exploits this feature to provide a remote backup facility that allows the transfer of an RDB file from any Redis instance to the local computer running `redis-cli`.
To use this mode, call the CLI with the `--rdb` option:

```
$ redis-cli --rdb /tmp/dump.rdb
SYNC sent to master, writing 13256 bytes to '/tmp/dump.rdb'
Transfer finished with success.
```

This is a simple but effective way to make sure disaster recovery RDB backups of your Redis instance exist. When using this option in scripts or `cron` jobs, make sure to check the return value of the command. If it is non-zero, an error occurred, as in the following example:

```
$ redis-cli --rdb /tmp/dump.rdb
SYNC with master failed: -ERR Can't SYNC while not connected with my master
$ echo $?
1
```

## Replica mode

The replica mode of the CLI is an advanced feature useful for Redis developers and for debugging operations. It allows for the inspection of the content a primary sends in the replication stream in order to propagate writes to its replicas. The option name is simply `--replica`. The following is a working example:

```
$ redis-cli --replica
SYNC with master, discarding 13256 bytes of bulk transfer...
SYNC done. Logging commands from master.
"PING"
"SELECT","0"
"SET","last_name","Enigk"
"PING"
"INCR","mycounter"
```

The command begins by discarding the RDB file of the first synchronization and then logs each command received, in CSV format. If you think some of the commands are not replicated correctly to your replicas, this is a good way to check what's happening, and it also provides useful information for improving a bug report.
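Because `--replica` logs each propagated command in plain CSV, the output lends itself to post-processing with any CSV reader. As a minimal illustration (not part of `redis-cli`), this Python sketch splits a few of the sample lines above into command names and arguments:

```python
import csv
import io

# Sample lines as printed by `redis-cli --replica` (taken from the output above).
replica_log = '"SELECT","0"\n"SET","last_name","Enigk"\n"INCR","mycounter"\n'

# Each CSV line is one replicated command: the first field is the command
# name, the remaining fields are its arguments.
for row in csv.reader(io.StringIO(replica_log)):
    command, args = row[0], row[1:]
    print(command, args)
```

In practice you would pipe the live output of `redis-cli --replica` into such a script instead of using a hard-coded string.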
## Performing an LRU simulation

Redis is often used as a cache with [LRU eviction](/topics/lru-cache). Depending on the number of keys and the amount of memory allocated for the cache (specified via the `maxmemory` directive), the amount of cache hits and misses will change. Sometimes, simulating the rate of hits is very useful to correctly provision your cache.

The `redis-cli` has a special mode where it performs a simulation of GET and SET operations, using an 80-20% power law distribution in the requests pattern. This means that 20% of keys will be requested 80% of the time, which is a common distribution in caching scenarios.

Theoretically, given the distribution of the requests and the Redis memory overhead, it should be possible to compute the hit rate analytically with a mathematical formula. However, Redis can be configured with different LRU settings (number of samples), and the LRU implementation, which is approximated in Redis, changes a lot between different versions. Similarly, the amount of memory per key may change between versions. That is why this tool was built: its main motivation was testing the quality of Redis' LRU implementation, but it is now also useful for testing how a given version behaves with the settings originally intended for a deployment.

To use this mode, specify the number of keys in the test and configure a sensible `maxmemory` setting as a first attempt.

IMPORTANT NOTE: Configuring the `maxmemory` setting in the Redis configuration is crucial: if there is no cap on the maximum memory usage, the hit rate will eventually be 100%, since all the keys can be stored in memory. Conversely, if too many keys are specified relative to the maximum memory, eventually all of the computer's RAM will be used. It is also necessary to configure an appropriate *maxmemory policy*; most of the time `allkeys-lru` is selected.

In the following example, a memory limit of 100MB is configured along with an LRU simulation using 10 million keys.
WARNING: the test uses pipelining and will stress the server; don't use it with production instances.

```
$ ./redis-cli --lru-test 10000000
156000 Gets/sec | Hits: 4552 (2.92%) | Misses: 151448 (97.08%)
153750 Gets/sec | Hits: 12906 (8.39%) | Misses: 140844 (91.61%)
159250 Gets/sec | Hits: 21811 (13.70%) | Misses: 137439 (86.30%)
151000 Gets/sec | Hits: 27615 (18.29%) | Misses: 123385 (81.71%)
145000 Gets/sec | Hits: 32791 (22.61%) | Misses: 112209 (77.39%)
157750 Gets/sec | Hits: 42178 (26.74%) | Misses: 115572 (73.26%)
154500 Gets/sec | Hits: 47418 (30.69%) | Misses: 107082 (69.31%)
151250 Gets/sec | Hits: 51636 (34.14%) | Misses: 99614 (65.86%)
```

The program shows stats every second. In the first seconds the cache starts to be populated. The misses rate later stabilizes into the actual figure that can be expected:

```
120750 Gets/sec | Hits: 48774 (40.39%) | Misses: 71976 (59.61%)
122500 Gets/sec | Hits: 49052 (40.04%) | Misses: 73448 (59.96%)
127000 Gets/sec | Hits: 50870 (40.06%) | Misses: 76130 (59.94%)
124250 Gets/sec | Hits: 50147 (40.36%) | Misses: 74103 (59.64%)
```

A miss rate of 59% may not be acceptable for certain use cases, therefore 100MB of memory is not enough. Observe an example using half a gigabyte of memory. After several minutes the output stabilizes to the following figures:

```
140000 Gets/sec | Hits: 135376 (96.70%) | Misses: 4624 (3.30%)
141250 Gets/sec | Hits: 136523 (96.65%) | Misses: 4727 (3.35%)
140250 Gets/sec | Hits: 135457 (96.58%) | Misses: 4793 (3.42%)
140500 Gets/sec | Hits: 135947 (96.76%) | Misses: 4553 (3.24%)
```

With 500MB there is sufficient space for the key quantity (10 million) and distribution (80-20 style).

Source: https://github.com/redis/redis-doc/blob/master//docs/connect/cli.md
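The effect `--lru-test` measures can also be sketched in miniature. The following Python snippet (an illustration, not part of `redis-cli`) drives a small LRU cache with an 80-20-style skewed access pattern and reports the hit rate; as with the real test, shrinking the cache capacity lowers the hit rate:

```python
import random
from collections import OrderedDict

def lru_hit_rate(keys: int, capacity: int, requests: int, seed: int = 42) -> float:
    """Simulate GETs against an LRU cache holding at most `capacity` keys."""
    rng = random.Random(seed)
    cache = OrderedDict()  # insertion/recency-ordered: front = least recently used
    hits = 0
    for _ in range(requests):
        # 80% of requests target the "hot" 20% of the keyspace.
        if rng.random() < 0.8:
            key = rng.randrange(keys // 5)
        else:
            key = rng.randrange(keys)
        if key in cache:
            hits += 1
            cache.move_to_end(key)         # mark as most recently used
        else:
            cache[key] = None              # simulated SET after a miss
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used key
    return hits / requests

print(lru_hit_rate(keys=10_000, capacity=2_000, requests=50_000))
```

Note this models exact LRU over a toy keyspace; Redis uses an approximated LRU and real memory accounting, which is precisely why the built-in `--lru-test` mode exists.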
You can connect to Redis in the following ways:

* With the `redis-cli` command line tool
* With RedisInsight as a graphical user interface
* Via a client library for your programming language

## Redis command line interface

The [Redis command line interface](/docs/connect/cli) (also known as `redis-cli`) is a terminal program that sends commands to and reads replies from the Redis server. It has the following two main modes:

1. An interactive Read Eval Print Loop (REPL) mode where the user types Redis commands and receives replies.
2. A command mode where `redis-cli` is executed with additional arguments, and the reply is printed to the standard output.

## RedisInsight

[RedisInsight](/docs/connect/insight) combines a graphical user interface with Redis CLI to let you work with any Redis deployment. You can visually browse and interact with data, take advantage of diagnostic tools, learn by example, and much more. Best of all, RedisInsight is free.

## Client libraries

It's easy to connect your application to a Redis database. The official client libraries cover the following languages:

* [C#/.NET](/docs/connect/clients/dotnet)
* [Go](/docs/connect/clients/go)
* [Java](/docs/connect/clients/java)
* [Node.js](/docs/connect/clients/nodejs)
* [Python](/docs/connect/clients/python)

You can find a complete list of all client libraries, including community-maintained ones, on the [clients page](/resources/clients/).

Source: https://github.com/redis/redis-doc/blob/master//docs/connect/_index.md

---
Install Redis and the Redis client, then connect your Go application to a Redis database.

## go-redis

[go-redis](https://github.com/redis/go-redis) provides Go clients for various flavors of Redis and a type-safe API for each Redis command.

### Install

`go-redis` supports the last two Go versions and only works with Go modules, so first initialize a Go module:

```
go mod init github.com/my/repo
```

To install go-redis/v9:

```
go get github.com/redis/go-redis/v9
```

### Connect

To connect to a Redis server:

```go
import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
)

func main() {
    client := redis.NewClient(&redis.Options{
        Addr:     "localhost:6379",
        Password: "", // no password set
        DB:       0,  // use default DB
    })
}
```

Another way to connect is using a connection string.

```go
opt, err := redis.ParseURL("redis://:@localhost:6379/")
if err != nil {
    panic(err)
}

client := redis.NewClient(opt)
```

Store and retrieve a simple string.

```go
ctx := context.Background()

err := client.Set(ctx, "foo", "bar", 0).Err()
if err != nil {
    panic(err)
}

val, err := client.Get(ctx, "foo").Result()
if err != nil {
    panic(err)
}
fmt.Println("foo", val)
```

Store and retrieve a map.

```go
session := map[string]string{"name": "John", "surname": "Smith", "company": "Redis", "age": "29"}
for k, v := range session {
    err := client.HSet(ctx, "user-session:123", k, v).Err()
    if err != nil {
        panic(err)
    }
}

userSession := client.HGetAll(ctx, "user-session:123").Val()
fmt.Println(userSession)
```

#### Connect to a Redis cluster

To connect to a Redis cluster, use `NewClusterClient`.

```go
client := redis.NewClusterClient(&redis.ClusterOptions{
    Addrs: []string{":16379", ":16380", ":16381", ":16382", ":16383", ":16384"},

    // To route commands by latency or randomly, enable one of the following.
    //RouteByLatency: true,
    //RouteRandomly: true,
})
```

#### Connect to your production Redis with TLS

When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines.

Establish a secure connection with your Redis database using this snippet.

```go
// Load client cert
cert, err := tls.LoadX509KeyPair("redis_user.crt", "redis_user_private.key")
if err != nil {
    log.Fatal(err)
}

// Load CA cert
caCert, err := os.ReadFile("redis_ca.pem")
if err != nil {
    log.Fatal(err)
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM(caCert)

client := redis.NewClient(&redis.Options{
    Addr:     "my-redis.cloud.redislabs.com:6379",
    Username: "default", // use your Redis user. More info https://redis.io/docs/management/security/acl/
    Password: "secret",  // use your Redis password
    TLSConfig: &tls.Config{
        MinVersion:   tls.VersionTLS12,
        Certificates: []tls.Certificate{cert},
        RootCAs:      caCertPool,
    },
})

// send SET command
err = client.Set(ctx, "foo", "bar", 0).Err()
if err != nil {
    panic(err)
}

// send GET command and print the value
val, err := client.Get(ctx, "foo").Result()
if err != nil {
    panic(err)
}
fmt.Println("foo", val)
```

#### dial tcp: i/o timeout

You get a `dial tcp: i/o timeout` error when `go-redis` can't connect to the Redis server, for example, when the server is down or the port is protected by a firewall. To check whether a Redis server is listening on the port, run `telnet` on the host where the `go-redis` client is running.

```
telnet localhost 6379
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
```

If you use Docker, Istio, or any other service mesh/sidecar, make sure the app starts after the container is fully available, for example, by configuring healthchecks with Docker and `holdApplicationUntilProxyStarts` with Istio. For more information, see [Healthcheck](https://docs.docker.com/engine/reference/run/#healthcheck).
### Learn more

* [Documentation](https://redis.uptrace.dev/guide/)
* [GitHub](https://github.com/redis/go-redis)

Source: https://github.com/redis/redis-doc/blob/master//docs/connect/clients/go.md
Install Redis and the Redis client, then connect your Python application to a Redis database.

## redis-py

Get started with the [redis-py](https://github.com/redis/redis-py) client for Redis.

`redis-py` requires a running Redis or [Redis Stack](/docs/getting-started/install-stack/) server. See [Getting started](/docs/getting-started/) for Redis installation instructions.

### Install

To install `redis-py`, enter:

```bash
pip install redis
```

For faster performance, install Redis with [`hiredis`](https://github.com/redis/hiredis) support. This provides a compiled response parser and, in most cases, requires zero code changes. By default, if `hiredis` >= 1.0 is available, `redis-py` attempts to use it for response parsing.

{{% alert title="Note" %}}
The Python `distutils` packaging scheme is no longer part of Python 3.12 and greater. If you're having difficulties getting `redis-py` installed in a Python 3.12 environment, consider updating to a recent release of `redis-py`.
{{% /alert %}}

```bash
pip install redis[hiredis]
```

### Connect

Connect to localhost on port 6379, set a value in Redis, and retrieve it. All responses are returned as bytes in Python. To receive decoded strings, set `decode_responses=True`. For more connection options, see [these examples](https://redis.readthedocs.io/en/stable/examples.html).

```python
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
```

Store and retrieve a simple string.

```python
r.set('foo', 'bar')
# True
r.get('foo')
# bar
```

Store and retrieve a dict.

```python
r.hset('user-session:123', mapping={
    'name': 'John',
    "surname": 'Smith',
    "company": 'Redis',
    "age": 29
})
# True

r.hgetall('user-session:123')
# {'surname': 'Smith', 'name': 'John', 'company': 'Redis', 'age': '29'}
```

#### Connect to a Redis cluster

To connect to a Redis cluster, use `RedisCluster`.
```python
from redis.cluster import RedisCluster

rc = RedisCluster(host='localhost', port=16379)
print(rc.get_nodes())
# [[host=127.0.0.1,port=16379,name=127.0.0.1:16379,server_type=primary,redis_connection=Redis>>], ...

rc.set('foo', 'bar')
# True
rc.get('foo')
# b'bar'
```

For more information, see [redis-py Clustering](https://redis-py.readthedocs.io/en/stable/clustering.html).

#### Connect to your production Redis with TLS

When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines.

```python
import redis

r = redis.Redis(
    host="my-redis.cloud.redislabs.com", port=6379,
    username="default",  # use your Redis user. More info https://redis.io/docs/management/security/acl/
    password="secret",   # use your Redis password
    ssl=True,
    ssl_certfile="./redis_user.crt",
    ssl_keyfile="./redis_user_private.key",
    ssl_ca_certs="./redis_ca.pem",
)
r.set('foo', 'bar')
# True
r.get('foo')
# b'bar'
```

For more information, see [redis-py TLS examples](https://redis-py.readthedocs.io/en/stable/examples/ssl_connection_examples.html).

### Example: Indexing and querying JSON documents

Make sure that you have Redis Stack and `redis-py` installed. Import dependencies:

```python
import redis
from redis.commands.json.path import Path
import redis.commands.search.aggregation as aggregations
import redis.commands.search.reducers as reducers
from redis.commands.search.field import TextField, NumericField, TagField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import NumericFilter, Query
```

Connect to your Redis database.

```python
r = redis.Redis(host='localhost', port=6379)
```

Let's create some test data to add to your database.
```python
user1 = {
    "name": "Paul John",
    "email": "paul.john@example.com",
    "age": 42,
    "city": "London"
}
user2 = {
    "name": "Eden Zamir",
    "email": "eden.zamir@example.com",
    "age": 29,
    "city": "Tel Aviv"
}
user3 = {
    "name": "Paul Zamir",
    "email": "paul.zamir@example.com",
    "age": 35,
    "city": "Tel Aviv"
}
```

Define indexed fields and their data types using `schema`. Use JSON path expressions to map specific JSON elements to the schema fields.

```python
schema = (
    TextField("$.name", as_name="name"),
    TagField("$.city", as_name="city"),
    NumericField("$.age", as_name="age")
)
```

Create an index. In this example, all JSON documents with the key prefix `user:` will be indexed. For more information, see [Query syntax](/docs/interact/search-and-query/query/).

```python
rs = r.ft("idx:users")
rs.create_index(
    schema,
    definition=IndexDefinition(
        prefix=["user:"], index_type=IndexType.JSON
    )
)
# b'OK'
```

Use `JSON.SET` to set each user value at the specified path.

```python
r.json().set("user:1", Path.root_path(), user1)
r.json().set("user:2", Path.root_path(), user2)
r.json().set("user:3", Path.root_path(), user3)
```

Let's find user `Paul` and filter the results by age.

```python
res = rs.search(
    Query("Paul @age:[30 40]")
)
# Result{1 total, docs: [Document {'id': 'user:3', 'payload': None, 'json': '{"name":"Paul Zamir","email":"paul.zamir@example.com","age":35,"city":"Tel Aviv"}'}]}
```

Query using JSON Path expressions.

```python
rs.search(
    Query("Paul").return_field("$.city", as_field="city")
).docs
# [Document {'id': 'user:1', 'payload': None, 'city': 'London'}, Document {'id': 'user:3', 'payload': None, 'city': 'Tel Aviv'}]
```
Aggregate your results using `FT.AGGREGATE`.

```python
req = aggregations.AggregateRequest("*").group_by('@city', reducers.count().alias('count'))
print(rs.aggregate(req).rows)
# [[b'city', b'Tel Aviv', b'count', b'2'], [b'city', b'London', b'count', b'1']]
```

### Learn more

* [Command reference](https://redis-py.readthedocs.io/en/stable/commands.html)
* [Tutorials](https://redis.readthedocs.io/en/stable/examples.html)
* [GitHub](https://github.com/redis/redis-py)

Source: https://github.com/redis/redis-doc/blob/master//docs/connect/clients/python.md
Install Redis and the Redis client, then connect your .NET application to a Redis database.

## NRedisStack

[NRedisStack](https://github.com/redis/NRedisStack) is a .NET client for Redis.

`NRedisStack` requires a running Redis or [Redis Stack](https://redis.io/docs/getting-started/install-stack/) server. See [Getting started](/docs/getting-started/) for Redis installation instructions.

### Install

Using the `dotnet` CLI, run:

```
dotnet add package NRedisStack
```

### Connect

Connect to localhost on port 6379.

```csharp
using NRedisStack;
using NRedisStack.RedisStackCommands;
using StackExchange.Redis;
//...
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost");
IDatabase db = redis.GetDatabase();
```

Store and retrieve a simple string.

```csharp
db.StringSet("foo", "bar");
Console.WriteLine(db.StringGet("foo")); // prints bar
```

Store and retrieve a HashMap.

```csharp
var hash = new HashEntry[] {
    new HashEntry("name", "John"),
    new HashEntry("surname", "Smith"),
    new HashEntry("company", "Redis"),
    new HashEntry("age", "29"),
};
db.HashSet("user-session:123", hash);

var hashFields = db.HashGetAll("user-session:123");
Console.WriteLine(String.Join("; ", hashFields));
// Prints:
// name: John; surname: Smith; company: Redis; age: 29
```

To access Redis Stack capabilities, use the appropriate interface, like this:

```csharp
IBloomCommands bf = db.BF();
ICuckooCommands cf = db.CF();
ICmsCommands cms = db.CMS();
IGraphCommands graph = db.GRAPH();
ITopKCommands topk = db.TOPK();
ITdigestCommands tdigest = db.TDIGEST();
ISearchCommands ft = db.FT();
IJsonCommands json = db.JSON();
ITimeSeriesCommands ts = db.TS();
```

#### Connect to a Redis cluster

To connect to a Redis cluster, specify one or more cluster endpoints in the client configuration:

```csharp
ConfigurationOptions options = new ConfigurationOptions
{
    // list of available nodes of the cluster along with the endpoint port
    EndPoints = {
        { "localhost", 16379 },
        { "localhost", 16380 },
        // ...
    },
};

ConnectionMultiplexer cluster = ConnectionMultiplexer.Connect(options);
IDatabase db = cluster.GetDatabase();

db.StringSet("foo", "bar");
Console.WriteLine(db.StringGet("foo")); // prints bar
```

#### Connect to your production Redis with TLS

When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines.

Before connecting your application to the TLS-enabled Redis server, ensure that your certificates and private keys are in the correct format.

To convert a user certificate and private key from PEM format to `pfx`, use this command:

```bash
openssl pkcs12 -inkey redis_user_private.key -in redis_user.crt -export -out redis.pfx
```

Enter a password to protect your `pfx` file.

Establish a secure connection with your Redis database using this snippet.

```csharp
ConfigurationOptions options = new ConfigurationOptions
{
    EndPoints = { { "my-redis.cloud.redislabs.com", 6379 } },
    User = "default",    // use your Redis user. More info https://redis.io/docs/management/security/acl/
    Password = "secret", // use your Redis password
    Ssl = true,
    SslProtocols = System.Security.Authentication.SslProtocols.Tls12
};

options.CertificateSelection += delegate
{
    return new X509Certificate2("redis.pfx", "secret"); // use the password you specified for the pfx file
};
options.CertificateValidation += ValidateServerCertificate;

bool ValidateServerCertificate(
    object sender,
    X509Certificate? certificate,
    X509Chain? chain,
    SslPolicyErrors sslPolicyErrors)
{
    if (certificate == null) {
        return false;
    }

    var ca = new X509Certificate2("redis_ca.pem");
    bool verdict = (certificate.Issuer == ca.Subject);
    if (verdict) {
        return true;
    }

    Console.WriteLine("Certificate error: {0}", sslPolicyErrors);
    return false;
}

ConnectionMultiplexer muxer = ConnectionMultiplexer.Connect(options);

// Creation of the connection to the DB
IDatabase conn = muxer.GetDatabase();

// send SET command
conn.StringSet("foo", "bar");

// send GET command and print the value
Console.WriteLine(conn.StringGet("foo"));
```

### Example: Indexing and querying JSON documents

This example shows how to convert Redis search results to JSON format using `NRedisStack`.

Make sure that you have Redis Stack and `NRedisStack` installed.

Import dependencies and connect to the Redis server:

```csharp
using NRedisStack;
using NRedisStack.RedisStackCommands;
using NRedisStack.Search;
using NRedisStack.Search.Aggregation;
using NRedisStack.Search.Literals.Enums;
using StackExchange.Redis;

// ...

ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost");
```

Get a reference to the database and for search and JSON commands.

```csharp
var db = redis.GetDatabase();
var ft = db.FT();
var json = db.JSON();
```
Let's create some test data to add to your database.

```csharp
var user1 = new {
    name = "Paul John",
    email = "paul.john@example.com",
    age = 42,
    city = "London"
};
var user2 = new {
    name = "Eden Zamir",
    email = "eden.zamir@example.com",
    age = 29,
    city = "Tel Aviv"
};
var user3 = new {
    name = "Paul Zamir",
    email = "paul.zamir@example.com",
    age = 35,
    city = "Tel Aviv"
};
```

Create an index. In this example, all JSON documents with the key prefix `user:` are indexed. For more information, see [Query syntax](/docs/interact/search-and-query/query/).

```csharp
var schema = new Schema()
    .AddTextField(new FieldName("$.name", "name"))
    .AddTagField(new FieldName("$.city", "city"))
    .AddNumericField(new FieldName("$.age", "age"));

ft.Create(
    "idx:users",
    new FTCreateParams().On(IndexDataType.JSON).Prefix("user:"),
    schema);
```

Use `JSON.SET` to set each user value at the specified path.

```csharp
json.Set("user:1", "$", user1);
json.Set("user:2", "$", user2);
json.Set("user:3", "$", user3);
```

Let's find user `Paul` and filter the results by age.

```csharp
var res = ft.Search("idx:users", new Query("Paul @age:[30 40]")).Documents.Select(x => x["json"]);
Console.WriteLine(string.Join("\n", res));
// Prints: {"name":"Paul Zamir","email":"paul.zamir@example.com","age":35,"city":"Tel Aviv"}
```

Return only the `city` field.

```csharp
var res_cities = ft.Search("idx:users", new Query("Paul").ReturnFields(new FieldName("$.city", "city"))).Documents.Select(x => x["city"]);
Console.WriteLine(string.Join(", ", res_cities));
// Prints: London, Tel Aviv
```

Count all users in the same city.

```csharp
var request = new AggregationRequest("*").GroupBy("@city", Reducers.Count().As("count"));
var result = ft.Aggregate("idx:users", request);
```

Source: https://github.com/redis/redis-doc/blob/master//docs/connect/clients/dotnet.md
Install Redis and the Redis client, then connect your Node.js application to a Redis database. ## node-redis [node-redis](https://github.com/redis/node-redis) is a modern, high-performance Redis client for Node.js. `node-redis` requires a running Redis or [Redis Stack](https://redis.io/docs/getting-started/install-stack/) server. See [Getting started](/docs/getting-started/) for Redis installation instructions. ### Install To install node-redis, run: ``` npm install redis ``` ### Connect Connect to localhost on port 6379. ```js import { createClient } from 'redis'; const client = createClient(); client.on('error', err => console.log('Redis Client Error', err)); await client.connect(); ``` Store and retrieve a simple string. ```js await client.set('key', 'value'); const value = await client.get('key'); ``` Store and retrieve a map. ```js await client.hSet('user-session:123', { name: 'John', surname: 'Smith', company: 'Redis', age: 29 }) let userSession = await client.hGetAll('user-session:123'); console.log(JSON.stringify(userSession, null, 2)); /\* { "surname": "Smith", "name": "John", "company": "Redis", "age": "29" } \*/ ``` To connect to a different host or port, use a connection string in the format `redis[s]://[[username][:password]@][host][:port][/db-number]`: ```js createClient({ url: 'redis://alice:foobared@awesome.redis.server:6380' }); ``` To check if the client is connected and ready to send commands, use `client.isReady`, which returns a Boolean. `client.isOpen` is also available. This returns `true` when the client's underlying socket is open, and `false` when it isn't (for example, when the client is still connecting or reconnecting after a network error). #### Connect to a Redis cluster To connect to a Redis cluster, use `createCluster`. ```js import { createCluster } from 'redis'; const cluster = createCluster({ rootNodes: [ { url: 'redis://127.0.0.1:16379' }, { url: 'redis://127.0.0.1:16380' }, // ... 
] }); cluster.on('error', (err) => console.log('Redis Cluster Error', err)); await cluster.connect(); await cluster.set('foo', 'bar'); const value = await cluster.get('foo'); console.log(value); // returns 'bar' await cluster.quit(); ``` #### Connect to your production Redis with TLS When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines. ```js const client = createClient({ username: 'default', // use your Redis user. More info https://redis.io/docs/management/security/acl/ password: 'secret', // use your password here socket: { host: 'my-redis.cloud.redislabs.com', port: 6379, tls: true, key: readFileSync('./redis\_user\_private.key'), cert: readFileSync('./redis\_user.crt'), ca: [readFileSync('./redis\_ca.pem')] } }); client.on('error', (err) => console.log('Redis Client Error', err)); await client.connect(); await client.set('foo', 'bar'); const value = await client.get('foo'); console.log(value) // returns 'bar' await client.disconnect(); ``` You can also use discrete parameters and UNIX sockets. Details can be found in the [client configuration guide](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md). ### Production usage #### Handling errors Node-Redis provides [multiple events to handle various scenarios](https://github.com/redis/node-redis?tab=readme-ov-file#events), among which the most critical is the `error` event. This event is triggered whenever an error occurs within the client. It is crucial to listen for error events. If a client does not register at least one error listener and an error occurs, the system will throw that error, potentially causing the Node.js process to exit unexpectedly. See [the EventEmitter docs](https://nodejs.org/api/events.html#events\_error\_events) for more details. ```typescript const client = createClient({ // ... 
  // client options
});

// Always ensure there's a listener for errors in the client to prevent process crashes due to unhandled errors
client.on('error', error => {
    console.error(`Redis client error:`, error);
});
```

#### Handling reconnections

If network issues or other problems unexpectedly close the socket, the client will reject all commands already sent, since the server might have already executed them. The rest of the pending commands will remain queued in memory until a new socket is established. This behaviour is controlled by the `enableOfflineQueue` option, which is enabled by default.
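As background for the reconnection strategies covered in this section, the default backoff arithmetic (`Math.min(retries * 50, 500)`) can be checked in isolation. A minimal sketch; the helper name is illustrative, not a node-redis export:

```javascript
// Default node-redis backoff: grows by 50 ms per attempt, capped at 500 ms.
// `defaultReconnectDelay` is an illustrative name, not part of the node-redis API.
function defaultReconnectDelay(retries) {
  return Math.min(retries * 50, 500);
}

const delays = [1, 2, 5, 10, 20].map(defaultReconnectDelay);
console.log(delays); // [ 50, 100, 250, 500, 500 ]
```

The cap means a client facing a prolonged outage settles into retrying roughly twice per second instead of backing off indefinitely.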
The client uses `reconnectStrategy` to decide when to attempt to reconnect. The default strategy is to calculate the delay before each attempt based on the attempt number: `Math.min(retries * 50, 500)`. You can customize this strategy by passing a supported value to the `reconnectStrategy` option:

1. Define a callback `(retries: number, cause: Error) => false | number | Error` **(recommended)**

```typescript
const client = createClient({
  socket: {
    reconnectStrategy: function(retries) {
        if (retries > 20) {
            console.log("Too many attempts to reconnect. Redis connection was terminated");
            return new Error("Too many retries.");
        } else {
            return retries * 500;
        }
    }
  }
});

client.on('error', error => console.error('Redis client error:', error));
```

In the provided reconnection strategy callback, the client attempts to reconnect up to 20 times with a delay of `retries * 500` milliseconds between attempts. After approximately two minutes, the client logs an error message and terminates the connection if the maximum retry limit is exceeded.

2. Use a numerical value to set a fixed delay in milliseconds.
3. Use `false` to disable reconnection attempts. This option should only be used for testing purposes.

#### Timeout

To set a timeout for a connection, use the `connectTimeout` option:

```typescript
const client = createClient({
  socket: {
    // setting a 10-second timeout
    connectTimeout: 10000 // in milliseconds
  }
});

client.on('error', error => console.error('Redis client error:', error));
```

### Learn more

* [Node-Redis Configuration Options](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md)
* [Redis commands](https://redis.js.org/#node-redis-usage-redis-commands)
* [Programmability](https://redis.js.org/#node-redis-usage-programmability)
* [Clustering](https://redis.js.org/#node-redis-usage-clustering)
* [GitHub](https://github.com/redis/node-redis)
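A note on the `redis[s]://[[username][:password]@][host][:port][/db-number]` format described earlier: it is a regular URL, so its parts can be inspected with Node's built-in WHATWG `URL` parser. An illustrative check (node-redis performs this parsing for you; the URL below extends the example value from this guide and is not a real server):

```javascript
// Decompose a Redis connection string with the standard URL parser.
const url = new URL('redis://alice:foobared@awesome.redis.server:6380/2');

console.log(url.protocol);          // 'redis:'
console.log(url.username);          // 'alice'
console.log(url.password);          // 'foobared'
console.log(url.hostname);          // 'awesome.redis.server'
console.log(url.port);              // '6380'
console.log(url.pathname.slice(1)); // '2' (the db-number)
```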
Here, you will learn how to connect your application to a Redis database. If you're new to Redis, you might first want to [install Redis with Redis Stack and RedisInsight](/docs/getting-started/install-stack/). For more Redis topics, see [Using](/docs/manual/) and [Managing](/docs/management/) Redis.

If you're ready to get started, see the following guides for the official client libraries you can use with Redis. For a complete list of community-driven clients, see [Clients](/resources/clients/).

## High-level client libraries

The Redis OM client libraries let you use the document modeling, indexing, and querying capabilities of Redis Stack much like the way you'd use an [ORM](https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping). The following Redis OM libraries support Redis Stack:

* [Redis OM .NET](/docs/clients/om-clients/stack-dotnet/)
* [Redis OM Node](/docs/clients/om-clients/stack-node/)
* [Redis OM Python](/docs/clients/om-clients/stack-python/)
* [Redis OM Spring](/docs/clients/om-clients/stack-spring/)

---
Install Redis and the Redis client, then connect your Java application to a Redis database.

## Jedis

[Jedis](https://github.com/redis/jedis) is a Java client for Redis designed for performance and ease of use.

### Install

To include `Jedis` as a dependency in your application, edit the dependency file, as follows.

* If you use **Maven**:

```xml
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>5.1.2</version>
</dependency>
```

* If you use **Gradle**:

```
repositories {
    mavenCentral()
}
//...
dependencies {
    implementation 'redis.clients:jedis:5.1.2'
    //...
}
```

* If you use the JAR files, download the latest Jedis and Apache Commons Pool2 JAR files from [Maven Central](https://central.sonatype.com/) or any other Maven repository.
* Build from [source](https://github.com/redis/jedis)

### Connect

For many applications, it's best to use a connection pool. You can instantiate and use a `Jedis` connection pool like so:

```java
package org.example;

import java.util.HashMap;
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class Main {
    public static void main(String[] args) {
        JedisPool pool = new JedisPool("localhost", 6379);

        try (Jedis jedis = pool.getResource()) {
            // Store & Retrieve a simple string
            jedis.set("foo", "bar");
            System.out.println(jedis.get("foo")); // prints bar

            // Store & Retrieve a HashMap
            Map<String, String> hash = new HashMap<>();
            hash.put("name", "John");
            hash.put("surname", "Smith");
            hash.put("company", "Redis");
            hash.put("age", "29");
            jedis.hset("user-session:123", hash);
            System.out.println(jedis.hgetAll("user-session:123"));
            // Prints: {name=John, surname=Smith, company=Redis, age=29}
        }
    }
}
```

Because adding a `try-with-resources` block for each command can be cumbersome, consider using `JedisPooled` as an easier way to pool connections.

```java
import redis.clients.jedis.JedisPooled;

//...
JedisPooled jedis = new JedisPooled("localhost", 6379);
jedis.set("foo", "bar");
System.out.println(jedis.get("foo")); // prints "bar"
```

#### Connect to a Redis cluster

To connect to a Redis cluster, use `JedisCluster`.

```java
import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.HostAndPort;

//...

Set<HostAndPort> jedisClusterNodes = new HashSet<>();
jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7379));
jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7380));
JedisCluster jedis = new JedisCluster(jedisClusterNodes);
```

#### Connect to your production Redis with TLS

When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines. Before connecting your application to the TLS-enabled Redis server, ensure that your certificates and private keys are in the correct format.

To convert a user certificate and private key from the PEM format to `pkcs12`, use this command:

```
openssl pkcs12 -export -in ./redis_user.crt -inkey ./redis_user_private.key -out redis-user-keystore.p12 -name "redis"
```

Enter a password to protect your `pkcs12` file.

Convert the server (CA) certificate to the JKS format using the [keytool](https://docs.oracle.com/en/java/javase/12/tools/keytool.html) shipped with JDK.

```
keytool -importcert -keystore truststore.jks \
  -storepass REPLACE_WITH_YOUR_PASSWORD \
  -file redis_ca.pem
```

Establish a secure connection with your Redis database using this snippet.
```java
package org.example;

import redis.clients.jedis.*;

import javax.net.ssl.*;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.security.KeyStore;

public class Main {
    public static void main(String[] args) throws GeneralSecurityException, IOException {
        HostAndPort address = new HostAndPort("my-redis-instance.cloud.redislabs.com", 6379);

        SSLSocketFactory sslFactory = createSslSocketFactory(
                "./truststore.jks",
                "secret!", // use the password you specified for the keytool command
                "./redis-user-keystore.p12",
                "secret!" // use the password you specified for the openssl command
        );

        JedisClientConfig config = DefaultJedisClientConfig.builder()
                .ssl(true).sslSocketFactory(sslFactory)
                .user("default") // use your Redis user. More info https://redis.io/docs/management/security/acl/
                .password("secret!") // use your Redis password
                .build();

        JedisPooled jedis = new JedisPooled(address, config);
        jedis.set("foo", "bar");
        System.out.println(jedis.get("foo")); // prints bar
    }

    private static SSLSocketFactory createSslSocketFactory(
            String caCertPath, String caCertPassword, String userCertPath, String userCertPassword)
            throws IOException, GeneralSecurityException {
        KeyStore keyStore = KeyStore.getInstance("pkcs12");
        keyStore.load(new FileInputStream(userCertPath), userCertPassword.toCharArray());

        KeyStore trustStore = KeyStore.getInstance("jks");
        trustStore.load(new FileInputStream(caCertPath), caCertPassword.toCharArray());

        TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance("X509");
        trustManagerFactory.init(trustStore);

        KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("PKIX");
        keyManagerFactory.init(keyStore, userCertPassword.toCharArray());

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(keyManagerFactory.getKeyManagers(), trustManagerFactory.getTrustManagers(), null);

        return sslContext.getSocketFactory();
    }
}
```

### Production usage
#### Connection pool
As mentioned in the previous section, use `JedisPool` or `JedisPooled` to create a connection pool. `JedisPooled`, added in Jedis version 4.0.0, provides capabilities similar to `JedisPool` but with a more straightforward API. A connection pool holds a specified number of connections, creates more connections when necessary, and terminates them when they are no longer needed.

Here is a simplified connection lifecycle in a pool:

1. A connection is requested from the pool.
2. A connection is served:
   - An idle connection is served when non-active connections are available, or
   - A new connection is created when the number of connections is under `maxTotal`.
3. The connection becomes active.
4. The connection is released back to the pool.
5. The connection is marked as stale.
6. The connection is kept idle for `minEvictableIdleTime`.
7. The connection becomes evictable if the number of connections is greater than `minIdle`.
8. The connection is ready to be closed.

It's important to configure the connection pool correctly. Use `GenericObjectPoolConfig` from [Apache Commons Pool2](https://commons.apache.org/proper/commons-pool/apidocs/org/apache/commons/pool2/impl/GenericObjectPoolConfig.html).

```java
ConnectionPoolConfig poolConfig = new ConnectionPoolConfig();

// maximum active connections in the pool,
// tune this according to your needs and application type
// default is 8
poolConfig.setMaxTotal(8);

// maximum idle connections in the pool, default is 8
poolConfig.setMaxIdle(8);

// minimum idle connections in the pool, default 0
poolConfig.setMinIdle(0);

// Enables waiting for a connection to become available.
poolConfig.setBlockWhenExhausted(true);

// The maximum number of seconds to wait for a connection to become available
poolConfig.setMaxWait(Duration.ofSeconds(1));

// Enables sending a PING command periodically while the connection is idle.
poolConfig.setTestWhileIdle(true);

// controls the period between checks for idle connections in the pool
poolConfig.setTimeBetweenEvictionRuns(Duration.ofSeconds(1));

// JedisPooled does all the hard work of fetching connections from, and releasing them to, the pool
// to prevent connection starvation
JedisPooled jedis = new JedisPooled(poolConfig, "localhost", 6379);
```

### Timeout

To set a timeout for a connection, use the `JedisPooled` or `JedisPool` constructor with the `timeout` parameter, or use `JedisClientConfig` with the `socketTimeout` and `connectionTimeout` parameters:

```java
HostAndPort hostAndPort = new HostAndPort("localhost", 6379);

JedisPooled jedisWithTimeout = new JedisPooled(hostAndPort,
    DefaultJedisClientConfig.builder()
        .socketTimeoutMillis(5000)     // set timeout to 5 seconds
        .connectionTimeoutMillis(5000) // set connection timeout to 5 seconds
        .build(),
    poolConfig
);
```

### Exception handling

The Jedis exception hierarchy is rooted at `JedisException`, which extends `RuntimeException`; all Jedis exceptions are therefore unchecked.

```
JedisException
├── JedisDataException
│   ├── JedisRedirectionException
│   │   ├── JedisMovedDataException
│   │   └── JedisAskDataException
│   ├── AbortedTransactionException
│   ├── JedisAccessControlException
│   └── JedisNoScriptException
├── JedisClusterException
│   ├── JedisClusterOperationException
│   ├── JedisConnectionException
│   └── JedisValidationException
└── InvalidURIException
```

#### General exceptions

In general, Jedis can throw the following exceptions while executing commands:

- `JedisConnectionException` - when the connection to Redis is lost or closed unexpectedly. Configure failover to handle this exception automatically with Resilience4J and the built-in Jedis failover mechanism.
- `JedisAccessControlException` - when the user does not have the permission to execute the command or the user ID and/or password are incorrect.
- `JedisDataException` - when there is a problem with the data being sent to or received from the Redis server. Usually, the error message will contain more information about the failed command.
- `JedisException` - this exception is a catch-all exception that can be thrown for any other unexpected errors.

Conditions when `JedisException` can be thrown:

- Bad return from a health check with the `PING` command
- Failure during SHUTDOWN
- Pub/Sub failure when issuing commands (disconnect)
- Any unknown server messages
- Sentinel: can connect to sentinel but master is not monitored or all Sentinels are down.
- MULTI or DISCARD command failed
- Shard commands key hash check failed or no reachable shards
- Retry deadline exceeded/number of attempts (Retry Command
Executor)
- POOL - pool exhausted, error adding idle objects, returning broken resources to the pool

All the Jedis exceptions are runtime exceptions and in most cases irrecoverable, so in general they bubble up to the API, capturing the error message.

## DNS cache and Redis

When you connect to a Redis database with multiple endpoints, such as [Redis Enterprise Active-Active](https://redis.com/redis-enterprise/technology/active-active-geo-distribution/), it's recommended to disable the JVM's DNS cache to load-balance requests across multiple endpoints.

You can do this in your application's code with the following snippet:

```java
java.security.Security.setProperty("networkaddress.cache.ttl","0");
java.security.Security.setProperty("networkaddress.cache.negative.ttl", "0");
```

### Learn more

* [Jedis API reference](https://www.javadoc.io/doc/redis.clients/jedis/latest/index.html)
* [Failover with Jedis](https://github.com/redis/jedis/blob/master/docs/failover.md)
* [GitHub](https://github.com/redis/jedis)
Install Redis and the Redis client, then connect your Lettuce application to a Redis database.

## Lettuce

Lettuce offers a powerful and efficient way to interact with Redis through its asynchronous and reactive APIs. By leveraging these capabilities, you can build high-performance, scalable Java applications that make optimal use of Redis's capabilities.

## Install

To include Lettuce as a dependency in your application, edit the appropriate dependency file as shown below.

If you use Maven, add the following dependency to your `pom.xml`:

```xml
<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>6.3.2.RELEASE</version>
</dependency>
```

If you use Gradle, include this line in your `build.gradle` file:

```
dependencies {
    compile 'io.lettuce:lettuce-core:6.3.2.RELEASE'
}
```

If you wish to use the JAR files directly, download the latest Lettuce and, optionally, Apache Commons Pool2 JAR files from Maven Central or any other Maven repository.

To build from source, see the instructions on the [Lettuce source code GitHub repo](https://github.com/lettuce-io/lettuce-core).

## Connect

Start by creating a connection to your Redis server. There are many ways to achieve this using Lettuce. Here are a few.
### Asynchronous connection

```java
package org.example;

import java.util.*;
import java.util.concurrent.ExecutionException;

import io.lettuce.core.*;
import io.lettuce.core.api.async.RedisAsyncCommands;
import io.lettuce.core.api.StatefulRedisConnection;

public class Async {
    public static void main(String[] args) {
        RedisClient redisClient = RedisClient.create("redis://localhost:6379");

        try (StatefulRedisConnection<String, String> connection = redisClient.connect()) {
            RedisAsyncCommands<String, String> asyncCommands = connection.async();

            // Asynchronously store & retrieve a simple string
            asyncCommands.set("foo", "bar").get();
            System.out.println(asyncCommands.get("foo").get()); // prints bar

            // Asynchronously store key-value pairs in a hash directly
            Map<String, String> hash = new HashMap<>();
            hash.put("name", "John");
            hash.put("surname", "Smith");
            hash.put("company", "Redis");
            hash.put("age", "29");
            asyncCommands.hset("user-session:123", hash).get();
            System.out.println(asyncCommands.hgetall("user-session:123").get());
            // Prints: {name=John, surname=Smith, company=Redis, age=29}
        } catch (ExecutionException | InterruptedException e) {
            throw new RuntimeException(e);
        } finally {
            redisClient.shutdown();
        }
    }
}
```

Learn more about the asynchronous Lettuce API in [the reference guide](https://lettuce.io/core/release/reference/index.html#asynchronous-api).
### Reactive connection

```java
package org.example;

import java.util.*;

import io.lettuce.core.*;
import io.lettuce.core.api.reactive.RedisReactiveCommands;
import io.lettuce.core.api.StatefulRedisConnection;

public class Main {
    public static void main(String[] args) {
        RedisClient redisClient = RedisClient.create("redis://localhost:6379");

        try (StatefulRedisConnection<String, String> connection = redisClient.connect()) {
            RedisReactiveCommands<String, String> reactiveCommands = connection.reactive();

            // Reactively store & retrieve a simple string
            reactiveCommands.set("foo", "bar").block();
            reactiveCommands.get("foo").doOnNext(System.out::println).block(); // prints bar

            // Reactively store key-value pairs in a hash directly
            Map<String, String> hash = new HashMap<>();
            hash.put("name", "John");
            hash.put("surname", "Smith");
            hash.put("company", "Redis");
            hash.put("age", "29");

            reactiveCommands.hset("user-session:124", hash).then(
                    reactiveCommands.hgetall("user-session:124")
                            .collectMap(KeyValue::getKey, KeyValue::getValue)
                            .doOnNext(System.out::println))
                    .block();
            // Prints: {surname=Smith, name=John, company=Redis, age=29}
        } finally {
            redisClient.shutdown();
        }
    }
}
```

Learn more about the reactive Lettuce API in [the reference guide](https://lettuce.io/core/release/reference/index.html#reactive-api).

### Redis Cluster connection

```java
import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;

// ...

RedisURI redisUri = RedisURI.Builder.redis("localhost").withPassword("authentication").build();

RedisClusterClient clusterClient = RedisClusterClient.create(redisUri);
StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
RedisAdvancedClusterAsyncCommands<String, String> commands = connection.async();

// ...
connection.close();
clusterClient.shutdown();
```

### TLS connection

When you deploy your application, use TLS and follow the [Redis security guidelines](/docs/management/security/).

```java
RedisURI redisUri = RedisURI.Builder.redis("localhost")
    .withSsl(true)
    .withPassword("secret!") // use your Redis password
    .build();

RedisClient client = RedisClient.create(redisUri);
```

## Connection Management in Lettuce

Lettuce uses `ClientResources` for efficient management of shared resources like event loop groups and thread pools. For connection pooling, Lettuce leverages `RedisClient` or `RedisClusterClient`, which can handle multiple concurrent connections efficiently.

A typical approach with Lettuce is to create a single `RedisClient` instance and reuse it to establish connections to your Redis server(s). These connections are multiplexed; that is, multiple commands can be run concurrently over a single or a small set of connections, making explicit pooling less critical.
Lettuce provides pool config to be used with Lettuce asynchronous connection methods.

```java
package org.example;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.TransactionResult;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.support.*;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class Pool {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create();

        String host = "localhost";
        int port = 6379;

        CompletionStage<AsyncPool<StatefulRedisConnection<String, String>>> poolFuture =
                AsyncConnectionPoolSupport.createBoundedObjectPoolAsync(
                        () -> client.connectAsync(StringCodec.UTF8, RedisURI.create(host, port)),
                        BoundedPoolConfig.create());

        // await poolFuture initialization to avoid NoSuchElementException: Pool exhausted when starting your application
        AsyncPool<StatefulRedisConnection<String, String>> pool = poolFuture.toCompletableFuture().join();

        // execute work
        CompletableFuture<TransactionResult> transactionResult = pool.acquire()
                .thenCompose(connection -> {
                    RedisAsyncCommands<String, String> async = connection.async();

                    async.multi();
                    async.set("key", "value");
                    async.set("key2", "value2");
                    System.out.println("Executed commands in pipeline");
                    return async.exec().whenComplete((s, throwable) -> pool.release(connection));
                });
        transactionResult.join();

        // terminating
        pool.closeAsync();

        // after pool completion
        client.shutdownAsync();
    }
}
```

In this setup, `LettuceConnectionFactory` is a custom class you would need to implement, adhering to Apache Commons Pool's `PooledObjectFactory` interface, to manage lifecycle events of pooled `StatefulRedisConnection` objects.

## DNS cache and Redis

When you connect to a Redis database with multiple endpoints, such as Redis Enterprise Active-Active, it's recommended to disable the JVM's DNS cache to load-balance requests across multiple endpoints.
You can do this in your application's code with the following snippet:

```java
java.security.Security.setProperty("networkaddress.cache.ttl","0");
java.security.Security.setProperty("networkaddress.cache.negative.ttl", "0");
```

## Learn more

- [Lettuce reference documentation](https://lettuce.io/docs/)
- [Redis commands](https://redis.io/commands)
- [Project Reactor](https://projectreactor.io/)
`SUBSCRIBE`, `UNSUBSCRIBE` and `PUBLISH` implement the [Publish/Subscribe messaging paradigm](http://en.wikipedia.org/wiki/Publish/subscribe) where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. Subscribers express interest in one or more channels and only receive messages that are of interest, without knowledge of what (if any) publishers there are. This decoupling of publishers and subscribers allows for greater scalability and a more dynamic network topology.

For instance, to subscribe to channels "channel11" and "ch:00" the client issues a `SUBSCRIBE` providing the names of the channels:

```bash
SUBSCRIBE channel11 ch:00
```

Messages sent by other clients to these channels will be pushed by Redis to all the subscribed clients. Subscribers receive the messages in the order that the messages are published.

A client subscribed to one or more channels shouldn't issue commands, although it can `SUBSCRIBE` and `UNSUBSCRIBE` to and from other channels. The replies to subscription and unsubscription operations are sent in the form of messages so that the client can just read a coherent stream of messages where the first element indicates the type of message. The commands that are allowed in the context of a subscribed RESP2 client are:

* `PING`
* `PSUBSCRIBE`
* `PUNSUBSCRIBE`
* `QUIT`
* `RESET`
* `SSUBSCRIBE`
* `SUBSCRIBE`
* `SUNSUBSCRIBE`
* `UNSUBSCRIBE`

However, if RESP3 is used (see `HELLO`), a client can issue any commands while in the subscribed state.

Please note that when using `redis-cli`, in subscribed mode commands such as `UNSUBSCRIBE` and `PUNSUBSCRIBE` cannot be used because `redis-cli` will not accept any commands and can only quit the mode with `Ctrl-C`.

## Delivery semantics

Redis' Pub/Sub exhibits _at-most-once_ message delivery semantics.
As the name suggests, it means that a message will be delivered once if at all. Once the message is sent by the Redis server, there's no chance of it being sent again. If the subscriber is unable to handle the message (for example, due to an error or a network disconnect) the message is forever lost.

If your application requires stronger delivery guarantees, you may want to learn about [Redis Streams](/docs/data-types/streams-tutorial). Messages in streams are persisted, and support both _at-most-once_ as well as _at-least-once_ delivery semantics.

## Format of pushed messages

A message is an [array-reply](/topics/protocol#array-reply) with three elements.

The first element is the kind of message:

* `subscribe`: means that we successfully subscribed to the channel given as the second element in the reply. The third argument represents the number of channels we are currently subscribed to.
* `unsubscribe`: means that we successfully unsubscribed from the channel given as second element in the reply. The third argument represents the number of channels we are currently subscribed to. When the last argument is zero, we are no longer subscribed to any channel, and the client can issue any kind of Redis command as we are outside the Pub/Sub state.
* `message`: it is a message received as a result of a `PUBLISH` command issued by another client. The second element is the name of the originating channel, and the third argument is the actual message payload.

## Database & Scoping

Pub/Sub has no relation to the key space. It was made to not interfere with it on any level, including database numbers. Publishing on db 10 will be heard by a subscriber on db 1. If you need scoping of some kind, prefix the channels with the name of the environment (test, staging, production...).

## Wire protocol example

```
SUBSCRIBE first second
*3
$9
subscribe
$5
first
:1
*3
$9
subscribe
$6
second
:2
```

At this point, from another client we issue a `PUBLISH` operation against the channel named `second`:

```
> PUBLISH second Hello
```

This is what the first client receives:

```
*3
$7
message
$6
second
$5
Hello
```

Now the client unsubscribes itself from all the channels using the `UNSUBSCRIBE` command without additional arguments:

```
UNSUBSCRIBE
*3
$11
unsubscribe
$6
second
:1
*3
$11
unsubscribe
$5
first
:0
```

## Pattern-matching subscriptions

The Redis Pub/Sub implementation supports pattern matching. Clients may subscribe to glob-style patterns to receive all the messages sent to channel names matching a given pattern.

For instance:

```
PSUBSCRIBE news.*
```

Will receive all the messages sent to the channel `news.art.figurative`, `news.music.jazz`, etc. All the glob-style patterns are valid, so multiple wildcards are supported.

```
PUNSUBSCRIBE news.*
```

Will then unsubscribe the client from that pattern. No other subscriptions will be affected by this call.

Messages received as a result of pattern matching are sent in a different format:

* The type of the message is `pmessage`: it is a message received as a result of a `PUBLISH` command issued by another client, matching a pattern-matching subscription. The second element is the original pattern matched, the third element is the name of the originating channel, and the last element is the actual message payload.

Similarly to `SUBSCRIBE` and `UNSUBSCRIBE`, `PSUBSCRIBE` and `PUNSUBSCRIBE` commands are acknowledged by the system sending a message of type `psubscribe` and `punsubscribe` using the same format as the `subscribe` and `unsubscribe` message format.
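The pushed-message frames shown above can be decoded mechanically. The following is a minimal illustrative sketch for the three-element push replies in this document (arrays of bulk strings and integers only), not a complete RESP parser:

```javascript
// Decode a RESP array frame such as "*3\r\n$7\r\nmessage\r\n$6\r\nsecond\r\n$5\r\nHello\r\n".
// Handles only the reply shapes shown above: bulk strings ($) and integers (:).
function parsePush(frame) {
  const lines = frame.split('\r\n').filter(line => line.length > 0);
  let i = 0;
  const count = Number(lines[i++].slice(1)); // "*3" -> 3 elements follow
  const elements = [];
  while (elements.length < count) {
    const header = lines[i++];
    if (header[0] === '$') elements.push(lines[i++]);                   // bulk string payload
    else if (header[0] === ':') elements.push(Number(header.slice(1))); // integer
  }
  return elements;
}

console.log(parsePush('*3\r\n$7\r\nmessage\r\n$6\r\nsecond\r\n$5\r\nHello\r\n'));
// [ 'message', 'second', 'Hello' ]
```

The same decoder reads the subscription acknowledgements, where the last element is the `:N` subscription count rather than a payload.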
## Messages matching both a pattern and a channel subscription

A client may receive a single message multiple times if it's subscribed to multiple patterns matching a published message, or if it is subscribed to both patterns and channels matching the message. This is shown by the following example:

```
SUBSCRIBE foo
PSUBSCRIBE f*
```

In the above example, if a message is sent to channel `foo`, the client will receive two messages: one of type `message` and one of type `pmessage`.

## The meaning of the subscription count with pattern matching

In `subscribe`, `unsubscribe`, `psubscribe` and `punsubscribe` message types, the last argument is the count of subscriptions still active. This number is the total number of channels and patterns the client is still subscribed to. So the client will exit the Pub/Sub state only when this count drops to zero as a result of unsubscribing from all the channels and patterns.

## Sharded Pub/Sub

From Redis 7.0, sharded Pub/Sub is introduced, in which shard channels are assigned to slots by the same algorithm used to assign keys to slots. A shard message must be sent to a node that owns the slot the shard channel is hashed to. The cluster makes sure the published shard messages are forwarded to all nodes in the shard, so clients can subscribe to a shard channel by connecting to either the master responsible for the slot, or to any of its replicas. `SSUBSCRIBE`, `SUNSUBSCRIBE` and `SPUBLISH` are used to implement sharded Pub/Sub.

Sharded Pub/Sub helps to scale the usage of Pub/Sub in cluster mode. It restricts the propagation of messages to be within the shard of a cluster. Hence, the amount of data passing through the cluster bus is limited in comparison to global Pub/Sub, where each message propagates
to each node in the cluster. This allows users to horizontally scale the Pub/Sub usage by adding more shards.

## Programming example

Pieter Noordhuis provided a great example using EventMachine and Redis to create [a multi user high performance web chat](https://gist.github.com/pietern/348262).

## Client library implementation hints

Because all the messages received contain the original subscription causing the message delivery (the channel in the case of the `message` type, and the original pattern in the case of the `pmessage` type), client libraries may bind the original subscription to callbacks (that can be anonymous functions, blocks, or function pointers) using a hash table. When a message is received, an O(1) lookup can be done to deliver the message to the registered callback.
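The callback-binding hint above can be sketched in a few lines (plain Python, no Redis client; all names are illustrative): one dictionary per subscription kind makes dispatching an incoming `message` or `pmessage` an O(1) lookup on the channel or pattern that caused the delivery:

```python
# Toy dispatcher modeling the client-library hint: bind subscriptions to
# callbacks in hash tables, then deliver each pushed message with one lookup.
received = []

channel_callbacks = {}   # channel name -> callback
pattern_callbacks = {}   # glob pattern -> callback

def subscribe(channel, cb):
    channel_callbacks[channel] = cb

def psubscribe(pattern, cb):
    pattern_callbacks[pattern] = cb

def dispatch(push):
    # `push` mimics the array Redis pushes to a subscribed client.
    kind = push[0]
    if kind == "message":
        _, channel, payload = push
        channel_callbacks[channel](channel, payload)
    elif kind == "pmessage":
        _, pattern, channel, payload = push
        pattern_callbacks[pattern](channel, payload)

subscribe("second", lambda ch, msg: received.append(("sub", ch, msg)))
psubscribe("news.*", lambda ch, msg: received.append(("psub", ch, msg)))

dispatch(["message", "second", "Hello"])
dispatch(["pmessage", "news.*", "news.music.jazz", "Miles"])
print(received)  # [('sub', 'second', 'Hello'), ('psub', 'news.music.jazz', 'Miles')]
```

Real client libraries layer connection handling and reply parsing on top, but the dispatch structure is typically exactly this.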
Redis Transactions allow the execution of a group of commands in a single step. They are centered around the commands `MULTI`, `EXEC`, `DISCARD` and `WATCH`. Redis Transactions make two important guarantees:

* All the commands in a transaction are serialized and executed sequentially. A request sent by another client will never be served **in the middle** of the execution of a Redis Transaction. This guarantees that the commands are executed as a single isolated operation.
* The `EXEC` command triggers the execution of all the commands in the transaction, so if a client loses the connection to the server in the context of a transaction before calling the `EXEC` command, none of the operations are performed; instead, if the `EXEC` command is called, all the operations are performed.

When using the [append-only file](/topics/persistence#append-only-file), Redis makes sure to use a single write(2) syscall to write the transaction on disk. However, if the Redis server crashes or is killed by the system administrator in some hard way, it is possible that only a partial number of operations are registered. Redis will detect this condition at restart, and will exit with an error. Using the `redis-check-aof` tool it is possible to fix the append-only file, removing the partial transaction so that the server can start again.

Starting with version 2.2, Redis allows for an extra guarantee to the above two, in the form of optimistic locking in a way very similar to a check-and-set (CAS) operation. This is documented [later](#cas) on this page.

## Usage

A Redis Transaction is entered using the `MULTI` command. The command always replies with `OK`. At this point the user can issue multiple commands. Instead of executing these commands, Redis will queue them. All the commands are executed once `EXEC` is called. Calling `DISCARD` instead will flush the transaction queue and exit the transaction.

The following example increments keys `foo` and `bar` atomically.
```
> MULTI
OK
> INCR foo
QUEUED
> INCR bar
QUEUED
> EXEC
1) (integer) 1
2) (integer) 1
```

As is clear from the session above, `EXEC` returns an array of replies, where every element is the reply of a single command in the transaction, in the same order the commands were issued.

When a Redis connection is in the context of a `MULTI` request, all commands will reply with the string `QUEUED` (sent as a Status Reply from the point of view of the Redis protocol). A queued command is simply scheduled for execution when `EXEC` is called.

## Errors inside a transaction

During a transaction it is possible to encounter two kinds of command errors:

* A command may fail to be queued, so there may be an error before `EXEC` is called. For instance the command may be syntactically wrong (wrong number of arguments, wrong command name, ...), or there may be some critical condition like an out of memory condition (if the server is configured to have a memory limit using the `maxmemory` directive).
* A command may fail *after* `EXEC` is called, for instance because we performed an operation against a key with the wrong value (like calling a list operation against a string value).

Starting with Redis 2.6.5, the server will detect an error during the accumulation of commands. It will then refuse to execute the transaction, returning an error during `EXEC` and discarding the transaction.

> **Note for Redis < 2.6.5:** Prior to Redis 2.6.5, clients needed to detect errors occurring prior to `EXEC` by checking the return value of the queued command: if the command replies with QUEUED it was queued correctly,
otherwise Redis returns an error. If there is an error while queueing a command, most clients will abort and discard the transaction. Otherwise, if the client elected to proceed with the transaction, the `EXEC` command would execute all commands queued successfully regardless of previous errors.

Errors happening *after* `EXEC` instead are not handled in a special way: all the other commands will be executed even if some command fails during the transaction.

This is more clear on the protocol level. In the following example one command will fail when executed even if the syntax is right:

```
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
MULTI
+OK
SET a abc
+QUEUED
LPOP a
+QUEUED
EXEC
*2
+OK
-WRONGTYPE Operation against a key holding the wrong kind of value
```

`EXEC` returned a two-element [bulk string reply](/topics/protocol#bulk-string-reply) where one element is an `OK` code and the other an error reply. It's up to the client library to find a sensible way to provide the error to the user.

It's important to note that **even when a command fails, all the other commands in the queue are processed** – Redis will _not_ stop the processing of commands.

Another example, again using the wire protocol with `telnet`, shows how syntax errors are reported ASAP instead:

```
MULTI
+OK
INCR a b c
-ERR wrong number of arguments for 'incr' command
```

This time, due to the syntax error, the bad `INCR` command is not queued at all.

## What about rollbacks?

Redis does not support rollbacks of transactions, since supporting rollbacks would have a significant impact on the simplicity and performance of Redis.
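The execution-time error semantics — a failing command does not abort the rest of the queue, and there is no rollback — can be modeled with a toy queue (plain Python; this is an illustrative sketch of the semantics, not the Redis implementation, and all names are made up):

```python
# Toy model of MULTI/EXEC semantics: commands are queued, then executed in
# order at EXEC time; a command that errors yields an error reply but does
# not stop or roll back the others.
class ToyTransaction:
    def __init__(self):
        self.queue = []

    def queue_command(self, fn, *args):
        self.queue.append((fn, args))
        return "QUEUED"

    def exec(self):
        replies = []
        for fn, args in self.queue:
            try:
                replies.append(fn(*args))
            except Exception as e:
                # Like Redis, record the error reply and keep processing.
                replies.append(f"-ERR {e}")
        self.queue.clear()
        return replies

store = {}

def set_cmd(key, value):
    store[key] = value
    return "+OK"

def lpop_cmd(key):
    if not isinstance(store.get(key), list):
        raise TypeError("WRONGTYPE Operation against a key holding the wrong kind of value")
    return store[key].pop(0)

tx = ToyTransaction()
tx.queue_command(set_cmd, "a", "abc")
tx.queue_command(lpop_cmd, "a")
print(tx.exec())  # first reply is '+OK', second is an error; both commands ran
```

Note how this mirrors the telnet session above: the `SET` succeeds and stays applied even though the following `LPOP` fails.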
## Discarding the command queue

`DISCARD` can be used in order to abort a transaction. In this case, no commands are executed and the state of the connection is restored to normal.

```
> SET foo 1
OK
> MULTI
OK
> INCR foo
QUEUED
> DISCARD
OK
> GET foo
"1"
```

## Optimistic locking using check-and-set

`WATCH` is used to provide a check-and-set (CAS) behavior to Redis transactions.

`WATCH`ed keys are monitored in order to detect changes against them. If at least one watched key is modified before the `EXEC` command, the whole transaction aborts, and `EXEC` returns a [Null reply](/topics/protocol#nil-reply) to notify that the transaction failed.

For example, imagine we have the need to atomically increment the value of a key by 1 (let's suppose Redis doesn't have `INCR`). The first try may be the following:

```
val = GET mykey
val = val + 1
SET mykey $val
```

This will work reliably only if we have a single client performing the operation at a given time. If multiple clients try to increment the key at about the same time there will be a race condition. For instance, clients A and B will read the old value, for instance, 10. The value will be incremented to 11 by both the clients, and finally `SET` as the value of the key. So the final value will be 11 instead of 12.

Thanks to `WATCH` we are able to model the problem very well:

```
WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC
```

Using the above code, if there are race conditions and another client modifies the result of `val`
in the time between our call to `WATCH` and our call to `EXEC`, the transaction will fail. We just have to repeat the operation hoping this time we'll not get a new race. This form of locking is called _optimistic locking_. In many use cases, multiple clients will be accessing different keys, so collisions are unlikely – usually there's no need to repeat the operation.

## WATCH explained

So what is `WATCH` really about? It is a command that will make the `EXEC` conditional: we are asking Redis to perform the transaction only if none of the `WATCH`ed keys were modified. This includes modifications made by the client, like write commands, and by Redis itself, like expiration or eviction. If keys were modified between when they were `WATCH`ed and when the `EXEC` was received, the entire transaction will be aborted instead.

**NOTE**

* In Redis versions before 6.0.9, an expired key would not cause a transaction to be aborted. [More on this](https://github.com/redis/redis/pull/7920)
* Commands within a transaction won't trigger the `WATCH` condition since they are only queued until the `EXEC` is sent.

`WATCH` can be called multiple times. Simply, all the `WATCH` calls will have the effect of watching for changes starting from the call, up to the moment `EXEC` is called. You can also send any number of keys to a single `WATCH` call.

When `EXEC` is called, all keys are `UNWATCH`ed, regardless of whether the transaction was aborted or not. Also, when a client connection is closed, everything gets `UNWATCH`ed.

It is also possible to use the `UNWATCH` command (without arguments) in order to flush all the watched keys.
Sometimes this is useful, as we may optimistically lock a few keys, since possibly we need to perform a transaction to alter those keys, but after reading the current content of the keys we don't want to proceed. When this happens we just call `UNWATCH` so that the connection can already be used freely for new transactions.

### Using WATCH to implement ZPOP

A good example to illustrate how `WATCH` can be used to create new atomic operations otherwise not supported by Redis is to implement ZPOP (`ZPOPMIN`, `ZPOPMAX` and their blocking variants have only been added in version 5.0), that is, a command that pops the element with the lower score from a sorted set in an atomic way. This is the simplest implementation:

```
WATCH zset
element = ZRANGE zset 0 0
MULTI
ZREM zset element
EXEC
```

If `EXEC` fails (i.e. returns a [Null reply](/topics/protocol#nil-reply)) we just repeat the operation.

## Redis scripting and transactions

Something else to consider for transaction-like operations in Redis are [Redis scripts](/commands/eval), which are transactional. Everything you can do with a Redis Transaction, you can also do with a script, and usually the script will be both simpler and faster.
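The WATCH/MULTI/EXEC retry loop can be modeled without a server (plain Python; a per-key version counter stands in for WATCH's modification tracking, and the helper names are made up for illustration):

```python
# Toy model of optimistic locking: "EXEC" succeeds only if the watched key's
# version is unchanged since we read it; on conflict, the caller retries.
class ToyStore:
    def __init__(self):
        self.data = {}
        self.version = {}

    def get(self, key):
        return self.data.get(key, 0)

    def set(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

    def cas_set(self, key, value, watched_version):
        # EXEC-like step: apply only if the key was not touched meanwhile.
        if self.version.get(key, 0) != watched_version:
            return None  # plays the role of EXEC's Null reply
        self.set(key, value)
        return "OK"

def optimistic_incr(store, key):
    while True:                        # repeat until no race is detected
        v = store.version.get(key, 0)  # WATCH mykey
        val = store.get(key) + 1       # val = GET mykey; val = val + 1
        if store.cas_set(key, val, v): # MULTI / SET mykey $val / EXEC
            return val

store = ToyStore()
store.set("mykey", 10)
print(optimistic_incr(store, "mykey"))  # 11
```

With a real client the structure is the same: `WATCH`, read, `MULTI`, write, `EXEC`, and retry the whole sequence whenever `EXEC` returns a Null reply.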
Redis Functions is an API for managing code to be executed on the server. This feature, which became available in Redis 7, supersedes the use of [EVAL](/docs/manual/programmability/eval-intro) in prior versions of Redis.

## Prologue (or, what's wrong with Eval Scripts?)

Prior versions of Redis made scripting available only via the `EVAL` command, which allows a Lua script to be sent for execution by the server. The core use case for [Eval Scripts](/topics/eval-intro) is executing part of your application logic inside Redis, efficiently and atomically. Such a script can perform conditional updates across multiple keys, possibly combining several different data types.

Using `EVAL` requires that the application sends the entire script for execution every time. Because this results in network and script compilation overheads, Redis provides an optimization in the form of the `EVALSHA` command. By first calling `SCRIPT LOAD` to obtain the script's SHA1, the application can invoke it repeatedly afterward with its digest alone.

By design, Redis only caches the loaded scripts. That means that the script cache can become lost at any time, such as after calling `SCRIPT FLUSH`, after restarting the server, or when failing over to a replica. The application is responsible for reloading scripts during runtime if any are missing. The underlying assumption is that scripts are a part of the application and not maintained by the Redis server.

This approach suits many light-weight scripting use cases, but introduces several difficulties once an application becomes complex and relies more heavily on scripting, namely:

1. All client application instances must maintain a copy of all scripts. That means having some mechanism that applies script updates to all of the application's instances.
1. Calling cached scripts within the context of a [transaction](/topics/transactions) increases the probability of the transaction failing because of a missing script.
Being more likely to fail makes using cached scripts as building blocks of workflows less attractive.
1. SHA1 digests are meaningless, making debugging the system extremely hard (e.g., in a `MONITOR` session).
1. When used naively, `EVAL` promotes an anti-pattern in which the client application renders verbatim scripts instead of responsibly using the [`!KEYS` and `ARGV` Lua APIs](/topics/lua-api#runtime-globals).
1. Because they are ephemeral, a script can't call another script. This makes sharing and reusing code between scripts nearly impossible, short of client-side preprocessing (see the first point).

To address these needs while avoiding breaking changes to already-established and well-liked ephemeral scripts, Redis v7.0 introduces Redis Functions.

## What are Redis Functions?

Redis functions are an evolutionary step from ephemeral scripting. Functions provide the same core functionality as scripts but are first-class software artifacts of the database. Redis manages functions as an integral part of the database and ensures their availability via data persistence and replication. Because functions are part of the database and therefore declared before use, applications aren't required to load them during runtime, nor risk aborted transactions. An application that uses functions depends only on their APIs rather than on the embedded script logic in the database.

Whereas ephemeral scripts are considered a part of the application's domain, functions extend the database server itself with user-provided logic. They can be used to expose a richer API composed of core Redis commands, similar to modules, developed once, loaded at startup, and used repeatedly by various applications / clients. Every function has a unique user-defined name, making it much easier to call and trace its execution.

The design of Redis Functions also attempts to demarcate between the programming language used for writing functions and their management by the server.
Lua, the only language interpreter that Redis presently supports as an embedded execution engine, is meant to be simple and easy to learn. However, the choice of Lua as a language still presents many Redis users with a challenge.

The Redis Functions feature makes no assumptions about the implementation's language. An execution engine that is part of the definition of the function handles running it. An engine can theoretically execute functions in any language as long as it respects several rules (such as the ability to terminate an executing function).

Presently, as noted above, Redis ships with a single embedded [Lua 5.1](/topics/lua-api) engine. There are plans to support additional engines in the future. Redis functions can use all of Lua's capabilities that are available to ephemeral scripts, with the only exception being the [Redis Lua scripts debugger](/topics/ldb).

Functions also simplify development by enabling code sharing. Every function belongs to a single library, and any given library can consist of multiple functions. The library's contents are immutable, and selective updates of its functions aren't allowed. Instead, libraries are updated as a whole with all of their functions together in one operation. This allows calling functions from other functions within the same library, or sharing code between functions by using common code in library-internal methods, which can also take language-native arguments.

Functions are intended to better support the use case of maintaining a consistent view of data entities through a logical schema, as mentioned above. As such, functions are stored alongside the data itself.
Functions are also persisted to the AOF file and replicated from master to replicas, so they are as durable as the data itself. When Redis is used as an ephemeral cache, additional mechanisms (described below) are required to make functions more durable.

Like all other operations in Redis, the execution of a function is atomic. A function's execution blocks all server activities during its entire time, similarly to the semantics of [transactions](/topics/transactions). These semantics mean that all of the script's effects either have yet to happen or had already happened. The blocking semantics of an executed function apply to all connected clients at all times. Because running a function blocks the Redis server, functions are meant to finish executing quickly, so you should avoid using long-running functions.

## Loading libraries and functions

Let's explore Redis Functions via some tangible examples and Lua snippets. At this point, if you're unfamiliar with Lua in general and specifically in Redis, you may benefit from reviewing some of the examples in the [Introduction to Eval Scripts](/topics/eval-intro) and [Lua API](/topics/lua-api) pages for a better grasp of the language.

Every Redis function belongs to a single library that's loaded to Redis. Loading a library to the database is done with the `FUNCTION LOAD` command. The command gets the library payload as input; the library payload must start with a Shebang statement that provides metadata about the library (like the engine to use and the library name). The Shebang format is:

```
#!<engine name> name=<library name>
```

Let's try loading an empty library:

```
redis> FUNCTION LOAD "#!lua name=mylib\n"
(error) ERR No functions registered
```

The error is expected, as there are no functions in the loaded library. Every library needs to include at least one registered function to load successfully. A registered function is named and acts as an entry point to the library.
When the target execution engine handles the `FUNCTION LOAD` command, it registers the library's functions. The Lua engine compiles and evaluates the library source code when loaded, and expects functions to be registered by calling the `redis.register_function()` API.

The following snippet demonstrates a simple library registering a single function named _knockknock_, returning a string reply:

```lua
#!lua name=mylib
redis.register_function(
  'knockknock',
  function() return 'Who\'s there?' end
)
```

In the example above, we provide two arguments about the function to Lua's `redis.register_function()` API: its registered name and a callback. We can load our library and use `FCALL` to call the registered function:

```
redis> FUNCTION LOAD "#!lua name=mylib\nredis.register_function('knockknock', function() return 'Who\\'s there?' end)"
mylib
redis> FCALL knockknock 0
"Who's there?"
```

Notice that the `FUNCTION LOAD` command returns the name of the loaded library; this name can later be used with `FUNCTION LIST` and `FUNCTION DELETE`.

We've provided `FCALL` with two arguments: the function's registered name and the numeric value `0`. This numeric value indicates the number of key names that follow it (the same way `EVAL` and `EVALSHA` work). We'll explain immediately how key names and additional arguments are available to the function. As this simple example doesn't involve keys, we simply use 0 for now.

## Input keys and regular arguments

Before we move to the following example, it is vital to understand the distinction Redis makes between arguments that are names of keys and those that aren't. While key names in Redis are just strings, unlike any other string values, these represent keys in the database.
The name of a key is a fundamental concept in Redis and is the basis for operating the Redis Cluster.

**Important:** To ensure the correct execution of Redis Functions, both in standalone and clustered deployments, all names of keys that a function accesses must be explicitly provided as input key arguments. Any input to the function that isn't the name of a key is a regular input argument.

Now, let's pretend that our application stores some of its data in Redis Hashes. We want an `HSET`-like way to set and update fields in said Hashes and store the last modification time in a new field named `_last_modified_`. We can implement a function to do all that.

Our function will call `TIME` to get the server's clock reading and update the target Hash with the new fields' values and the modification's timestamp. The function we'll implement accepts the following input arguments: the Hash's key name and the field-value pairs to update.

The Lua API for Redis Functions makes these inputs accessible as the first and second arguments to the function's callback. The callback's first argument is a Lua table populated with all key name inputs to the function. Similarly, the callback's second argument consists of all regular arguments.
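The split between key names and regular arguments follows the same numkeys convention as `EVAL`: the numeric argument says how many of the following arguments are key names, and everything after those is a regular argument. A minimal sketch of that split (plain Python; the helper name and error message are illustrative, not a Redis client API):

```python
# Sketch of the numkeys convention used by FCALL/EVAL: the flat argument
# list is split into the `keys` and `args` collections the callback sees.
def split_fcall_args(numkeys, argv):
    if numkeys < 0 or numkeys > len(argv):
        raise ValueError("numkeys can't be negative or exceed the argument count")
    keys = argv[:numkeys]   # becomes the callback's first argument (keys table)
    args = argv[numkeys:]   # becomes the callback's second argument (args table)
    return keys, args

# Models: FCALL my_hset 1 myhash myfield "some value"
keys, args = split_fcall_args(1, ["myhash", "myfield", "some value"])
print(keys)  # ['myhash']
print(args)  # ['myfield', 'some value']
```

This is why declaring the correct key count matters: in cluster mode, Redis uses exactly the declared keys to route and validate the call.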
The following is a possible implementation for our function and its library registration:

```lua
#!lua name=mylib

local function my_hset(keys, args)
  local hash = keys[1]
  local time = redis.call('TIME')[1]
  return redis.call('HSET', hash, '_last_modified_', time, unpack(args))
end

redis.register_function('my_hset', my_hset)
```

If we create a new file named _mylib.lua_ that consists of the library's definition, we can load it like so (without stripping the source code of helpful whitespaces):

```bash
$ cat mylib.lua | redis-cli -x FUNCTION LOAD REPLACE
```

We've added the `REPLACE` modifier to the call to `FUNCTION LOAD` to tell Redis that we
want to overwrite the existing library definition. Otherwise, we would have gotten an error from Redis complaining that the library already exists.

Now that the library's updated code is loaded to Redis, we can proceed and call our function:

```
redis> FCALL my_hset 1 myhash myfield "some value" another_field "another value"
(integer) 3
redis> HGETALL myhash
1) "_last_modified_"
2) "1640772721"
3) "myfield"
4) "some value"
5) "another_field"
6) "another value"
```

In this case, we had invoked `FCALL` with _1_ as the number of key name arguments. That means that the function's first input argument is a name of a key (and is therefore included in the callback's `keys` table). After that first argument, all following input arguments are considered regular arguments and constitute the `args` table passed to the callback as its second argument.

## Expanding the library

We can add more functions to our library to benefit our application. The additional metadata field we've added to the Hash shouldn't be included in responses when accessing the Hash's data. On the other hand, we do want to provide the means to obtain the modification timestamp for a given Hash key.

We'll add two new functions to our library to accomplish these objectives:

1. The `my_hgetall` Redis Function will return all fields and their respective values from a given Hash key name, excluding the metadata (i.e., the `_last_modified_` field).
1. The `my_hlastmodified` Redis Function will return the modification timestamp for a given Hash key name.
The library's source code could look something like the following:

```lua
#!lua name=mylib

local function my_hset(keys, args)
  local hash = keys[1]
  local time = redis.call('TIME')[1]
  return redis.call('HSET', hash, '_last_modified_', time, unpack(args))
end

local function my_hgetall(keys, args)
  redis.setresp(3)
  local hash = keys[1]
  local res = redis.call('HGETALL', hash)
  res['map']['_last_modified_'] = nil
  return res
end

local function my_hlastmodified(keys, args)
  local hash = keys[1]
  return redis.call('HGET', hash, '_last_modified_')
end

redis.register_function('my_hset', my_hset)
redis.register_function('my_hgetall', my_hgetall)
redis.register_function('my_hlastmodified', my_hlastmodified)
```

While all of the above should be straightforward, note that `my_hgetall` also calls [`redis.setresp(3)`](/topics/lua-api#redis.setresp). That means that the function expects [RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md) replies after calling `redis.call()`, which, unlike the default RESP2 protocol, provides dictionary (associative array) replies. Doing so allows the function to delete (or set to `nil`, as is the case with Lua tables) specific fields from the reply – in our case, the `_last_modified_` field.
Assuming you've saved the library's implementation in the _mylib.lua_ file, you can replace it with:

```bash
$ cat mylib.lua | redis-cli -x FUNCTION LOAD REPLACE
```

Once loaded, you can call the library's functions with `FCALL`:

```
redis> FCALL my_hgetall 1 myhash
1) "myfield"
2) "some value"
3) "another_field"
4) "another value"
redis> FCALL my_hlastmodified 1 myhash
"1640772721"
```

You can also get the library's details with the `FUNCTION LIST` command:

```
redis> FUNCTION LIST
1) 1) "library_name"
   2) "mylib"
   3) "engine"
   4) "LUA"
   5) "functions"
   6) 1) 1) "name"
         2) "my_hset"
         3) "description"
         4) (nil)
         5) "flags"
         6) (empty array)
      2) 1) "name"
         2) "my_hgetall"
         3) "description"
         4) (nil)
         5) "flags"
         6) (empty array)
      3) 1) "name"
         2) "my_hlastmodified"
         3) "description"
         4) (nil)
         5) "flags"
         6) (empty array)
```

You can see that it is easy to update our library with new capabilities.

## Reusing code in the library

On top of bundling functions together into database-managed software artifacts, libraries also facilitate code sharing.
can add to our library an error handling helper function that's called from other functions. The helper function `check_keys()` verifies that the input _keys_ table has a single key. Upon success it returns `nil`, otherwise it returns an [error reply](/topics/lua-api#redis.error_reply).

The updated library's source code would be:

```lua
#!lua name=mylib

local function check_keys(keys)
  local error = nil
  local nkeys = table.getn(keys)
  if nkeys == 0 then
    error = 'Hash key name not provided'
  elseif nkeys > 1 then
    error = 'Only one key name is allowed'
  end

  if error ~= nil then
    redis.log(redis.LOG_WARNING, error)
    return redis.error_reply(error)
  end
  return nil
end

local function my_hset(keys, args)
  local error = check_keys(keys)
  if error ~= nil then
    return error
  end

  local hash = keys[1]
  local time = redis.call('TIME')[1]
  return redis.call('HSET', hash, '_last_modified_', time, unpack(args))
end

local function my_hgetall(keys, args)
  local error = check_keys(keys)
  if error ~= nil then
    return error
  end

  redis.setresp(3)
  local hash = keys[1]
  local res = redis.call('HGETALL', hash)
  res['map']['_last_modified_'] = nil
  return res
end

local function my_hlastmodified(keys, args)
  local error = check_keys(keys)
  if error ~= nil then
    return error
  end

  local hash = keys[1]
  return redis.call('HGET', hash, '_last_modified_')
end

redis.register_function('my_hset', my_hset)
redis.register_function('my_hgetall', my_hgetall)
redis.register_function('my_hlastmodified', my_hlastmodified)
```

After you've replaced the library in Redis with the above, you can immediately try out the new error handling mechanism:

```
127.0.0.1:6379> FCALL my_hset 0 myhash nope nope
(error) Hash key name not provided
127.0.0.1:6379> FCALL my_hgetall 2 myhash anotherone
(error) Only one key name is allowed
```

And your Redis log file should have lines in it that are similar to:

```
...
20075:M 1 Jan 2022 16:53:57.688 # Hash key name not provided
20075:M 1 Jan 2022 16:54:01.309 # Only one key name is allowed
```

## Functions in cluster

As noted above, Redis automatically handles propagation of loaded functions to replicas. In a Redis Cluster, it is also necessary to load functions to all cluster nodes. This is not handled automatically by Redis Cluster and needs to be done by the cluster administrator (like module loading, configuration setting, etc.). As one of the goals of functions is to live separately from the client application, this should not be the responsibility of the Redis client library. Instead, `redis-cli --cluster-only-masters --cluster call host:port FUNCTION LOAD ...` can be used to execute the load command on all master nodes.

Also, note that `redis-cli --cluster add-node` automatically takes care to propagate the loaded functions from one of the existing nodes to the new node.

## Functions and ephemeral Redis instances

In some cases there may be a need to start a fresh Redis server with a set of functions pre-loaded. Common reasons for that could be:

* Starting Redis in a new environment
* Re-starting an ephemeral (cache-only) Redis instance that uses functions

In such cases, we need to make sure that the pre-loaded functions are available before Redis accepts inbound user connections and commands. To do that, it is possible to use `redis-cli --functions-rdb` to extract the functions from an existing server. This generates an RDB file that can be loaded by Redis at startup.

## Function flags

Redis needs to have some information about how a function is going to behave when executed, in order to properly enforce resource usage policies and maintain data consistency.
For example, Redis needs to know that a certain function is
read-only before permitting it to execute using `FCALL_RO` on a read-only replica. By default, Redis assumes that all functions may perform arbitrary read or write operations. Function flags make it possible to declare more specific function behavior at the time of registration. Let's see how this works.

In our previous example, we defined two functions that only read data. We can try executing them using `FCALL_RO` against a read-only replica.

```
redis > FCALL_RO my_hgetall 1 myhash
(error) ERR Can not execute a function with write flag using fcall_ro.
```

Redis returns this error because a function can, in theory, perform both read and write operations on the database. As a safeguard and by default, Redis assumes that the function does both, so it blocks its execution. The server will reply with this error in the following cases:

1. Executing a function with `FCALL` against a read-only replica.
2. Using `FCALL_RO` to execute a function.
3. A disk error was detected (Redis is unable to persist, so it rejects writes).

In these cases, you can add the `no-writes` flag to the function's registration, disabling the safeguard and allowing it to run. To register a function with flags, use the [named arguments](/topics/lua-api#redis.register_function_named_args) variant of `redis.register_function`.
The updated registration code snippet from the library looks like this:

```lua
redis.register_function('my_hset', my_hset)
redis.register_function{
  function_name='my_hgetall',
  callback=my_hgetall,
  flags={ 'no-writes' }
}
redis.register_function{
  function_name='my_hlastmodified',
  callback=my_hlastmodified,
  flags={ 'no-writes' }
}
```

Once we've replaced the library, Redis allows running both `my_hgetall` and `my_hlastmodified` with `FCALL_RO` against a read-only replica:

```
redis> FCALL_RO my_hgetall 1 myhash
1) "myfield"
2) "some value"
3) "another_field"
4) "another value"
redis> FCALL_RO my_hlastmodified 1 myhash
"1640772721"
```

For complete documentation of the available flags, please refer to [Script flags](/topics/lua-api#script_flags).
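The safeguard described above reduces to a simple decision rule: a function without the `no-writes` flag is assumed to write, so it is rejected whenever writes cannot be accepted. The following Python sketch models that decision; it is a conceptual illustration, not Redis's actual implementation, and the helper name is made up:

```python
def may_execute(flags, *, via_fcall_ro=False, on_readonly_replica=False,
                disk_error=False):
    """Model of the flag safeguard: functions declared 'no-writes'
    may always run; anything else is assumed to write and is
    rejected in the three cases listed above."""
    if 'no-writes' in flags:
        return True  # declared read-only: always allowed
    return not (via_fcall_ro or on_readonly_replica or disk_error)
```

In other words, registering with `flags={ 'no-writes' }` is what moves a function from the "assumed writer" bucket into the "safe everywhere" bucket.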
Starting with version 3.2 Redis includes a complete Lua debugger that can be used to make the task of writing complex Redis scripts much simpler.

The Redis Lua debugger, codenamed LDB, has the following important features:

* It uses a server-client model, so it's a remote debugger. The Redis server acts as the debugging server, while the default client is `redis-cli`. However, other clients can be developed by following the simple protocol implemented by the server.
* By default, every new debugging session is a forked session. It means that while the Redis Lua script is being debugged, the server does not block and is usable for development or in order to execute multiple debugging sessions in parallel. This also means that changes are **rolled back** after the script debugging session finishes, so it's possible to restart a new debugging session using exactly the same Redis data set as the previous debugging session.
* An alternative synchronous (non-forked) debugging model is available on demand, so that changes to the dataset can be retained. In this mode, the server blocks for the time the debugging session is active.
* Support for step-by-step execution.
* Support for static and dynamic breakpoints.
* Support for logging the debugged script into the debugger console.
* Inspection of Lua variables.
* Tracing of Redis commands executed by the script.
* Pretty printing of Redis and Lua values.
* Infinite loop and long execution detection, which simulates a breakpoint.

## Quick start

A simple way to get started with the Lua debugger is to watch this video introduction:

> Important note: please make sure to avoid debugging Lua scripts using your Redis production server. Use a development server instead. Also note that using the synchronous debugging mode (which is NOT the default) results in the Redis server blocking for all the time the debugging session lasts.

To start a new debugging session using `redis-cli` do the following:

1.
Create your script in some file with your preferred editor. Let's assume you are editing your Redis Lua script located at `/tmp/script.lua`.

2. Start a debugging session with:

       ./redis-cli --ldb --eval /tmp/script.lua

Note that with the `--eval` option of `redis-cli` you can pass key names and arguments to the script, separated by a comma, like in the following example:

```
./redis-cli --ldb --eval /tmp/script.lua mykey somekey , arg1 arg2
```

You'll enter a special mode where `redis-cli` no longer accepts its normal commands, but instead prints a help screen and passes the unmodified debugging commands directly to Redis. The only commands which are not passed to the Redis debugger are:

* `quit` -- this will terminate the debugging session. It's like removing all the breakpoints and using the `continue` debugging command. Moreover, the command will exit from `redis-cli`.
* `restart` -- the debugging session will restart from scratch, **reloading the new version of the script from the file**. So a normal debugging cycle involves modifying the script after some debugging, and calling `restart` in order to start debugging again with the new script changes.
* `help` -- this command is passed to the Redis Lua debugger, which will print a list of commands like the following:

```
lua debugger> help
Redis Lua debugger help:
[h]elp               Show this help.
[s]tep               Run current line and stop again.
[n]ext               Alias for step.
[c]ontinue           Run till next breakpoint.
[l]ist               List source code around current line.
[l]ist [line]        List source code around [line].
                     line = 0 means: current position.
[l]ist [line] [ctx]  In this form [ctx] specifies how many lines
                     to show before/after [line].
[w]hole              List all
                     source code. Alias for 'list 1 1000000'.
[p]rint              Show all the local variables.
[p]rint <var>        Show the value of the specified variable.
                     Can also show global vars KEYS and ARGV.
[b]reak              Show all breakpoints.
[b]reak <line>       Add a breakpoint to the specified line.
[b]reak -<line>      Remove breakpoint from the specified line.
[b]reak 0            Remove all breakpoints.
[t]race              Show a backtrace.
[e]val <code>        Execute some Lua code (in a different callframe).
[r]edis <cmd>        Execute a Redis command.
[m]axlen [len]       Trim logged Redis replies and Lua var dumps to len.
                     Specifying zero as <len> means unlimited.
[a]bort              Stop the execution of the script. In sync
                     mode dataset changes will be retained.

Debugger functions you can call from Lua scripts:
redis.debug()        Produce logs in the debugger console.
redis.breakpoint()   Stop execution as if there was a breakpoint in the
                     next line of code.
```

Note that when you start the debugger it will start in **stepping mode**. It will stop at the first line of the script that actually does something before executing it. From this point you usually call `step` in order to execute the line and go to the next line. While you step, Redis will show all the commands executed by the server, like in the following example:

```
* Stopped at 1, stop reason = step over
-> 1   redis.call('ping')
lua debugger> step
<redis> ping
<reply> "+PONG"
* Stopped at 2, stop reason = step over
```

The `<redis>` and `<reply>` lines show the command executed by the line just executed, and the reply from the server. Note that this happens only in stepping mode. If you use `continue` in order to execute the script till the next breakpoint, commands will not be dumped on the screen to prevent too much output.
## Termination of the debugging session

When the script terminates naturally, the debugging session ends and `redis-cli` returns to its normal non-debugging mode. You can restart the session using the `restart` command as usual.

Another way to stop a debugging session is just interrupting `redis-cli` manually by pressing `Ctrl+C`. Note that any event breaking the connection between `redis-cli` and the `redis-server` will also interrupt the debugging session.

All the forked debugging sessions are terminated when the server is shut down.

## Abbreviating debugging commands

Debugging can be a very repetitive task. For this reason every Redis debugger command starts with a different character, and you can use the single initial character in order to refer to the command. So for example instead of typing `step` you can just type `s`.

## Breakpoints

Adding and removing breakpoints is trivial, as described in the online help. Just use `b 1 2 3 4` to add a breakpoint at lines 1, 2, 3 and 4. The command `b 0` removes all the breakpoints. Selected breakpoints can be removed by giving as argument the line of the breakpoint to remove, prefixed by a minus sign. So, for example, `b -3` removes the breakpoint from line 3.

Note that adding breakpoints to lines that Lua never executes, like declarations of local variables or comments, will not work. The breakpoint will be added, but since this part of the script will never be executed, the program will never stop.

## Dynamic breakpoints

Using the `breakpoint` command it is possible to add breakpoints into specific lines. However sometimes we
want to stop the execution of the program only when something special happens. In order to do so, you can use the `redis.breakpoint()` function inside your Lua script. When called, it simulates a breakpoint in the next line that will be executed.

```
if counter > 10 then redis.breakpoint() end
```

This feature is extremely useful when debugging, so that we can avoid continuing the script execution manually multiple times until a given condition is encountered.

## Synchronous mode

As explained previously, by default LDB uses forked sessions with rollback of all the data changes operated by the script while it is being debugged. Determinism is usually a good thing to have during debugging, so that successive debugging sessions can be started without having to reset the database content to its original state.

However, for tracking certain bugs, you may want to retain the changes performed to the key space by each debugging session. When this is a good idea, you should start the debugger using a special option, `ldb-sync-mode`, in `redis-cli`:

```
./redis-cli --ldb-sync-mode --eval /tmp/script.lua
```

> Note: the Redis server will be unreachable during the debugging session in this mode, so use it with care.

In this special mode, the `abort` command can stop the script halfway through while retaining the changes made to the dataset. Note that this is different compared to ending the debugging session normally. If you just interrupt `redis-cli`, the script will be fully executed and then the session terminated. Instead, with `abort` you can interrupt the script execution in the middle and start a new debugging session if needed.
## Logging from scripts

The `redis.debug()` command is a powerful debugging facility that can be called inside the Redis Lua script in order to log things into the debug console:

```
lua debugger> list
-> 1   local a = {1,2,3}
   2   local b = false
   3   redis.debug(a,b)
lua debugger> continue
line 3: {1; 2; 3}, false
```

If the script is executed outside of a debugging session, `redis.debug()` has no effect at all. Note that the function accepts multiple arguments, which are separated by a comma and a space in the output.

Tables and nested tables are displayed correctly in order to make values simple to observe for the programmer debugging the script.

## Inspecting the program state with `print` and `eval`

While the `redis.debug()` function can be used in order to print values directly from within the Lua script, often it is useful to observe the local variables of a program while stepping or when stopped at a breakpoint. The `print` command does just that, and performs a lookup in the call frames starting from the current one back to the previous ones, up to the top level. This means that even if we are into a nested function inside a Lua script, we can still use `print foo` to look at the value of `foo` in the context of the calling function. When called without a variable name, `print` will print all variables and their respective values.

The `eval` command executes small pieces of Lua scripts **outside the context of the current call frame** (evaluating inside the context of the current call frame is not possible with the current Lua internals). However, you can use this command in order to test Lua functions.

```
lua debugger> e redis.sha1hex('foo')
"0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33"
```

## Debugging clients

LDB uses the client-server model where the Redis server acts as a debugging server that communicates using [RESP](/topics/protocol). While `redis-cli` is the default debug client, any [client](/clients) can be used for debugging as long as it meets one of the following conditions:

1. The client provides a native interface for setting the debug mode and controlling the debug session.
2. The client provides an interface for sending arbitrary commands over RESP.
3. The client allows sending raw messages to the Redis server.

For example, the [Redis plugin](https://redis.com/blog/zerobrane-studio-plugin-for-redis-lua-scripts) for [ZeroBrane Studio](http://studio.zerobrane.com/) integrates with LDB using [redis-lua](https://github.com/nrk/redis-lua). The following Lua code is a simplified example of how the plugin achieves that:

```lua
local redis = require 'redis'

-- add LDB's Continue command
redis.commands['ldbcontinue'] = redis.command('C')

-- script to be debugged
local script = [[
  local x, y = tonumber(ARGV[1]), tonumber(ARGV[2])
  local result = x * y
  return result
]]

local client = redis.connect('127.0.0.1', 6379)
client:script("DEBUG", "YES")
print(unpack(client:eval(script, 0, 6, 9)))
client:ldbcontinue()
```
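The third condition above, sending raw messages, only requires knowing how RESP frames a command: an array of bulk strings. The following Python sketch shows how a raw debugging client could encode a command for the wire. It is illustrative only and does not connect to any server:

```python
def encode_resp_command(*args):
    """Encode a command as a RESP array of bulk strings: the raw
    bytes a minimal debugging client would write to the socket.
    Each argument becomes a $<len>-prefixed bulk string."""
    out = [b'*%d\r\n' % len(args)]
    for arg in args:
        data = arg.encode('utf-8')
        out.append(b'$%d\r\n%s\r\n' % (len(data), data))
    return b''.join(out)

# The frame that turns on debug mode for the next EVAL:
frame = encode_resp_command('SCRIPT', 'DEBUG', 'YES')
```

A client that can write such frames and read the replies back has everything it needs to drive an LDB session over a plain socket.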
Redis lets users upload and execute Lua scripts on the server. Scripts can employ programmatic control structures and use most of the [commands](/commands) while executing to access the database. Because scripts execute in the server, reading and writing data from scripts is very efficient.

Redis guarantees the script's atomic execution. While executing the script, all server activities are blocked during its entire runtime. These semantics mean that all of the script's effects either have yet to happen or had already happened.

Scripting offers several properties that can be valuable in many cases. These include:

* Providing locality by executing logic where data lives. Data locality reduces overall latency and saves networking resources.
* Blocking semantics that ensure the script's atomic execution.
* Enabling the composition of simple capabilities that are either missing from Redis or are too niche to be a part of it.

Lua lets you run part of your application logic inside Redis. Such scripts can perform conditional updates across multiple keys, possibly combining several different data types atomically.

Scripts are executed in Redis by an embedded execution engine. Presently, Redis supports a single scripting engine, the [Lua 5.1](https://www.lua.org/) interpreter. Please refer to the [Redis Lua API Reference](/topics/lua-api) page for complete documentation.

Although the server executes them, Eval scripts are regarded as a part of the client-side application, which is why they're not named, versioned, or persisted. So all scripts may need to be reloaded by the application at any time if missing (after a server restart, fail-over to a replica, etc.). As of version 7.0, [Redis Functions](/topics/functions-intro) offer an alternative approach to programmability which allows the server itself to be extended with additional programmed logic.

## Getting started

We'll start scripting with Redis by using the `EVAL` command.
Here's our first example:

```
> EVAL "return 'Hello, scripting!'" 0
"Hello, scripting!"
```

In this example, `EVAL` takes two arguments. The first argument is a string that consists of the script's Lua source code. The script doesn't need to include any definitions of a Lua function. It is just a Lua program that will run in the Redis engine's context.

The second argument is the number of arguments that follow the script's body, starting from the third argument, that represent Redis key names. In this example, we used the value _0_ because we didn't provide the script with any arguments, whether the names of keys or not.

## Script parameterization

It is possible, although highly ill-advised, to have the application dynamically generate script source code per its needs. For example, the application could send these two entirely different, yet structurally identical, scripts:

```
redis> EVAL "return 'Hello'" 0
"Hello"
redis> EVAL "return 'Scripting!'" 0
"Scripting!"
```

Although this mode of operation isn't blocked by Redis, it is an anti-pattern due to script cache considerations (more on the topic below). Instead of having your application generate subtle variations of the same scripts, you can parametrize them and pass any arguments needed to execute them.

The following example demonstrates how to achieve the same effects as above, but via parameterization:

```
redis> EVAL "return ARGV[1]" 0 Hello
"Hello"
redis> EVAL "return ARGV[1]" 0 Parameterization!
"Parameterization!"
```

At this point, it is essential to understand the distinction Redis makes between input arguments that are names of keys and those that aren't. While key names in Redis are just strings, unlike any other string values, these represent keys in the database. The name of a key is a fundamental concept in Redis and is the basis for operating the Redis Cluster.
**Important:** to ensure the correct execution of scripts,
both in standalone and clustered deployments, all names of keys that a script accesses must be explicitly provided as input key arguments. The script **should only** access keys whose names are given as input arguments. Scripts **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database.

Any input to the function that isn't the name of a key is a regular input argument. In the example above, both _Hello_ and _Parameterization!_ are regular input arguments for the script. Because the script doesn't touch any keys, we use the numerical argument _0_ to specify there are no key name arguments.

The execution context makes arguments available to the script through the [_KEYS_](/topics/lua-api#the-keys-global-variable) and [_ARGV_](/topics/lua-api#the-argv-global-variable) global runtime variables. The _KEYS_ table is pre-populated with all key name arguments provided to the script before its execution, whereas the _ARGV_ table serves a similar purpose but for regular arguments.

The following attempts to demonstrate the distribution of input arguments between the script's _KEYS_ and _ARGV_ runtime global variables:

```
redis> EVAL "return { KEYS[1], KEYS[2], ARGV[1], ARGV[2], ARGV[3] }" 2 key1 key2 arg1 arg2 arg3
1) "key1"
2) "key2"
3) "arg1"
4) "arg2"
5) "arg3"
```

**Note:** as can be seen above, Lua's table arrays are returned as [RESP2 array replies](/topics/protocol#resp-arrays), so it is likely that your client's library will convert it to the native array data type in your programming language.
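The way `EVAL`'s trailing arguments are split into _KEYS_ and _ARGV_ boils down to a partition on the declared number of keys. This plain-Python sketch (an illustration, not the server's code) shows the rule the example above demonstrates:

```python
def split_eval_args(numkeys, *trailing):
    """Partition EVAL's trailing arguments: the first `numkeys`
    populate the KEYS table, the remainder populate ARGV."""
    if numkeys < 0 or numkeys > len(trailing):
        raise ValueError('invalid number of keys')
    return list(trailing[:numkeys]), list(trailing[numkeys:])
```

Applied to the command above, `split_eval_args(2, 'key1', 'key2', 'arg1', 'arg2', 'arg3')` yields KEYS `['key1', 'key2']` and ARGV `['arg1', 'arg2', 'arg3']`.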
Please refer to the rules that govern [data type conversion](/topics/lua-api#data-type-conversion) for more pertinent information.

## Interacting with Redis from a script

It is possible to call Redis commands from a Lua script either via [`redis.call()`](/topics/lua-api#redis.call) or [`redis.pcall()`](/topics/lua-api#redis.pcall). The two are nearly identical. Both execute a Redis command along with its provided arguments, if these represent a well-formed command. However, the difference between the two functions lies in the manner in which runtime errors (such as syntax errors, for example) are handled. Errors raised from calling the `redis.call()` function are returned directly to the client that had executed it. Conversely, errors encountered when calling the `redis.pcall()` function are returned to the script's execution context instead, for possible handling.

For example, consider the following:

```
> EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 foo bar
OK
```

The above script accepts one key name and one value as its input arguments. When executed, the script calls the `SET` command to set the input key, _foo_, with the string value "bar".

## Script cache

Until this point, we've used the `EVAL` command to run our script. Whenever we call `EVAL`, we also include the script's source code with the request. Repeatedly calling `EVAL` to execute the same set of parameterized scripts wastes both network bandwidth and also has some overheads in Redis. Naturally, saving on network and compute resources is key, so, instead, Redis provides a caching mechanism for scripts.

Every script you execute with `EVAL` is stored in a dedicated cache that the server keeps. The cache's contents are organized by the scripts' SHA1 digest sums, so the SHA1 digest sum of a script uniquely identifies it in the cache. You can verify this behavior by running `EVAL` and calling `INFO` afterward.
You'll notice that the _used_memory_scripts_eval_ and _number_of_cached_scripts_ metrics grow with every new script that's executed. As mentioned above, dynamically-generated scripts are an anti-pattern. Generating scripts during the
application's runtime may, and probably will, exhaust the host's memory resources for caching them. Instead, scripts should be as generic as possible and provide customized execution via their arguments.

A script is loaded to the server's cache by calling the `SCRIPT LOAD` command and providing its source code. The server doesn't execute the script, but instead just compiles and loads it to the server's cache. Once loaded, you can execute the cached script with the SHA1 digest returned from the server.

Here's an example of loading and then executing a cached script:

```
redis> SCRIPT LOAD "return 'Immabe a cached script'"
"c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f"
redis> EVALSHA c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f 0
"Immabe a cached script"
```

### Cache volatility

The Redis script cache is **always volatile**. It isn't considered as a part of the database and is **not persisted**. The cache may be cleared when the server restarts, during fail-over when a replica assumes the master role, or explicitly by `SCRIPT FLUSH`. That means that cached scripts are ephemeral, and the cache's contents can be lost at any time.

Applications that use scripts should always call `EVALSHA` to execute them. The server returns an error if the script's SHA1 digest is not in the cache. For example:

```
redis> EVALSHA ffffffffffffffffffffffffffffffffffffffff 0
(error) NOSCRIPT No matching script
```

In this case, the application should first load it with `SCRIPT LOAD` and then call `EVALSHA` once more to run the cached script by its SHA1 sum.
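Because the cache key is simply the SHA1 digest of the script's source, a client can compute it locally instead of asking the server. A sketch using Python's standard library, for illustration:

```python
import hashlib

def script_sha1(source: str) -> str:
    """Compute the SHA1 hex digest of a script's source code, the
    identifier Redis uses for the script cache; it should match
    what SCRIPT LOAD returns for the same source string."""
    return hashlib.sha1(source.encode('utf-8')).hexdigest()

# A 40-character lowercase hex string, usable directly with EVALSHA:
digest = script_sha1("return 'Immabe a cached script'")
```

This is why byte-for-byte identical scripts always map to the same cache entry, while even a one-character change produces an entirely different digest.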
Most of [Redis' clients](/clients) already provide utility APIs for doing that automatically. Please consult your client's documentation regarding the specific details.

### `!EVALSHA` in the context of pipelining

Special care should be given when executing `EVALSHA` in the context of a [pipelined request](/topics/pipelining). The commands in a pipelined request run in the order they are sent, but other clients' commands may be interleaved for execution between these. Because of that, the `NOSCRIPT` error can return from a pipelined request but can't be handled.

Therefore, a client library's implementation should revert to using plain `EVAL` of parameterized scripts in the context of a pipeline.

### Script cache semantics

During normal operation, an application's scripts are meant to stay indefinitely in the cache (that is, until the server is restarted or the cache is flushed). The underlying reasoning is that the script cache contents of a well-written application are unlikely to grow continuously. Even large applications that use hundreds of cached scripts shouldn't be an issue in terms of cache memory usage.

The only way to flush the script cache is by explicitly calling the `SCRIPT FLUSH` command. Running the command will _completely flush_ the scripts cache, removing all the scripts executed so far. Typically, this is only needed when the instance is going to be instantiated for another customer or application in a cloud environment. Also, as already mentioned, restarting a Redis instance flushes the non-persistent script cache. However, from the point of view of the Redis client, there are only two ways to make sure that a Redis instance was not restarted between two different commands:

* The connection we have with the server is persistent and was never closed so far.
* The client explicitly checks the `run_id` field in the `INFO` command to ensure the server was not restarted and is still the same process.
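The `run_id` check from the second bullet amounts to parsing the `INFO` reply's text and comparing the field across calls. A minimal sketch, using a hypothetical excerpt of an `INFO` reply (the `run_id` value below is made up):

```python
def parse_run_id(info_text: str) -> str:
    """Extract the run_id field from the text of an INFO reply."""
    for line in info_text.splitlines():
        if line.startswith("run_id:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("run_id not found in INFO reply")

# Hypothetical excerpt of an INFO reply (CRLF-delimited, as on the wire):
info = ("# Server\r\n"
        "redis_version:7.0.0\r\n"
        "run_id:2af84a2f7b1cbdfc4ba4a7c7f2a6b1d8f3e9c0aa\r\n")

# Comparing against the run_id seen earlier tells us whether the server
# restarted (and therefore whether its script cache may have been lost).
same_process = parse_run_id(info) == "2af84a2f7b1cbdfc4ba4a7c7f2a6b1d8f3e9c0aa"
```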
Practically speaking, it is much simpler for the client to assume that in the context of a given connection, cached scripts are guaranteed to be there unless the administrator explicitly invoked the `SCRIPT FLUSH` command. The fact that the user can count on Redis to retain cached scripts is semantically helpful in the context of pipelining.

## The `!SCRIPT` command

The Redis `SCRIPT` command provides several ways for controlling the scripting subsystem. These are:

* `SCRIPT FLUSH`: this command is the only way to force Redis to flush the scripts cache. It is most useful in environments where the same Redis instance is reassigned to different uses. It is also helpful for testing client libraries' implementations of the scripting feature.
* `SCRIPT EXISTS`: given one or more SHA1 digests as arguments, this command returns an array of _1_'s and _0_'s. _1_ means the specific SHA1 is recognized as a script already present in the scripting cache. _0_'s meaning is that a script with this SHA1 wasn't loaded before (or at least never since the latest call to `SCRIPT FLUSH`).
* `SCRIPT LOAD script`: this command registers the specified script in the Redis script cache. It is a useful command in all the contexts where we want to ensure that `EVALSHA` doesn't fail (for instance, in a pipeline or when called from a [`MULTI`/`EXEC` transaction](/topics/transactions)), without the need to execute the script.
* `SCRIPT KILL`: this command is the only way to interrupt a long-running script (a.k.a. a slow script), short of shutting down the server. A script is deemed as slow once its execution's duration exceeds the configured [maximum execution time](/topics/programmability#maximum-execution-time) threshold.
The `SCRIPT KILL` command can be used only with scripts that did not modify the dataset during their execution (since stopping a read-only script does not violate the scripting engine's guaranteed atomicity).

* `SCRIPT DEBUG`: controls use of the built-in [Redis Lua scripts debugger](/topics/ldb).

## Script replication

In standalone deployments, a single Redis instance called _master_ manages the entire database. A [clustered deployment](/topics/cluster-tutorial) has at least three masters managing the sharded database. Redis uses [replication](/topics/replication) to maintain one or more replicas, or exact copies, for any given master.

Because scripts can modify the data, Redis ensures all write operations performed by a script are also sent to replicas to maintain consistency. There are two conceptual approaches when it comes to script replication:

1. Verbatim replication: the master sends the script's source code to the replicas. Replicas then execute the script and apply the write effects. This mode can save on replication bandwidth in cases where short scripts generate many commands (for example, a _for_ loop). However, this replication mode means that replicas redo the same work done by the master, which is wasteful. More importantly, it also requires [all write scripts to be deterministic](#scripts-with-deterministic-writes).
2. Effects replication: only the script's data-modifying commands are replicated. Replicas then run the commands without executing any scripts. While potentially lengthier in terms of network traffic, this replication mode is deterministic by definition and therefore doesn't require special consideration.

Verbatim script replication was the only mode supported until Redis 3.2, in which effects replication was added. The _lua-replicate-commands_ configuration directive and [`redis.replicate_commands()`](/topics/lua-api#redis.replicate_commands) Lua API can be used to enable it.
In Redis 5.0, effects replication became the default mode. As of Redis 7.0, verbatim replication is no longer supported.
### Replicating commands instead of scripts

Starting with Redis 3.2, it is possible to select an alternative replication method. Instead of replicating whole scripts, we can replicate the write commands generated by the script. We call this **script effects replication**.

**Note:** starting with Redis 5.0, script effects replication is the default mode and does not need to be explicitly enabled.

In this replication mode, while Lua scripts are executed, Redis collects all the commands executed by the Lua scripting engine that actually modify the dataset. When the script execution finishes, the sequence of commands that the script generated are wrapped into a [`MULTI`/`EXEC` transaction](/topics/transactions) and are sent to the replicas and AOF.

This is useful in several ways depending on the use case:

* When the script is slow to compute, but the effects can be summarized by a few write commands, it is a shame to re-compute the script on the replicas or when reloading the AOF. In this case, it is much better to replicate just the effects of the script.
* When script effects replication is enabled, the restrictions on non-deterministic functions are removed. You can, for example, use the `TIME` or `SRANDMEMBER` commands inside your scripts freely at any place.
* The Lua PRNG in this mode is seeded randomly on every call.
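Conceptually, effects replication collects only the write commands a script executed and frames them for propagation. A toy model of that framing, assuming commands are represented as simple tuples (this is an illustration, not Redis' actual replication encoding):

```python
def frame_effects(commands):
    """Wrap a script's collected write commands in a MULTI/EXEC frame,
    the way effects replication sends them to replicas and the AOF
    (simplified illustration only)."""
    return [("MULTI",)] + list(commands) + [("EXEC",)]

# Suppose a script performed one read and two writes; only the writes
# are collected for replication:
writes = [("SET", "counter", "10"), ("EXPIRE", "counter", "60")]
frame = frame_effects(writes)
```

Replicas and the AOF then replay the framed commands directly, without re-running the script.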
Unless already enabled by the server's configuration or defaults (before Redis 7.0), you need to issue the following Lua command before the script performs a write:

```lua
redis.replicate_commands()
```

The [`redis.replicate_commands()`](/topics/lua-api#redis.replicate_commands) function returns _true_ if script effects replication was enabled; otherwise, if the function was called after the script already called a write command, it returns _false_, and normal whole script replication is used. This function is deprecated as of Redis 7.0, and while you can still call it, it will always succeed.

### Scripts with deterministic writes

**Note:** Starting with Redis 5.0, script replication is by default effect-based rather than verbatim. In Redis 7.0, verbatim script replication was removed entirely. The following section only applies to versions lower than Redis 7.0 when not using effect-based script replication.

An important part of scripting is writing scripts that only change the database in a deterministic way. Scripts executed in a Redis instance are, by default until version 5.0, propagated to replicas and to the AOF file by sending the script itself -- not the resulting commands. Since the script will be re-run on the remote host (or when reloading the AOF file), its changes to the database must be reproducible.

The reason for sending the script is that it is often much faster than sending the multiple commands that the script generates. If the client is sending many scripts to the master, converting the scripts into individual commands for the replica / AOF would result in too much bandwidth for the replication link or the Append Only File (and also too much CPU since dispatching a command received via the network is a lot more work for Redis compared to dispatching a command invoked by Lua scripts).

Normally replicating scripts instead of the effects of the scripts makes sense, however not in all the cases.
So starting with Redis 3.2, the scripting engine is able to, alternatively, replicate the sequence of write commands resulting from the script execution, instead of replicating the script itself.

In this section, we'll assume that scripts are replicated verbatim by sending the whole script. Let's call this replication mode **verbatim scripts replication**.

The main drawback with the _whole scripts replication_ approach is that scripts are required to have the following property: the script **always must** execute the same Redis _write_ commands with the same arguments given the same input data set. Operations performed by the script can't depend on any hidden (non-explicit) information or state that may change as the script execution proceeds or between different executions of the script. Nor can it depend on any external input from I/O devices.

Acts such as using the system time, calling Redis commands that return random values (e.g., `RANDOMKEY`), or using Lua's random number generator, could result in scripts that will not evaluate consistently.

To enforce the deterministic behavior of scripts, Redis does the following:

* Lua does not export commands to access the system time or other external states.
* Redis will block the script with an error if a script calls a Redis command able to alter the data set **after** a Redis _random_ command like `RANDOMKEY`, `SRANDMEMBER`, or `TIME`. That means that read-only scripts that don't modify the dataset can call those commands. Note that a _random command_ does not necessarily mean a command that uses random numbers: any non-deterministic command is considered a random command (the best example in this regard is the `TIME` command).
* In Redis version 4.0, commands that may return elements in random order, such as `SMEMBERS` (because Redis Sets are _unordered_), exhibit a different behavior when called from Lua, and undergo a silent lexicographical sorting filter before returning data to Lua scripts. So `redis.call("SMEMBERS",KEYS[1])` will always return the Set elements in the same order, while the same command invoked by normal clients may return different results even if the key contains exactly the same elements. However, starting with Redis 5.0, this ordering is no longer performed because replicating effects circumvents this type of non-determinism. In general, even when developing for Redis 4.0, never assume that certain commands in Lua will be ordered, but instead rely on the documentation of the original command you call to see the properties it provides.
* Lua's pseudo-random number generation function `math.random` is modified and always uses the same seed for every execution. This means that calling [`math.random`](/topics/lua-api#runtime-libraries) will always generate the same sequence of numbers every time a script is executed (unless `math.randomseed` is used).

All that said, you can still use commands that write and have random behavior with a simple trick. Imagine that you want to write a Redis script that will populate a list with N random integers.
The initial implementation in Ruby could look like this:

```ruby
require 'rubygems'
require 'redis'

r = Redis.new

RandomPushScript = <<EOF
    local i = tonumber(ARGV[1])
    local res
    while (i > 0) do
        res = redis.call('LPUSH',KEYS[1],math.random())
        i = i-1
    end
    return res
EOF

r.del(:mylist)
puts r.eval(RandomPushScript,[:mylist],[10,rand(2**32)])
```

Every time this code runs, the resulting list will have exactly the following elements:

```
redis> LRANGE mylist 0 -1
 1) "0.74509509873814"
 2) "0.87390407681181"
 3) "0.36876626981831"
 4) "0.6921941534114"
 5) "0.7857992587545"
 6) "0.57730350670279"
 7) "0.87046522734243"
 8) "0.09637165539729"
 9) "0.74990198051087"
10) "0.17082803611217"
```

To make the script both deterministic and still have it produce different random elements, we can add an extra argument to the script that's the seed to Lua's pseudo-random number generator.
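The idea of seeding the PRNG from a client-supplied argument can be sketched standalone in Python, with `random.Random` standing in for Lua's seeded `math.random` (no Redis involved; this only demonstrates the determinism property):

```python
import random

def random_push(n: int, seed: int) -> list:
    """Deterministically generate n pseudo-random values from a
    client-supplied seed, mirroring the seeded-script pattern."""
    rng = random.Random(seed)  # the seed arrives as a script "argument"
    return [rng.random() for _ in range(n)]

a = random_push(10, 1234)
b = random_push(10, 1234)
# Same arguments, same output: the "script" is deterministic, yet the
# client can vary the seed per invocation to get fresh random values.
```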
The new script is as follows:

```ruby
RandomPushScript = <<EOF
    local i = tonumber(ARGV[1])
    local res
    math.randomseed(tonumber(ARGV[2]))
    while (i > 0) do
        res = redis.call('LPUSH',KEYS[1],math.random())
        i = i-1
    end
    return res
EOF

r.del(:mylist)
puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32))
```

What we are doing here is sending the seed of the PRNG as one of the arguments. The script output will always be the same given the same arguments (our requirement), but we are changing one of the arguments at every invocation, generating the random seed client-side. The seed will be propagated as one of the arguments both in the replication link and in the Append Only File, guaranteeing that the same changes will be generated when the AOF is reloaded or when the replica processes the script.

Note: an important part of this behavior is that the PRNG that Redis implements as `math.random` and `math.randomseed` is guaranteed to have the same output regardless of the architecture of the system running Redis. 32-bit, 64-bit, big-endian and little-endian systems will all produce the same output.

## Debugging Eval scripts

Starting with Redis 3.2, Redis has support for native Lua debugging. The Redis Lua debugger is a remote debugger consisting of a server, which is Redis itself, and a client, which is by default [`redis-cli`](/topics/rediscli).

The Lua debugger is described in the [Lua scripts debugging](/topics/ldb) section of the Redis documentation.
## Execution under low memory conditions

When memory usage in Redis exceeds the `maxmemory` limit, the first write command encountered in the script that uses additional memory will cause the script to abort (unless [`redis.pcall`](/topics/lua-api#redis.pcall) was used).

However, an exception to the above is when the script's first write command does not use additional memory, as is the case with, for example, `DEL` and `LREM`. In this case, Redis will allow all commands in the script to run to ensure atomicity. If subsequent writes in the script consume additional memory, Redis' memory usage can exceed the threshold set by the `maxmemory` configuration directive.

Another scenario in which a script can cause memory usage to cross the `maxmemory` threshold is when the execution begins while Redis is slightly below `maxmemory`, so the script's first write command is allowed. As the script executes, subsequent write commands consume more memory, leading to the server using more RAM than the configured `maxmemory` directive.

In those scenarios, you should consider setting the `maxmemory-policy` configuration directive to any value other than `noeviction`. In addition, Lua scripts should be as fast as possible so that eviction can kick in between executions.

Note that you can change this behavior by using [flags](#eval-flags).

## Eval flags

Normally, when you run an Eval script, the server does not know how it accesses the database. By default, Redis assumes that all scripts read and write data. However, starting with Redis 7.0, there's a way to declare flags when creating a script in order to tell Redis how it should behave.
The way to do that is by using a Shebang statement on the first line of the script, like so:

```
#!lua flags=no-writes,allow-stale
local x = redis.call('get','x')
return x
```

Note that as soon as Redis sees the `#!` comment, it'll treat the script as if it declares flags. Even if no flags are defined, the script still has a different set of defaults compared to a script without a `#!` line. Another difference is
that scripts without `#!` can run commands that access keys belonging to different cluster hash slots, but ones with `#!` inherit the default flags, so they cannot.

Please refer to [Script flags](/topics/lua-api#script_flags) to learn about the various flags and their defaults.
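The shebang line is a small, easily parsed header. A simplified sketch of extracting the engine name and flags from it (this is an illustration of the format, not Redis' actual parser):

```python
def parse_shebang(script: str):
    """Parse an Eval script's optional '#!<engine> flags=a,b' first line.
    Returns (engine, set_of_flags); (None, set()) when there's no shebang."""
    first_line = script.split("\n", 1)[0]
    if not first_line.startswith("#!"):
        return None, set()
    parts = first_line[2:].split()
    engine = parts[0] if parts else ""
    flags = set()
    for part in parts[1:]:
        if part.startswith("flags="):
            flags = set(part[len("flags="):].split(","))
    return engine, flags

engine, flags = parse_shebang("#!lua flags=no-writes,allow-stale\nreturn 1")
```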
Redis includes an embedded [Lua 5.1](https://www.lua.org/) interpreter. The interpreter runs user-defined [ephemeral scripts](/topics/eval-intro) and [functions](/topics/functions-intro). Scripts run in a sandboxed context and can only access specific Lua packages. This page describes the packages and APIs available inside the execution's context.

## Sandbox context

The sandboxed Lua context attempts to prevent accidental misuse and reduce potential threats from the server's environment.

Scripts should never try to access the Redis server's underlying host systems. That includes the file system, network, and any other attempt to perform a system call other than those supported by the API. Scripts should operate solely on data stored in Redis and data provided as arguments to their execution.

### Global variables and functions

The sandboxed Lua execution context blocks the declaration of global variables and functions. The blocking of global variables is in place to ensure that scripts and functions don't attempt to maintain any runtime context other than the data stored in Redis. In the (somewhat uncommon) use case that a context needs to be maintained between executions, you should store the context in Redis' keyspace.

Redis will return a "Script attempted to create global variable 'my_global_variable'" error when trying to execute the following snippet:

```lua
my_global_variable = 'some value'
```

And similarly for the following global function declaration:

```lua
function my_global_function()
  -- Do something amazing
end
```

You'll also get a similar error when your script attempts to access any global variables that are undefined in the runtime's context:

```lua
-- The following will surely raise an error
return an_undefined_global_variable
```

Instead, all variable and function definitions are required to be declared as local. To do so, you'll need to prepend the [_local_](https://www.lua.org/manual/5.1/manual.html#2.4.7) keyword to your declarations.
For example, the following snippet will be considered perfectly valid by Redis:

```lua
local my_local_variable = 'some value'

local function my_local_function()
  -- Do something else, but equally amazing
end
```

**Note:** the sandbox attempts to prevent the use of globals. Using Lua's debugging functionality, or other approaches such as altering the meta table used for implementing the globals' protection, to circumvent the sandbox isn't hard. However, it is difficult to circumvent the protection by accident. If the user messes with the Lua global state, the consistency of AOF and replication can't be guaranteed. In other words, just don't do it.

### Imported Lua modules

Using imported Lua modules is not supported inside the sandboxed execution context. The sandboxed execution context prevents the loading of modules by disabling Lua's [`require` function](https://www.lua.org/pil/8.1.html).

The only libraries that Redis ships with and that you can use in scripts are listed under the [Runtime libraries](#runtime-libraries) section.

## Runtime globals

While the sandbox prevents users from declaring globals, the execution context is pre-populated with several of these.

### The _redis_ singleton

The _redis_ singleton is an object instance that's accessible from all scripts. It provides the API to interact with Redis from scripts. Its description follows [below](#redis_object).

### The _KEYS_ global variable

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: no

**Important:** to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a function accesses must be explicitly provided as input key arguments. The script **should only** access keys whose names are given as input arguments. Scripts **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database.
The _KEYS_ global variable is available only for [ephemeral scripts](/topics/eval-intro). It is pre-populated with all key name input arguments.

### The _ARGV_ global variable

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: no

The _ARGV_ global variable is available only in [ephemeral scripts](/topics/eval-intro). It is pre-populated with all regular input arguments.

## _redis_ object

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

The Redis Lua execution context always provides a singleton instance of an object named _redis_. The _redis_ instance enables the script to interact with the Redis server that's running it. Following is the API provided by the _redis_ object instance.

### `redis.call(command [,arg...])`

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

The `redis.call()` function calls a given Redis command and returns its reply. Its inputs are the command and arguments, and once called, it executes the command in Redis and returns the reply.

For example, we can call the `ECHO` command from a script and return its reply like so:

```lua
return redis.call('ECHO', 'Echo, echo... eco... o...')
```

If and when `redis.call()` triggers a runtime exception, the raw exception is raised back to the user as an error, automatically. Therefore, attempting to execute the following ephemeral script will fail and generate a runtime exception because `ECHO` accepts exactly one argument:

```
redis> EVAL "return redis.call('ECHO', 'Echo,', 'echo... ', 'eco... ', 'o...')" 0
(error) ERR Wrong number of args calling Redis command from script script: b0345693f4b77517a711221050e76d24ae60b7f7, on @user_script:1.
```

Note that the call can fail for various reasons; see [Execution under low memory conditions](/topics/eval-intro#execution-under-low-memory-conditions) and [Script flags](#script_flags).

To handle Redis runtime errors use `redis.pcall()` instead.

### `redis.pcall(command [,arg...])`

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

This function enables handling runtime errors raised by the Redis server. The `redis.pcall()` function behaves exactly like [`redis.call()`](#redis.call), except that it:

* Always returns a reply.
* Never throws a runtime exception, and returns in its stead a [`redis.error_reply`](#redis.error_reply) in case a runtime exception is thrown by the server.

The following demonstrates how to use `redis.pcall()` to intercept and handle runtime exceptions from within the context of an ephemeral script.

```lua
local reply = redis.pcall('ECHO', unpack(ARGV))
if reply['err'] ~= nil then
  -- Handle the error sometime, but for now just log it
  redis.log(redis.LOG_WARNING, reply['err'])
  reply['err'] = 'ERR Something is wrong, but no worries, everything is under control'
end
return reply
```

Evaluating this script with more than one argument will return:

```
redis> EVAL "..." 0 hello world
(error) ERR Something is wrong, but no worries, everything is under control
```

### `redis.error_reply(x)`

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

This is a helper function that returns an [error reply](/topics/protocol#resp-errors). The helper accepts a single string argument and returns a Lua table with the _err_ field set to that string.
The outcome of the following code is that _reply1_ and _reply2_ are identical for all intents and purposes:

```lua
local text = 'ERR My very special error'
local reply1 = { err = text }
local reply2 = redis.error_reply(text)
```

Therefore, both forms are valid as means for returning an error reply from scripts:

```
redis> EVAL "return { err = 'ERR My very special table error' }" 0
(error) ERR My very special table error
redis> EVAL "return redis.error_reply('ERR My very special reply error')" 0
(error) ERR My very special reply error
```
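The `pcall`-style convention of trading exceptions for a table with an `err` field is easy to mirror outside Lua. A hedged sketch in Python, with `echo_one_arg` as a hypothetical stand-in for a command that accepts exactly one argument:

```python
def pcall(fn, *args):
    """Mimic redis.pcall()'s convention: instead of raising, return a
    dict with an 'err' field (illustration, not a Redis client API)."""
    try:
        return fn(*args)
    except Exception as exc:
        return {"err": "ERR " + str(exc)}

def echo_one_arg(*args):
    # Hypothetical stand-in for ECHO, which accepts exactly one argument.
    if len(args) != 1:
        raise ValueError("wrong number of arguments")
    return args[0]

ok = pcall(echo_one_arg, "hello")             # the plain reply
bad = pcall(echo_one_arg, "hello", "world")   # an error table instead
```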
For returning Redis status replies refer to [`redis.status_reply()`](#redis.status_reply). Refer to the [Data type conversion](#data-type-conversion) for returning other response types.

**Note:** By convention, Redis uses the first word of an error string as a unique error code for specific errors, or `ERR` for general-purpose errors. Scripts are advised to follow this convention, as shown in the example above, but this is not mandatory.

### `redis.status_reply(x)`

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

This is a helper function that returns a [simple string reply](/topics/protocol#resp-simple-strings). "OK" is an example of a standard Redis status reply. The Lua API represents status replies as tables with a single field, _ok_, set with a simple status string.

The outcome of the following code is that _status1_ and _status2_ are identical for all intents and purposes:

```lua
local text = 'Frosty'
local status1 = { ok = text }
local status2 = redis.status_reply(text)
```

Therefore, both forms are valid as means for returning status replies from scripts:

```
redis> EVAL "return { ok = 'TICK' }" 0
TICK
redis> EVAL "return redis.status_reply('TOCK')" 0
TOCK
```

For returning Redis error replies refer to [`redis.error_reply()`](#redis.error_reply). Refer to the [Data type conversion](#data-type-conversion) for returning other response types.

### `redis.sha1hex(x)`

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

This function returns the SHA1 hexadecimal digest of its single string argument.
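The digest is plain SHA1, so it can be reproduced locally with any SHA1 implementation. For instance, the well-known digest of the empty string:

```python
import hashlib

# SHA1 of the empty string, the same value redis.sha1hex('') returns
digest = hashlib.sha1(b"").hexdigest()
print(digest)  # da39a3ee5e6b4b0d3255bfef95601890afd80709
```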
You can, for example, obtain the empty string's SHA1 digest:

```
redis> EVAL "return redis.sha1hex('')" 0
"da39a3ee5e6b4b0d3255bfef95601890afd80709"
```

### `redis.log(level, message)`

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

This function writes to the Redis server log. It expects two input arguments: the log level and a message. The message is a string to write to the log file. The log level can be one of these:

* `redis.LOG_DEBUG`
* `redis.LOG_VERBOSE`
* `redis.LOG_NOTICE`
* `redis.LOG_WARNING`

These levels map to the server's log levels. The log only records messages equal to or greater in level than the server's `loglevel` configuration directive.

The following snippet:

```lua
redis.log(redis.LOG_WARNING, 'Something is terribly wrong')
```

will produce a line similar to the following in your server's log:

```
[32343] 22 Mar 15:21:39 # Something is terribly wrong
```

### `redis.setresp(x)`

* Since version: 6.0.0
* Available in scripts: yes
* Available in functions: yes

This function allows the executing script to switch between [Redis Serialization Protocol (RESP)](/topics/protocol) versions for the replies returned by [`redis.call()`](#redis.call) and [`redis.pcall()`](#redis.pcall). It expects a single numerical argument as the protocol's version. The default protocol version is _2_, but it can be switched to version _3_.

Here's an example of switching to RESP3 replies:

```lua
redis.setresp(3)
```

Please refer to the [Data type conversion](#data-type-conversion) for more information about type conversions.

### `redis.set_repl(x)`

* Since version: 3.2.0
* Available in scripts: yes
* Available in functions: no

**Note:** this feature is only available when script effects replication is employed. Calling it when using verbatim script replication will result in an error.
As of Redis version 2.6.0, scripts were replicated
verbatim, meaning that the scripts' source code was sent for execution by replicas and stored in the AOF. An alternative replication mode added in version 3.2.0 allows replicating only the scripts' effects. As of Redis version 7.0, script replication is no longer supported, and the only replication mode available is script effects replication.

**Warning:** this is an advanced feature. Misuse can cause damage by violating the contract that binds the Redis master, its replicas, and the AOF's contents to hold the same logical content.

This function allows a script to assert control over how its effects are propagated to replicas and the AOF afterward. A script's effects are the Redis write commands that it calls.

By default, all write commands that a script executes are replicated. Sometimes, however, better control over this behavior can be helpful. This can be the case, for example, when storing intermediate values in the master alone.

Consider a script that intersects two sets and stores the result in a temporary key with `SUNIONSTORE`. It then picks five random elements (`SRANDMEMBER`) from the intersection and stores (`SADD`) them in another set. Finally, before returning, it deletes the temporary key that stores the intersection of the two source sets.

In this case, only the new set with its five randomly-chosen elements needs to be replicated. Replicating the `SUNIONSTORE` command and the deletion (`DEL`) of the temporary key is unnecessary and wasteful.

The `redis.set_repl()` function instructs the server how to treat subsequent write commands in terms of replication. It accepts a single input argument that can only be one of the following:

* `redis.REPL_ALL`: replicates the effects to the AOF and replicas.
* `redis.REPL_AOF`: replicates the effects to the AOF alone.
* `redis.REPL_REPLICA`: replicates the effects to the replicas alone.
* `redis.REPL_SLAVE`: same as `REPL_REPLICA`, maintained for backward compatibility.
* `redis.REPL_NONE`: disables effect replication entirely.

By default, the scripting engine is initialized to the `redis.REPL_ALL` setting when a script begins its execution. You can call the `redis.set_repl()` function at any time during the script's execution to switch between the different replication modes.

A simple example follows:

```lua
redis.replicate_commands() -- Enable effects replication in versions lower than Redis v7.0
redis.call('SET', KEYS[1], ARGV[1])
redis.set_repl(redis.REPL_NONE)
redis.call('SET', KEYS[2], ARGV[2])
redis.set_repl(redis.REPL_ALL)
redis.call('SET', KEYS[3], ARGV[3])
```

If you run this script by calling `EVAL "..." 3 A B C 1 2 3`, the result will be that only the keys _A_ and _C_ are created on the replicas and AOF.

### `redis.replicate_commands()`

* Since version: 3.2.0
* Until version: 7.0.0
* Available in scripts: yes
* Available in functions: no

This function switches the script's replication mode from verbatim replication to effects replication. You can use it to override the default verbatim script replication mode used by Redis until version 7.0.

**Note:** as of Redis v7.0, verbatim script replication is no longer supported. The default, and only, script replication mode supported is script effects replication. For more information, please refer to [Replicating commands instead of scripts](/topics/eval-intro#replicating-commands-instead-of-scripts).

### `redis.breakpoint()`

* Since version: 3.2.0
* Available in scripts: yes
* Available in functions: no

This function triggers a breakpoint when using the [Redis Lua debugger](/topics/ldb).

### `redis.debug(x)`

* Since version: 3.2.0
* Available in scripts: yes
* Available in functions: no

This function prints its argument in the [Redis Lua debugger](/topics/ldb) console.
### `redis.acl_check_cmd(command [,arg...])`

* Since version: 7.0.0
* Available in scripts: yes
* Available in functions: yes

This function is used for checking if the current user running the script has [ACL](/topics/acl) permissions to execute the given command with the given arguments. The return value is a boolean `true` in case the current user has permissions to execute the command (via a call to [redis.call](#redis.call) or [redis.pcall](#redis.pcall)) or `false` in case they don't. The function will raise an error if the passed command or its arguments are invalid.

### `redis.register_function`

* Since version: 7.0.0
* Available in scripts: no
* Available in functions: yes

This function is only available from the context of the `FUNCTION LOAD` command. When called, it registers a function to the loaded library. The function can be called either with positional or named arguments.

#### Positional arguments: `redis.register_function(name, callback)`

The first argument to `redis.register_function` is a Lua string representing the function name. The second argument to `redis.register_function` is a Lua function.

Usage example:

```
redis> FUNCTION LOAD "#!lua name=mylib\n redis.register_function('noop', function() end)"
```

#### Named arguments: `redis.register_function{function_name=name, callback=callback, flags={flag1, flag2, ..}, description=description}`

The named arguments variant accepts the following arguments:

* _function_name_: the function's name.
* _callback_: the function's callback.
* _flags_: an array of strings, each a function flag (optional).
* _description_: the function's description (optional).

Both _function_name_ and _callback_ are mandatory.

Usage example:

```
redis> FUNCTION LOAD "#!lua name=mylib\n redis.register_function{function_name='noop', callback=function() end, flags={ 'no-writes' }, description='Does nothing'}"
```

#### Script flags

**Important:** Use script flags with care; they may negatively impact correctness if misused.
Note that the defaults for Eval scripts are different than the defaults for functions that are mentioned below; see [Eval Flags](/docs/manual/programmability/eval-intro/#eval-flags).

When you register a function or load an Eval script, the server does not know how it accesses the database. By default, Redis assumes that all scripts read and write data. This results in the following behavior:

1. They can read and write data.
1. They can run in cluster mode, and are not able to run commands accessing keys of different hash slots.
1. Execution against a stale replica is denied to avoid inconsistent reads.
1. Execution under low memory is denied to avoid exceeding the configured threshold.

You can use the following flags and instruct the server to treat the scripts' execution differently:

* `no-writes`: this flag indicates that the script only reads data but never writes.

    By default, Redis will deny the execution of flagged scripts (Functions and Eval scripts with [shebang](/topics/eval-intro#eval-flags)) against read-only replicas, as they may attempt to perform writes. Similarly, the server will not allow calling scripts with `FCALL_RO` / `EVAL_RO`. Lastly, when data persistence is at risk due to a disk error, execution is blocked as well.

    Using this flag allows executing the script:
    1. With `FCALL_RO` / `EVAL_RO`
    2. On read-only replicas.
    3. Even if there's a disk error (Redis is unable to persist so it rejects writes).
    4. When over the memory limit, since it implies the script doesn't increase memory consumption (see `allow-oom` below).

    However, note that the server will return an error if the script attempts to call a write command. Also note that currently `PUBLISH`, `SPUBLISH` and `PFCOUNT` are also considered write commands in scripts, because they could attempt to propagate commands to replicas and the AOF file.
For more information please refer to [Read-only scripts](/docs/manual/programmability/#read-only_scripts).
* `allow-oom`: use this flag to allow a script to execute when the server is out of memory (OOM).

    Unless used, Redis will deny the execution of flagged scripts (Functions and Eval scripts with [shebang](/topics/eval-intro#eval-flags)) when in an OOM state. Furthermore, when you use this flag, the script can call any Redis command, including commands that aren't usually allowed in this state. Specifying `no-writes` or using `FCALL_RO` / `EVAL_RO` also implies that the script can run in an OOM state (without specifying `allow-oom`).

* `allow-stale`: a flag that enables running the flagged scripts (Functions and Eval scripts with [shebang](/topics/eval-intro#eval-flags)) against a stale replica when the `replica-serve-stale-data` config is set to `no`.

    Redis can be set to prevent data consistency problems from using old data by having stale replicas return a runtime error. For scripts that do not access the data, this flag can be set to allow stale Redis replicas to run the script. Note, however, that the script will still be unable to execute any command that accesses stale data.

* `no-cluster`: the flag causes the script to return an error in Redis cluster mode.

    Redis allows scripts to be executed both in standalone and cluster modes. Setting this flag prevents executing the script against nodes in the cluster.

* `allow-cross-slot-keys`: the flag that allows a script to access keys from multiple slots.

    Redis typically prevents any single command from accessing keys that hash to multiple slots. This flag allows scripts to break this rule and access keys within the script that access multiple slots. Declared keys to the script are still always required to hash to a single slot. Accessing keys from multiple slots is discouraged as applications should be designed to only access keys from a single slot at a time, allowing slots to move between Redis servers.

    This flag has no effect when cluster mode is disabled.
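As a sketch of how these flags are declared in practice, an Eval script can list them in its shebang line (the same `flags=` syntax applies to function libraries loaded with `FUNCTION LOAD`); the key name below is arbitrary and only for illustration:

```lua
#!lua flags=no-writes,allow-stale
-- A read-only script that may also run against a stale replica.
return redis.call('GET', KEYS[1])
```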
Please refer to [Function Flags](/docs/manual/programmability/functions-intro/#function-flags) and [Eval Flags](/docs/manual/programmability/eval-intro/#eval-flags) for a detailed example.

### `redis.REDIS_VERSION`

* Since version: 7.0.0
* Available in scripts: yes
* Available in functions: yes

Returns the current Redis server version as a Lua string. The reply's format is `MM.mm.PP`, where:

* **MM:** is the major version.
* **mm:** is the minor version.
* **PP:** is the patch level.

### `redis.REDIS_VERSION_NUM`

* Since version: 7.0.0
* Available in scripts: yes
* Available in functions: yes

Returns the current Redis server version as a number. The reply is a hexadecimal value structured as `0x00MMmmPP`, where:

* **MM:** is the major version.
* **mm:** is the minor version.
* **PP:** is the patch level.

## Data type conversion

Unless a runtime exception is raised, `redis.call()` and `redis.pcall()` return the reply from the executed command to the Lua script. Redis' replies from these functions are converted automatically into Lua's native data types.

Similarly, when a Lua script returns a reply with the `return` keyword, that reply is automatically converted to Redis' protocol.

Put differently, there's a one-to-one mapping between Redis' replies and Lua's data types, and a one-to-one mapping between Lua's data types and the [Redis Protocol](/topics/protocol) data types. The underlying design is such that if a Redis type is converted into a Lua type and converted back into a Redis type, the result is the same as the initial value.

Type conversion from Redis protocol replies (i.e., the replies from `redis.call()` and `redis.pcall()`) to Lua data types depends on the Redis Serialization Protocol version used by the script. The default protocol version during script executions is RESP2. The script may switch the replies' protocol versions by calling the `redis.setresp()` function.
Type conversion from a script's returned Lua data type depends on the user's choice of protocol (see the `HELLO` command).

The following sections describe the type conversion rules between Lua and Redis per the protocol's version.

### RESP2 to Lua type conversion

The following type conversion rules apply to the execution's context by default as well as after calling `redis.setresp(2)`:

* [RESP2 integer reply](/topics/protocol#resp-integers) -> Lua number
* [RESP2 bulk string reply](/topics/protocol#resp-bulk-strings) -> Lua string
* [RESP2 array reply](/topics/protocol#resp-arrays) -> Lua table (may have other Redis data types nested)
* [RESP2 status reply](/topics/protocol#resp-simple-strings) -> Lua table with a single _ok_ field containing the status string
* [RESP2 error reply](/topics/protocol#resp-errors) -> Lua table with a single _err_ field containing the error string
* [RESP2 null bulk reply](/topics/protocol#null-elements-in-arrays) and [null multi bulk reply](/topics/protocol#resp-arrays) -> Lua false boolean type

### Lua to RESP2 type conversion

The following type conversion rules apply by default as well as after the user had called `HELLO 2`:

* Lua number -> [RESP2 integer reply](/topics/protocol#resp-integers) (the number is converted into an integer)
* Lua string -> [RESP2 bulk string reply](/topics/protocol#resp-bulk-strings)
* Lua table (indexed, non-associative array) -> [RESP2 array reply](/topics/protocol#resp-arrays) (truncated at the first Lua `nil` value encountered in the table, if any)
* Lua table with a single _ok_ field -> [RESP2 status reply](/topics/protocol#resp-simple-strings)
* Lua table with a single _err_ field -> [RESP2 error
reply](/topics/protocol#resp-errors)
* Lua boolean false -> [RESP2 null bulk reply](/topics/protocol#null-elements-in-arrays)

There is an additional Lua-to-Redis conversion rule that has no corresponding Redis-to-Lua conversion rule:

* Lua Boolean `true` -> [RESP2 integer reply](/topics/protocol#resp-integers) with value of 1.

There are three additional rules to note about converting Lua to Redis data types:

* Lua has a single numerical type, Lua numbers. There is no distinction between integers and floats. So we always convert Lua numbers into integer replies, removing the decimal part of the number, if any. **If you want to return a Lua float, it should be returned as a string**, exactly like Redis itself does (see, for instance, the `ZSCORE` command).
* There's [no simple way to have nils inside Lua arrays](http://www.lua.org/pil/19.1.html) due to Lua's table semantics. Therefore, when Redis converts a Lua array to RESP, the conversion stops when it encounters a Lua `nil` value.
* When a Lua table is an associative array that contains keys and their respective values, the converted Redis reply will **not** include them.

Lua to RESP2 type conversion examples:

```
redis> EVAL "return 10" 0
(integer) 10
redis> EVAL "return { 1, 2, { 3, 'Hello World!' } }" 0
1) (integer) 1
2) (integer) 2
3) 1) (integer) 3
   2) "Hello World!"
redis> EVAL "return redis.call('get','foo')" 0
"bar"
```

The last example demonstrates receiving and returning the exact return value of `redis.call()` (or `redis.pcall()`) in Lua as it would be returned if the command had been called directly.
The following example shows how floats and arrays that contain nils and keys are handled:

```
redis> EVAL "return { 1, 2, 3.3333, somekey = 'somevalue', 'foo', nil , 'bar' }" 0
1) (integer) 1
2) (integer) 2
3) (integer) 3
4) "foo"
```

As you can see, the float value of _3.3333_ gets converted to an integer _3_, the _somekey_ key and its value are omitted, and the string "bar" isn't returned as there is a `nil` value that precedes it.
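To make these rules concrete, here is a small Python sketch (an illustrative analog, not part of the Redis API) that models the Lua-to-RESP2 conversion rules described above, using a Python list for a Lua array, `None` for Lua `nil`, and a `float` for a Lua number with a decimal part:

```python
def lua_to_resp2(value):
    """Model of the Lua -> RESP2 conversion rules (illustrative only)."""
    if value is True:
        return 1           # Lua true -> RESP2 integer reply of 1
    if value is False:
        return None        # Lua false -> RESP2 null bulk reply
    if isinstance(value, float):
        return int(value)  # numbers become integers; the decimal part is dropped
    if isinstance(value, list):
        out = []
        for item in value:
            if item is None:  # conversion stops at the first nil
                break
            out.append(lua_to_resp2(item))
        return out
    return value           # strings pass through as bulk strings

# Mirrors the example above: 3.3333 -> 3, and 'bar' is lost after the nil.
print(lua_to_resp2([1, 2, 3.3333, 'foo', None, 'bar']))  # → [1, 2, 3, 'foo']
```

Associative keys are omitted here simply because the Python list stand-in cannot carry them, which matches the rule that the converted reply does not include them.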
### RESP3 to Lua type conversion

[RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md) is a newer version of the [Redis Serialization Protocol](/topics/protocol). It is available as an opt-in choice as of Redis v6.0.

An executing script may call the [`redis.setresp`](#redis.setresp) function during its execution and switch the protocol version that's used for returning replies from Redis' commands (that can be invoked via [`redis.call()`](#redis.call) or [`redis.pcall()`](#redis.pcall)).

Once Redis' replies are in RESP3 protocol, all of the [RESP2 to Lua conversion](#resp2-to-lua-type-conversion) rules apply, with the following additions:

* [RESP3 map reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#map-type) -> Lua table with a single _map_ field containing a Lua table representing the fields and values of the map.
* [RESP3 set reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#set-reply) -> Lua table with a single _set_ field containing a Lua table representing the elements of the set as fields, each with the Lua Boolean value of `true`.
* [RESP3 null](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#null-reply) -> Lua `nil`.
* [RESP3 true reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) -> Lua true boolean value.
* [RESP3 false reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) -> Lua false boolean value.
* [RESP3 double reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#double-type) -> Lua table with a single _double_ field containing a Lua number representing the double value.
* [RESP3 big number reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#big-number-type) -> Lua table with a single _big_number_ field containing a Lua string representing the big number value.
* [RESP3 verbatim string reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#verbatim-string-type) -> Lua table with a single _verbatim_string_ field containing a Lua table with two fields, _string_ and _format_, representing the verbatim string and its format, respectively.

**Note:** the RESP3 [big number](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#big-number-type) and [verbatim string](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#verbatim-string-type) replies are only supported as of Redis v7.0 and greater. Also, presently, RESP3's [attributes](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#attribute-type), [streamed strings](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#streamed-strings) and [streamed aggregate data types](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#streamed-aggregate-data-types) are not supported by the Redis Lua API.

### Lua to RESP3 type conversion

Regardless of the script's choice of protocol version set for replies with the [`redis.setresp()`](#redis.setresp) function when it calls `redis.call()` or `redis.pcall()`, the user may opt-in to using RESP3 (with the `HELLO 3` command) for the connection.
Although the default protocol for incoming client connections is RESP2, the script should honor the user's preference and return adequately-typed RESP3 replies, so the following rules apply on top of those specified in the [Lua to RESP2 type conversion](#lua-to-resp2-type-conversion) section when that is the case:

* Lua Boolean -> [RESP3 Boolean reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) (note that this is a change compared to RESP2, in which returning a Boolean Lua `true` returned the number 1 to the Redis client, and returning a `false` used to return a `null`).
* Lua table with a single _map_ field set to an associative Lua table -> [RESP3 map reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#map-type).
* Lua table with a single _set_ field set to an associative Lua table -> [RESP3 set reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#set-type). Values can be set to anything and are discarded anyway.
* Lua table with a single _double_ field set to a Lua number -> [RESP3 double reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#double-type).
* Lua nil -> [RESP3 null](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#null-reply).

However, if the connection is set to use the RESP2 protocol, even if the script replies with RESP3-typed responses, Redis will automatically perform a RESP3 to RESP2 conversion of the reply as is the case for regular commands. That means, for example, that returning the RESP3 map type to a RESP2 connection will result in the reply being converted to a flat RESP2 array that consists of alternating field names and their values, rather than a RESP3 map.
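As an illustration of that downgrade (a Python analog, not Redis code; the dict-based reply shape is just a stand-in for the Lua table), a map reply can be flattened into the alternating-array form a RESP2 connection would receive:

```python
def downgrade_map_to_resp2(reply):
    """Flatten a RESP3-style map reply into a RESP2 alternating array."""
    flat = []
    for field, value in reply["map"].items():
        flat.extend([field, value])  # field name followed by its value
    return flat

# A map of two fields becomes a four-element flat array.
print(downgrade_map_to_resp2({"map": {"a": 1, "b": 2}}))  # → ['a', 1, 'b', 2]
```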
## Additional notes about scripting

### Using `SELECT` inside scripts

You can call the `SELECT` command from your Lua scripts, like you can with any normal client connection. However, one subtle aspect of the behavior changed between Redis versions 2.8.11 and 2.8.12. Prior to Redis version 2.8.12, the database selected by the Lua script was *set as the current database* for the client connection that had called it. As of Redis version 2.8.12, the database selected by the Lua script only affects the execution context of the script, and does not modify the database that's selected by the client calling the script.

This semantic change between patch level releases was required since the old behavior was inherently incompatible with Redis' replication and introduced bugs.

## Runtime libraries

The Redis Lua runtime context always comes with several pre-imported libraries.
The following [standard Lua libraries](https://www.lua.org/manual/5.1/manual.html#5) are available to use:

* The [_String Manipulation (string)_ library](https://www.lua.org/manual/5.1/manual.html#5.4)
* The [_Table Manipulation (table)_ library](https://www.lua.org/manual/5.1/manual.html#5.5)
* The [_Mathematical Functions (math)_ library](https://www.lua.org/manual/5.1/manual.html#5.6)
* The [_Operating System Facilities (os)_ library](#os-library)

In addition, the following external libraries are loaded and accessible to scripts:

* The [_struct_ library](#struct-library)
* The [_cjson_ library](#cjson-library)
* The [_cmsgpack_ library](#cmsgpack-library)
* The [_bitop_ library](#bitop-library)

### _os_ library

* Since version: 8.0.0
* Available in scripts: yes
* Available in functions: yes

_os_ provides a set of functions for dealing with date, time, and system commands. More details can be found in the [Operating System Facilities](https://www.lua.org/manual/5.1/manual.html#5.8) documentation. Note that for sandbox security, currently only the following os function is exposed:

* `os.clock()`

### _struct_ library

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

_struct_ is a library for packing and unpacking C-like structures in Lua. It provides the following functions:

* [`struct.pack()`](#struct.pack)
* [`struct.unpack()`](#struct.unpack)
* [`struct.size()`](#struct.size)

All of _struct_'s functions expect their first argument to be a [format string](#struct-formats).
#### _struct_ formats

The following are valid format strings for _struct_'s functions:

* `>`: big endian
* `<`: little endian
* `![num]`: alignment
* `x`: padding
* `b/B`: signed/unsigned byte
* `h/H`: signed/unsigned short
* `l/L`: signed/unsigned long
* `T`: size_t
* `i/In`: signed/unsigned integer with size _n_ (defaults to the size of int)
* `cn`: sequence of _n_ chars (from/to a string); when packing, n == 0 means the whole string; when unpacking, n == 0 means use the previously read number as the string's length.
* `s`: zero-terminated string
* `f`: float
* `d`: double
* ` ` (space): ignored

#### `struct.pack(x)`

This function returns a struct-encoded string from values. It accepts a [_struct_ format string](#struct-formats) as its first argument, followed by the values that are to be encoded.

Usage example:

```
redis> EVAL "return struct.pack('HH', 1, 2)" 0
"\x01\x00\x02\x00"
```

#### `struct.unpack(x)`

This function returns the decoded values from a struct. It accepts a [_struct_ format string](#struct-formats) as its first argument, followed by the encoded struct's string.

Usage example:

```
redis> EVAL "return { struct.unpack('HH', ARGV[1]) }" 0 "\x01\x00\x02\x00"
1) (integer) 1
2) (integer) 2
3) (integer) 5
```

#### `struct.size(x)`

This function returns the size, in bytes, of
a struct. It accepts a [_struct_ format string](#struct-formats) as its only argument.

Usage example:

```
redis> EVAL "return struct.size('HH')" 0
(integer) 4
```

### _cjson_ library

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

The _cjson_ library provides fast [JSON](https://json.org) encoding and decoding from Lua. It provides these functions.

#### `cjson.encode(x)`

This function returns a JSON-encoded string for the Lua data type provided as its argument.

Usage example:

```
redis> EVAL "return cjson.encode({ ['foo'] = 'bar' })" 0
"{\"foo\":\"bar\"}"
```

#### `cjson.decode(x)`

This function returns a Lua data type from the JSON-encoded string provided as its argument.

Usage example:

```
redis> EVAL "return cjson.decode(ARGV[1])['foo']" 0 '{"foo":"bar"}'
"bar"
```

### _cmsgpack_ library

* Since version: 2.6.0
* Available in scripts: yes
* Available in functions: yes

The _cmsgpack_ library provides fast [MessagePack](https://msgpack.org/index.html) encoding and decoding from Lua. It provides these functions.

#### `cmsgpack.pack(x)`

This function returns the packed string encoding of the Lua data type it is given as an argument.

Usage example:

```
redis> EVAL "return cmsgpack.pack({'foo', 'bar', 'baz'})" 0
"\x93\xa3foo\xa3bar\xa3baz"
```

#### `cmsgpack.unpack(x)`

This function returns the unpacked values from decoding its input string argument.

Usage example:

```
redis> EVAL "return cmsgpack.unpack(ARGV[1])" 0 "\x93\xa3foo\xa3bar\xa3baz"
1) "foo"
2) "bar"
3) "baz"
```

### _bit_ library

* Since version: 2.8.18
* Available in scripts: yes
* Available in functions: yes

The _bit_ library provides bitwise operations on numbers. Its documentation resides at [Lua BitOp documentation](http://bitop.luajit.org/api.html). It provides the following functions.

#### `bit.tobit(x)`

Normalizes a number to the numeric range for bit operations and returns it.
Usage example:

```
redis> EVAL 'return bit.tobit(1)' 0
(integer) 1
```

#### `bit.tohex(x [,n])`

Converts its first argument to a hex string. The number of hex digits is given by the absolute value of the optional second argument.

Usage example:

```
redis> EVAL 'return bit.tohex(422342)' 0
"000671c6"
```

#### `bit.bnot(x)`

Returns the bitwise **not** of its argument.

#### `bit.bor(x1 [,x2...])`, `bit.band(x1 [,x2...])` and `bit.bxor(x1 [,x2...])`

Returns either the bitwise **or**, bitwise **and**, or bitwise **xor** of all of its arguments. Note that more than two arguments are allowed.

Usage example:

```
redis> EVAL 'return bit.bor(1,2,4,8,16,32,64,128)' 0
(integer) 255
```

#### `bit.lshift(x, n)`, `bit.rshift(x, n)` and `bit.arshift(x, n)`

Returns either the bitwise logical **left-shift**, bitwise logical **right-shift**, or bitwise **arithmetic right-shift** of its first argument by the number of bits given by the second argument.

#### `bit.rol(x, n)` and `bit.ror(x, n)`

Returns either the bitwise **left rotation**, or bitwise **right rotation** of its first argument by the number of bits given by the second argument. Bits shifted out on one side are shifted back in on the other side.

#### `bit.bswap(x)`

Swaps the bytes of its argument and returns it. This can be used to convert little-endian 32-bit numbers to big-endian 32-bit numbers and vice versa.
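The 32-bit semantics above can be mimicked outside Lua; the following Python sketch (an illustrative analog with hypothetical helper names, not the BitOp API itself) reproduces the behavior of `bit.tobit`, `bit.tohex` and `bit.bswap`:

```python
MASK32 = 0xFFFFFFFF

def tobit(x):
    # Normalize to the signed 32-bit range, like bit.tobit.
    x &= MASK32
    return x - 0x100000000 if x >= 0x80000000 else x

def tohex(x, n=8):
    # Hex string with n digits, matching bit.tohex's default of 8.
    return format(x & MASK32, "0{}x".format(n))

def bswap(x):
    # Reverse the byte order of a 32-bit value, like bit.bswap.
    return int.from_bytes((x & MASK32).to_bytes(4, "big"), "little")

print(tohex(422342))      # → 000671c6 (matches the bit.tohex example above)
print(hex(bswap(0x12345678)))  # → 0x78563412
```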
Redis provides a programming interface that lets you execute custom scripts on the server itself. In Redis 7 and beyond, you can use [Redis Functions](/docs/manual/programmability/functions-intro) to manage and run your scripts. In Redis 6.2 and below, you use [Lua scripting with the EVAL command](/docs/manual/programmability/eval-intro) to program the server.

## Background

Redis is, by [definition](https://github.com/redis/redis/blob/unstable/MANIFESTO#L7), a _"domain-specific language for abstract data types"_. The language that Redis speaks consists of its [commands](/commands). Most of the commands specialize in manipulating core [data types](/topics/data-types-intro) in different ways. In many cases, these commands provide all the functionality that a developer requires for managing application data in Redis.

The term **programmability** in Redis means having the ability to execute arbitrary user-defined logic by the server. We refer to such pieces of logic as **scripts**. In our case, scripts enable processing the data where it lives, a.k.a. _data locality_. Furthermore, the responsible embedding of programmatic workflows in the Redis server can help in reducing network traffic and improving overall performance. Developers can use this capability for implementing robust, application-specific APIs. Such APIs can encapsulate business logic and maintain a data model across multiple keys and different data structures.

User scripts are executed in Redis by an embedded, sandboxed scripting engine. Presently, Redis supports a single scripting engine, the [Lua 5.1](https://www.lua.org/) interpreter. Please refer to the [Redis Lua API Reference](/topics/lua-api) page for complete documentation.

## Running scripts

Redis provides two means for running scripts.

Firstly, and ever since Redis 2.6.0, the `EVAL` command enables running server-side scripts. Eval scripts provide a quick and straightforward way to have Redis run your scripts ad hoc.
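Eval scripts are cached by the server under the SHA1 digest of their source: `SCRIPT LOAD` returns that digest, and `EVALSHA` runs the cached script by digest to avoid resending the body. A minimal client-side sketch of computing the digest (the script body here is just an example):

```python
import hashlib

# Example Lua script body. Redis caches scripts keyed by the SHA1 hex
# digest of exactly these bytes (the value SCRIPT LOAD would return).
script = "return redis.call('GET', KEYS[1])"

sha1 = hashlib.sha1(script.encode()).hexdigest()

# A client can now try `EVALSHA <sha1> 1 <key>` and fall back to
# `EVAL <script> 1 <key>` if the server replies with a NOSCRIPT error.
print(sha1)
```

Because the script cache is volatile, a robust client always keeps the source at hand for the `EVAL` fallback, as discussed below.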
However, using them means that the scripted logic is a part of your application (not an extension of the Redis server). Every application instance that runs a script must have the script's source code readily available for loading at any time. That is because scripts are only cached by the server and are volatile. As your application grows, this approach can become harder to develop and maintain.

Secondly, added in v7.0, Redis Functions are essentially scripts that are first-class database elements. As such, functions decouple scripting from application logic and enable independent development, testing, and deployment of scripts. To use functions, they need to be loaded first, and then they are available for use by all connected clients. In this case, loading a function to the database becomes an administrative deployment task (such as loading a Redis module, for example), which separates the script from the application.

Please refer to the following pages for more information:

* [Redis Eval Scripts](/topics/eval-intro)
* [Redis Functions](/topics/functions-intro)

When running a script or a function, Redis guarantees its atomic execution. The script's execution blocks all server activities during its entire runtime, similarly to the semantics of [transactions](/topics/transactions). These semantics mean that all of the script's effects either have yet to happen or had already happened. The blocking semantics of an executed script apply to all connected clients at all times.

Note that the potential downside of this blocking approach is that executing slow scripts is not a good idea. It is not hard to create fast scripts because scripting's overhead is very low. However, if you intend to use a slow script in your application, be aware that all other clients are blocked and can't execute any command while it is running.

## Read-only scripts

A read-only script is a script that only executes commands that don't modify any keys within Redis.
Read-only scripts can be executed either by adding the `no-writes` [flag](/topics/lua-api#script_flags) to the script or by executing the script with one of the read-only script command variants: `EVAL_RO`, `EVALSHA_RO`, or `FCALL_RO`.
They have the following properties:

* They can always be executed on replicas.
* They can always be killed by the `SCRIPT KILL` command.
* They never fail with an OOM error when Redis is over the memory limit.
* They are not blocked during write pauses, such as those that occur during coordinated failovers.
* They cannot execute any command that may modify the data set.
* Currently `PUBLISH`, `SPUBLISH` and `PFCOUNT` are also considered write commands in scripts, because they could attempt to propagate commands to replicas and the AOF file.

In addition to the benefits provided by all read-only scripts, the read-only script commands have the following advantages:

* They can be used to configure an ACL user to only be able to execute read-only scripts.
* Many clients also better support routing the read-only script commands to replicas for applications that want to use replicas for read scaling.

#### Read-only script history

Read-only scripts and read-only script commands were introduced in Redis 7.0.

* Before Redis 7.0.1, `PUBLISH`, `SPUBLISH` and `PFCOUNT` were not considered write commands in scripts.
* Before Redis 7.0.1, the `no-writes` [flag](/topics/lua-api#script_flags) did not imply `allow-oom`.
* Before Redis 7.0.1, the `no-writes` flag did not permit the script to run during write pauses.

The recommended approach is to use the standard scripting commands with the `no-writes` flag unless you need one of the previously mentioned features.

## Sandboxed script context

Redis places the engine that executes user scripts inside a sandbox.
The sandbox attempts to prevent accidental misuse and reduce potential threats from the server's environment. Scripts should never try to access the Redis server's underlying host system, such as the file system or network, or attempt to perform any system call other than those supported by the API. Scripts should operate solely on data stored in Redis and data provided as arguments to their execution.

## Maximum execution time

Scripts are subject to a maximum execution time (set by default to five seconds). This default timeout is enormous since a script usually runs in less than a millisecond. The limit is in place to handle accidental infinite loops created during development.

It is possible to modify the maximum time a script can be executed with millisecond precision, either via `redis.conf` or by using the `CONFIG SET` command. The configuration parameter affecting max execution time is called `busy-reply-threshold`.

When a script reaches the timeout threshold, it isn't terminated by Redis automatically. Doing so would violate the contract between Redis and the scripting engine that ensures that scripts are atomic. Interrupting the execution of a script has the potential of leaving the dataset with half-written changes. Therefore, when a script executes for longer than the configured timeout, the following happens:

* Redis logs that a script is running for too long.
* It starts accepting commands again from other clients, but will reply with a BUSY error to all the clients sending normal commands. The only commands allowed in this state are `SCRIPT KILL`, `FUNCTION KILL`, and `SHUTDOWN NOSAVE`.
* It is possible to terminate a script that only executes read-only commands using the `SCRIPT KILL` and `FUNCTION KILL` commands. These commands do not violate the scripting semantics, as no data has been written to the dataset by the script yet.
* If the script had already performed even a single write operation, the only command allowed is `SHUTDOWN NOSAVE`, which stops the server without saving the current data set on disk (basically, the server is aborted).
This document provides information about how Redis handles clients at the network layer level: connections, timeouts, buffers, and other similar topics are covered here. The information contained in this document is **only applicable to Redis version 2.6 or greater**.

## Accepting Client Connections

Redis accepts client connections on the configured TCP port and on the Unix socket if enabled. When a new client connection is accepted, the following operations are performed:

* The client socket is put in the non-blocking state since Redis uses multiplexing and non-blocking I/O.
* The `TCP_NODELAY` option is set in order to ensure that there are no delays to the connection.
* A *readable* file event is created so that Redis is able to collect the client queries as soon as new data is available to read on the socket.

After the client is initialized, Redis checks if it is already at the limit configured for the number of simultaneous clients (configured using the `maxclients` configuration directive; see the next section of this document for further information).

When Redis can't accept a new client connection because the maximum number of clients has been reached, it tries to send an error to the client in order to make it aware of this condition, closing the connection immediately. The error message will reach the client even though the connection is closed immediately by Redis, because the new socket's output buffer is usually big enough to contain the error, so the kernel will handle transmission of the error.

## What Order are Client Requests Served In?

The order is determined by a combination of the client socket file descriptor number and the order in which the kernel reports events, so the order should be considered as unspecified. However, Redis does the following two things when serving clients:

* It only performs a single `read()` system call every time there is something new to read from the client socket.
This ensures that if we have multiple clients connected, and a few send queries at a high rate, other clients are not penalized and will not experience latency issues.

* However, once new data is read from a client, all the queries contained in the current buffers are processed sequentially. This improves locality and avoids iterating a second time to see if there are clients that need some processing time.

## Maximum Concurrent Connected Clients

In Redis 2.4 there was a hard-coded limit for the maximum number of clients that could be handled simultaneously. In Redis 2.6 and newer, this limit is configurable using the `maxclients` directive in `redis.conf`. The default is 10,000 clients.

However, Redis checks with the kernel what the maximum number of file descriptors that we are able to open is (the *soft limit* is checked). If the limit is less than the maximum number of clients we want to handle, plus 32 (that is the number of file descriptors Redis reserves for internal uses), then the maximum number of clients is updated to match the number of clients it is *really able to handle* under the current operating system limit.

When `maxclients` is set to a number greater than Redis can support, a message is logged at startup:

```
$ ./redis-server --maxclients 100000
[41422] 23 Jan 11:28:33.179 # Unable to set the max number of files limit to 100032 (Invalid argument), setting the max clients configuration to 10112.
```

When Redis is configured to handle a specific number of clients, it is a good idea to make sure that the operating system limit for the maximum number of file descriptors per process is also set accordingly.
Under Linux these limits can be set both in the current session and as a system-wide setting with the following commands:

* `ulimit -Sn 100000 # This will only work if the hard limit is big enough.`
* `sysctl -w fs.file-max=100000`

## Output Buffer Limits

Redis needs to handle a variable-length output buffer for every client, since a command can produce a large amount of data that needs to be transferred to the client.

However, it is possible that a client sends commands producing more output than Redis can deliver to that client in the same time. This is especially true with Pub/Sub clients, in case a client is not able to process new messages fast enough. Both conditions will cause the client output buffer to grow and consume more and more memory. For this reason, by default, Redis sets limits on the output buffer size for different kinds of clients. When the limit is reached, the client connection is closed and the event is logged in the Redis log file.

There are two kinds of limits Redis uses:

* The **hard limit** is a fixed limit that, when reached, will make Redis close the client connection as soon as possible.
* The **soft limit** instead is a limit that depends on time; for instance, a soft limit of 32 megabytes per 10 seconds means that if the client has an output buffer bigger than 32 megabytes for 10 continuous seconds, the connection gets closed.
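The hard/soft rule described above is a small state machine: the hard limit trips immediately, while the soft limit only trips after the buffer has stayed over the threshold for the configured window. A rough sketch of that logic (our own model, not Redis internals):

```python
class OutputBufferLimit:
    """Sketch of the hard/soft output-buffer-limit check described above."""

    def __init__(self, hard_bytes, soft_bytes, soft_seconds):
        self.hard = hard_bytes
        self.soft = soft_bytes
        self.soft_seconds = soft_seconds
        self._over_since = None  # when the buffer first exceeded the soft limit

    def should_close(self, now, buffer_bytes):
        """Return True if the connection should be closed at time `now`."""
        if self.hard and buffer_bytes >= self.hard:
            return True  # hard limit: close as soon as possible
        if self.soft and buffer_bytes >= self.soft:
            if self._over_since is None:
                self._over_since = now  # start the soft-limit clock
            return now - self._over_since >= self.soft_seconds
        self._over_since = None  # dropped below the soft limit: reset the clock
        return False

# Pub/Sub defaults from the section below: 32 MB hard, 8 MB per 60 s soft.
pubsub_limit = OutputBufferLimit(32 * 2**20, 8 * 2**20, 60)
```

Note how dropping below the soft limit resets the clock, so only a *continuous* violation closes the connection.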
Different kinds of clients have different default limits:

* **Normal clients** have a default limit of 0 (that means no limit at all), because most normal clients use blocking implementations, sending a single command and waiting for the reply to be completely read before sending the next command, so it is almost never desirable to close the connection of a normal client.
* **Pub/Sub clients** have a default hard limit of 32 megabytes and a soft limit of 8 megabytes per 60 seconds.
* **Replicas** have a default hard limit of 256 megabytes and a soft limit of 64 megabytes per 60 seconds.

It is possible to change the limit at runtime using the `CONFIG SET` command or in a permanent way using the Redis configuration file `redis.conf`. See the example `redis.conf` in the Redis distribution for more information about how to set the limit.

## Query Buffer Hard Limit

Every client is also subject to a query buffer limit. This is a non-configurable hard limit that will close the connection when the client query buffer (that is, the buffer we use to accumulate commands from the client) reaches 1 GB, and is actually only an extreme limit to avoid a server crash in case of client or server software bugs.

## Client Eviction

Redis is built to handle a very large number of client connections. Client connections tend to consume memory, and when there are many of them, the aggregate memory consumption can be extremely high, leading to data eviction or out-of-memory errors. These cases can be mitigated to an extent using [output buffer limits](#output-buffer-limits), but Redis allows us a more robust configuration to limit the aggregate memory used by all clients' connections. This mechanism is called **client eviction**, and it's essentially a safety mechanism that will disconnect clients once the aggregate memory usage of all clients is above a threshold.
The mechanism first attempts to disconnect clients that use the most memory. It disconnects the minimal number of clients needed to return below the `maxmemory-clients` threshold.

`maxmemory-clients` defines the maximum aggregate memory usage of all clients connected to Redis. The aggregation takes into account all the memory used by the client connections: the [query buffer](#query-buffer-hard-limit), the output buffer, and other intermediate buffers.

Note that replica and master connections aren't affected by the client eviction mechanism. Therefore, such connections are never evicted.

`maxmemory-clients` can be set permanently in the configuration file (`redis.conf`) or via the `CONFIG SET` command. This setting can either be 0 (meaning no limit), a size in bytes (possibly with an `mb`/`gb` suffix), or a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%` would mean 10% of the `maxmemory` configuration).

The default setting is 0, meaning client eviction is turned off by default. However, for any large production deployment, it is highly recommended to configure some non-zero `maxmemory-clients` value. A value of `5%`, for example, can be a good place to start.

It is possible to flag a specific client connection to be excluded from the client eviction mechanism. This is useful for control path connections.
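The accepted forms of the setting (0, a byte size with an `mb`/`gb` suffix, or a percentage of `maxmemory`) can be illustrated with a small parser. This mirrors the description above, not Redis's actual configuration code; `kb`/`mb`/`gb` are binary units as in `redis.conf`:

```python
def parse_maxmemory_clients(value, maxmemory=0):
    """Turn a maxmemory-clients-style string into a byte budget.

    Accepts "0" (eviction disabled), a byte count with an optional
    kb/mb/gb suffix (binary units, as in redis.conf), or a percentage
    of maxmemory such as "10%".
    """
    value = value.strip().lower()
    if value.endswith("%"):
        # Percentage of the configured maxmemory.
        return maxmemory * int(value[:-1]) // 100
    units = {"kb": 2**10, "mb": 2**20, "gb": 2**30}
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value)  # plain byte count, "0" disables eviction
```

For example, with `maxmemory` at 1,000,000 bytes, `"10%"` yields a 100,000-byte client budget.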
If, for example, you have an application that monitors the server via the `INFO` command and alerts you in case of a problem, you might want to make sure this connection isn't evicted. You can do so using the following command (from the relevant client's connection):

`CLIENT NO-EVICT on`

And you can revert that with:

`CLIENT NO-EVICT off`

For more information and an example refer to the `maxmemory-clients` section in the default `redis.conf` file.

Client eviction is available from Redis 7.0.

## Client Timeouts

By default, recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever. However, if you don't like this behavior, you can configure a timeout, so that if the client is idle for more than the specified number of seconds, the client connection will be closed. You can configure this limit via `redis.conf` or simply using `CONFIG SET timeout <value>`.

Note that the timeout only applies to normal clients and it **does not apply to Pub/Sub clients**, since a Pub/Sub connection is a *push-style* connection, so a client that is idle is the norm.

Even if by default connections are not subject to timeout, there are two conditions when it makes sense to set a timeout:

* Mission-critical applications where a bug in the client software may saturate the Redis server with idle connections, causing service disruption.
* As a debugging mechanism in order to be able to connect with the server if a bug in the client software saturates the server with idle connections, making it impossible to interact with the server.

Timeouts are not to be considered very precise: Redis avoids setting timer events or running O(N) algorithms in order to check idle clients, so the check is performed incrementally from time to time.
This means that it is possible that while the timeout is set to 10 seconds, the client connection will be closed, for instance, after 12 seconds if many clients are connected at the same time.
## The CLIENT Command

The Redis `CLIENT` command allows you to inspect the state of every connected client, to kill a specific client, and to name connections. It is a very powerful debugging tool if you use Redis at scale.

`CLIENT LIST` is used in order to obtain a list of connected clients and their state:

```
redis 127.0.0.1:6379> client list
addr=127.0.0.1:52555 fd=5 name= age=855 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
addr=127.0.0.1:52787 fd=6 name= age=6 idle=5 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=ping
```

In the above example two clients are connected to the Redis server. Let's look at what some of the data returned represents:

* **addr**: The client address, that is, the client IP and the remote port number it used to connect with the Redis server.
* **fd**: The client socket file descriptor number.
* **name**: The client name as set by `CLIENT SETNAME`.
* **age**: The number of seconds the connection has existed for.
* **idle**: The number of seconds the connection has been idle.
* **flags**: The kind of client (N means normal client; check the [full list of flags](https://redis.io/commands/client-list)).
* **omem**: The amount of memory used by the client for the output buffer.
* **cmd**: The last executed command.

See the [`CLIENT LIST`](https://redis.io/commands/client-list) documentation for the full listing of fields and their purpose.
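Since each `CLIENT LIST` line is a space-separated sequence of `field=value` pairs, monitoring tools can parse it with very little code. A minimal sketch, using the first line from the example output above:

```python
def parse_client_list_line(line):
    """Split one CLIENT LIST output line into a field -> value dict."""
    return dict(pair.split("=", 1) for pair in line.split())

# First line from the CLIENT LIST example above.
line = ("addr=127.0.0.1:52555 fd=5 name= age=855 idle=0 flags=N db=0 "
        "sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 "
        "omem=0 events=r cmd=client")
info = parse_client_list_line(line)
print(info["addr"], info["cmd"], info["idle"])
```

Empty values such as `name=` simply parse to empty strings; all values arrive as text, so numeric fields like `age` need an explicit `int()` conversion.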
Once you have the list of clients, you can close a client's connection using the `CLIENT KILL` command, specifying the client address as its argument.

The commands `CLIENT SETNAME` and `CLIENT GETNAME` can be used to set and get the connection name. Starting with Redis 4.0, the client name is shown in the `SLOWLOG` output, to help identify clients that create latency issues.

## TCP keepalive

From version 3.2 onwards, Redis has TCP keepalive (the `SO_KEEPALIVE` socket option) enabled by default, set to about 300 seconds. This option is useful in order to detect dead peers (clients that cannot be reached even if they look connected). Moreover, if there is network equipment between clients and servers that needs to see some traffic in order to keep the connection open, the option will prevent unexpected connection-closed events.
Redis Sentinel is a monitoring solution for Redis instances that handles automatic failover of Redis masters and service discovery (who is the current master for a given group of instances?). Since Sentinel is both responsible for reconfiguring instances during failovers and for providing configurations to clients connecting to Redis masters or replicas, clients are required to have explicit support for Redis Sentinel.

This document is targeted at Redis client developers who want to support Sentinel in their client implementations with the following goals:

* Automatic configuration of clients via Sentinel.
* Improved safety of Redis Sentinel automatic failover.

For details about how Redis Sentinel works, please check the [Redis Documentation](/topics/sentinel), as this document only contains information needed for Redis client developers, and it is expected that readers are familiar with the way Redis Sentinel works.

## Redis service discovery via Sentinel

Redis Sentinel identifies every master with a name like "stats" or "cache". Every name actually identifies a *group of instances*, composed of a master and a variable number of replicas.

The address of the Redis master that is used for a specific purpose inside a network may change after events like an automatic failover, a manually triggered failover (for instance in order to upgrade a Redis instance), and other reasons.

Normally Redis clients have some kind of hard-coded configuration that specifies the address of a Redis master instance within a network as an IP address and port number. However, if the master address changes, manual intervention in every client is needed.

A Redis client supporting Sentinel can automatically discover the address of a Redis master from the master name using Redis Sentinel. So instead of a hard-coded IP address and port, a client supporting Sentinel should optionally be able to take as input:

* A list of ip:port pairs pointing to known Sentinel instances.
* The name of the service, like "cache" or "timelines".

This is the procedure a client should follow in order to obtain the master address, starting from the list of Sentinels and the service name.

Step 1: connecting to the first Sentinel
---

The client should iterate the list of Sentinel addresses. For every address it should try to connect to the Sentinel, using a short timeout (in the order of a few hundred milliseconds). On errors or timeouts the next Sentinel address should be tried.

If all the Sentinel addresses were tried without success, an error should be returned to the client.

The first Sentinel replying to the client request should be put at the start of the list, so that at the next reconnection, we'll try first the Sentinel that was reachable in the previous connection attempt, minimizing latency.

Step 2: ask for the master address
---

Once a connection with a Sentinel is established, the client should try to execute the following command on the Sentinel:

    SENTINEL get-master-addr-by-name master-name

Where *master-name* should be replaced with the actual service name specified by the user.

The result from this call can be one of the following two replies:

* An ip:port pair.
* A null reply. This means Sentinel does not know this master.

If an ip:port pair is received, this address should be used to connect to the Redis master. Otherwise, if a null reply is received, the client should try the next Sentinel in the list.

Step 3: call the ROLE command in the target instance
---

Once the client has discovered the address of the master instance, it should attempt a connection with the master and call the `ROLE` command in order to verify that the role of the instance is actually a master.
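Steps 1 and 2 above can be sketched as a small loop. This is only an illustration: the `query` callable stands in for connecting to a Sentinel with a short timeout and sending `SENTINEL get-master-addr-by-name`, and all addresses used below are made up:

```python
def resolve_master(sentinels, master_name, query):
    """Try each Sentinel in order to resolve the master address.

    `query(addr, master_name)` should return an (ip, port) pair, None
    for a null reply (unknown master), or raise OSError on connection
    errors/timeouts.  A Sentinel that replies is moved to the front of
    the list so the next reconnection tries it first.
    """
    for addr in list(sentinels):
        try:
            reply = query(addr, master_name)
        except OSError:
            continue  # error or timeout: try the next Sentinel
        # This Sentinel replied: promote it to the front of the list.
        sentinels.remove(addr)
        sentinels.insert(0, addr)
        if reply is not None:
            return reply  # ip:port of the master
        # Null reply: this Sentinel doesn't know the master; keep trying.
    raise RuntimeError("unable to resolve master %r via Sentinel" % master_name)

# Hypothetical usage with stubbed Sentinels (addresses are invented):
def fake_query(addr, master_name):
    if addr == ("10.0.0.1", 26379):
        raise OSError("connect timeout")
    return ("10.0.0.9", 6379)

sentinels = [("10.0.0.1", 26379), ("10.0.0.2", 26379)]
master = resolve_master(sentinels, "mymaster", fake_query)
```

A real client would follow this with Step 3, verifying the resolved instance with `ROLE` before using it.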
If the `ROLE` command is not available (it was introduced in Redis 2.8.12), a client may resort to the `INFO replication` command, parsing the `role:` field of the output.

If the instance is not a master as expected, the client should wait a short amount of time (a few hundred milliseconds) and should try again starting from Step 1.

Handling reconnections
===

Once the service name is resolved into the master address and a connection is established with the Redis master instance, every time a reconnection is needed the client should resolve the master address again using Sentinels, restarting from Step 1. For instance, Sentinel should be contacted again in the following cases:

* If the client reconnects after a timeout or socket error.
* If the client reconnects because it was explicitly closed or reconnected by the user.

In the above cases, and in any other case where the client lost the connection with the Redis server, the client should resolve the master address again.

Sentinel failover disconnection
===

Starting with Redis 2.8.12, when Redis Sentinel changes the configuration of an instance, for example promoting a replica to a master, demoting a master to replicate to the new master after a failover, or simply changing the master address of a stale replica instance, it sends a `CLIENT KILL type normal` command to the instance in order to make sure all the clients are disconnected from the reconfigured instance. This will force clients to resolve the master address again.
If the client contacts a Sentinel with not-yet-updated information, the verification of the Redis instance role via the `ROLE` command will fail, allowing the client to detect that the contacted Sentinel provided stale information, and it will try again.

Note: it is possible that a stale master returns online at the same time a client contacts a stale Sentinel instance, so the client may connect with a stale master, and yet the ROLE output will match. However, when the master is back again, Sentinel will try to demote it to replica, triggering a new disconnection. The same reasoning applies to connecting to stale replicas that will get reconfigured to replicate with a different master.

Connecting to replicas
===

Sometimes clients are interested in connecting to replicas, for example in order to scale read requests. This protocol supports connecting to replicas by modifying Step 2 slightly. Instead of calling the following command:

    SENTINEL get-master-addr-by-name master-name

the client should call instead:

    SENTINEL replicas master-name

in order to retrieve a list of replica instances.

Symmetrically, the client should verify with the `ROLE` command that the instance is actually a replica, in order to avoid scaling read queries with the master.

Connection pools
===

For clients implementing connection pools, on reconnection of a single connection, the Sentinel should be contacted again, and in case of a master address change all the existing connections should be closed and reconnected to the new address.

Error reporting
===

The client should correctly return the information to the user in case of errors. Specifically:

* If no Sentinel can be contacted (so that the client was never able to get the reply to `SENTINEL get-master-addr-by-name`), an error that clearly states that Redis Sentinel is unreachable should be returned.
\* If all the Sentinels in the pool replied with a null | https://github.com/redis/redis-doc/blob/master//docs/reference/sentinel-clients.md | master | redis | [
... | 0.051921 |
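The three-step resolution plus the retry-on-stale-role behavior described above can be sketched in Python; `get_addr` and `get_role` are hypothetical callbacks standing in for the actual `SENTINEL get-master-addr-by-name` and `ROLE` calls, and `resolve_master` is not part of any real client API:

```python
import time

def resolve_master(sentinels, get_addr, get_role, retries=3, backoff=0.3):
    """Sketch of the Sentinel client protocol, under the assumptions above.

    get_addr(sentinel) performs SENTINEL get-master-addr-by-name and
    returns (host, port) or None; get_role(addr) performs ROLE.
    """
    for _ in range(retries):
        for sentinel in sentinels:
            addr = get_addr(sentinel)
            if addr is None:
                continue  # this Sentinel doesn't know the master: try the next
            if get_role(addr) == "master":
                return addr  # Step 3 verification passed
            break  # stale information: wait a bit and restart from Step 1
        time.sleep(backoff)
    raise RuntimeError("unable to resolve a verified master via Sentinel")
```

The short sleep between rounds mirrors the "wait a few hundred milliseconds and retry from Step 1" rule in the text.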
case of errors. Specifically: \* If no Sentinel can be contacted (so that the client was never able to get the reply to `SENTINEL get-master-addr-by-name`), an error that clearly states that Redis Sentinel is unreachable should be returned. \* If all the Sentinels in the pool replied with a null reply, the user should be informed with an error that Sentinels don't know this master name. Sentinels list automatic refresh === Optionally once a successful reply to `get-master-addr-by-name` is received, a client may update its internal list of Sentinel nodes following this procedure: \* Obtain a list of other Sentinels for this master using the command `SENTINEL sentinels `. \* Add every ip:port pair not already existing in our list at the end of the list. A client is not required to be able to persist the list by updating its own configuration. The ability to update the in-memory representation of the list of Sentinels is already useful to improve reliability. Subscribe to Sentinel events to improve responsiveness === The [Sentinel documentation](/topics/sentinel) shows how clients can connect to Sentinel instances using Pub/Sub in order to subscribe to changes in the Redis instances configurations. This mechanism can be used to speed up the reconfiguration of clients, that is, clients may listen to Pub/Sub in order to know when a configuration change happened, and then run the three-step protocol explained in this document to resolve the new Redis master (or replica) address. However, update messages received via Pub/Sub should not substitute the above procedure, since there is no guarantee that a client is able to receive all the update messages. Additional information === For additional information or to discuss specific aspects of these guidelines, please drop a message to the [Redis Google Group](https://groups.google.com/group/redis-db).
| https://github.com/redis/redis-doc/blob/master//docs/reference/sentinel-clients.md | master | redis | [
... | 0.176183 |
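The list-refresh procedure above amounts to an order-preserving merge of address lists; a minimal sketch (the function name is mine, for illustration only):

```python
def refresh_sentinels(known, discovered):
    # Append every discovered ip:port pair not already in our list
    # at the end of the list, keeping known entries in place.
    merged = list(known)
    for addr in discovered:
        if addr not in merged:
            merged.append(addr)
    return merged

# Example: a second Sentinel is learned and appended at the end.
refresh_sentinels([("10.0.0.1", 26379)],
                  [("10.0.0.2", 26379), ("10.0.0.1", 26379)])
# → [("10.0.0.1", 26379), ("10.0.0.2", 26379)]
```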
Redis versions 4.0 and above support the ARM processor in general, and the Raspberry Pi specifically, as a main platform. Every new release of Redis is tested on the Pi environment, and we update this documentation page with information about supported devices and other useful information. While Redis does run on Android, in the future we look forward to extending our testing efforts to Android to also make it an officially supported platform. We believe that Redis is ideal for IoT and embedded devices for several reasons: \* Redis has a very small memory footprint and modest CPU requirements. It can run in small devices like the Raspberry Pi Zero without impacting the overall performance, using a small amount of memory while delivering good performance for many use cases. \* The data structures of Redis are often an ideal way to model IoT/embedded use cases. Some examples include accumulating time series data, receiving or queuing commands to execute, or responses to send back to the remote servers, and so forth. \* Modeling data inside Redis can be very useful in order to make in-device decisions for appliances that must respond very quickly or when the remote servers are offline. \* Redis can be used as a communication system between the processes running in the device. \* The append-only file storage of Redis is well suited for SSD cards. \* The stream data structure included in Redis versions 5.0 and higher was specifically designed for time series applications and has a very low memory overhead. ## Redis /proc/cpu/alignment requirements Linux on ARM allows trapping unaligned accesses and fixing them inside the kernel in order to continue the execution of the offending program instead of generating a `SIGBUS`. Redis 4.0 and greater are fixed in order to avoid any kind of unaligned access, so there is no need to set a specific value for this kernel configuration. Even with kernel alignment fixing disabled, Redis should run as expected.
## Building Redis on the Pi \* Download Redis version 4.0 or higher. \* Use `make` as usual to create the executable. There is nothing special in the process. The only difference is that by default, Redis uses the `libc` allocator instead of defaulting to `jemalloc` as it does in other Linux-based environments. This is because we believe that for the small use cases inside embedded devices, memory fragmentation is unlikely to be a problem. Moreover `jemalloc` on ARM may not be as well tested as the `libc` allocator. ## Performance Performance testing of Redis was performed on the Raspberry Pi 3 and Pi 1 model B. The difference between the two Pis in terms of delivered performance is quite big. The benchmarks were performed via the loopback interface, since most use cases will probably use Redis from within the device and not via the network. The following numbers were obtained using Redis 4.0. Raspberry Pi 3: \* Test 1: 5 million writes with 1 million keys (even distribution among keys). No persistence, no pipelining. 28,000 ops/sec. \* Test 2: Like test 1 but with pipelining using groups of 8 operations: 80,000 ops/sec. \* Test 3: Like test 1 but with AOF enabled, fsync 1 sec: 23,000 ops/sec \* Test 4: Like test 3, but with an AOF rewrite in progress: 21,000 ops/sec Raspberry Pi 1 model B: \* Test 1: 5 million writes with 1 million keys (even distribution among keys). No persistence, no pipelining. 2,200 ops/sec. \* Test 2: Like test 1 but with pipelining using groups of 8 operations: 8,500 ops/sec. \* Test 3: Like test 1 | https://github.com/redis/redis-doc/blob/master//docs/reference/arm.md | master | redis | [
... | 0.273876 |
21,000 ops/sec Raspberry Pi 1 model B: \* Test 1: 5 million writes with 1 million keys (even distribution among keys). No persistence, no pipelining. 2,200 ops/sec. \* Test 2: Like test 1 but with pipelining using groups of 8 operations: 8,500 ops/sec. \* Test 3: Like test 1 but with AOF enabled, fsync 1 sec: 1,820 ops/sec \* Test 4: Like test 3, but with an AOF rewrite in progress: 1,000 ops/sec The benchmarks above refer to simple `SET`/`GET` operations. The performance is similar for all the fast Redis operations (those not running in linear time). However, sorted sets may show slightly lower numbers. | https://github.com/redis/redis-doc/blob/master//docs/reference/arm.md | master | redis | [
... | 0.114778 |
Command tips are an array of strings. These provide Redis clients with additional information about the command. The information can instruct Redis Cluster clients as to how the command should be executed and its output processed in a clustered deployment. Unlike the command's flags (see the 3rd element of `COMMAND`'s reply), which are strictly internal to the server's operation, tips don't serve any purpose other than being reported to clients. Command tips are arbitrary strings. However, the following sections describe proposed tips and demonstrate the conventions they are likely to adhere to. ## nondeterministic\_output This tip indicates that the command's output isn't deterministic. That means that calls to the command may yield different results with the same arguments and data. That difference could be the result of the command's random nature (e.g., `RANDOMKEY` and `SPOP`); the call's timing (e.g., `TTL`); or generic differences that relate to the server's state (e.g., `INFO` and `CLIENT LIST`). \*\*Note:\*\* Prior to Redis 7.0, this tip was the \_random\_ command flag. ## nondeterministic\_output\_order The existence of this tip indicates that the command's output is deterministic, but its ordering is random (e.g., `HGETALL` and `SMEMBERS`). \*\*Note:\*\* Prior to Redis 7.0, this tip was the \_sort\_for\_script\_ flag. ## request\_policy This tip can help clients determine the shards to send the command to in clustering mode. The default behavior a client should implement for commands without the \_request\_policy\_ tip is as follows: 1. The command doesn't accept key name arguments: the client can execute the command on an arbitrary shard. 1. For commands that accept one or more key name arguments: the client should route the command to a single shard, as determined by the hash slot of the input keys.
In cases where the client should adopt a behavior different than the default, the \_request\_policy\_ tip can be one of: - \*\*all\_nodes:\*\* the client should execute the command on all nodes - masters and replicas alike. An example is the `CONFIG SET` command. This tip is in use by commands that don't accept key name arguments. The command operates atomically per shard. - \*\*all\_shards:\*\* the client should execute the command on all master shards (e.g., the `DBSIZE` command). This tip is in use by commands that don't accept key name arguments. The command operates atomically per shard. - \*\*multi\_shard:\*\* the client should execute the command on several shards. The client should split the inputs according to the hash slots of its input key name arguments. For example, the command `DEL {foo} {foo}1 bar` should be split to `DEL {foo} {foo}1` and `DEL bar`. If the keys are hashed to more than a single slot, the command must be split even if all the slots are managed by the same shard. Examples of such commands include `MSET`, `MGET` and `DEL`. However, note that `SUNIONSTORE` isn't considered as \_multi\_shard\_ because all of its keys must belong to the same hash slot. - \*\*special:\*\* indicates a non-trivial form of the client's request policy, such as the `SCAN` command. ## response\_policy This tip can help clients determine the aggregate they need to compute from the replies of multiple shards in a cluster. The default behavior for commands without a \_response\_policy\_ tip only applies to replies of nested types (i.e., an array, a set, or a map). The client's implementation for the default behavior should be as follows: 1. The command doesn't accept key name arguments: the client can aggregate all replies within a single nested data structure. For example, the array replies we get from calling `KEYS` against all shards. These should be packed into a single array, in no particular order. 1.
For | https://github.com/redis/redis-doc/blob/master//docs/reference/command-tips.md | master | redis | [
... | 0.135766 |
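The `multi_shard` splitting rule can be illustrated with the slot function Redis Cluster uses (CRC16/XMODEM of the key, or of its non-empty hash tag, modulo 16384); `split_by_slot` is a hypothetical client-side helper, not part of any real client library:

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key slots
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # If the key contains a non-empty {hash tag}, only the tag is hashed,
    # which is how multiple keys can be forced into the same slot
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

def split_by_slot(keys):
    # Group a multi-key command's keys by slot: DEL {foo} {foo}1 bar
    # becomes one sub-command per group returned here
    groups = {}
    for k in keys:
        groups.setdefault(key_slot(k), []).append(k)
    return list(groups.values())
```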
should be as follows: 1. The command doesn't accept key name arguments: the client can aggregate all replies within a single nested data structure. For example, the array replies we get from calling `KEYS` against all shards. These should be packed into a single array, in no particular order. 1. For commands that accept one or more key name arguments: the client needs to retain the same order of replies as the input key names. For example, `MGET`'s aggregated reply. The \_response\_policy\_ tip is set for commands that reply with scalar data types, or when it's expected that clients implement a non-default aggregate. This tip can be one of: \* \*\*one\_succeeded:\*\* the client should return success if at least one shard didn't reply with an error. The client should reply with the first non-error reply it obtains. If all shards return an error, the client can reply with any one of these. For example, consider a `SCRIPT KILL` command that's sent to all shards. Although the script should be loaded in all of the cluster's shards, the `SCRIPT KILL` will typically run only on one shard at a given time. \* \*\*all\_succeeded:\*\* the client should return successfully only if there are no error replies. Even a single error reply should disqualify the aggregate and be returned. Otherwise, the client should return one of the non-error replies. As an example, consider the `CONFIG SET`, `SCRIPT FLUSH` and `SCRIPT LOAD` commands. \* \*\*agg\_logical\_and:\*\* the client should return the result of a logical \_AND\_ operation on all replies (only applies to integer replies, usually from commands that return either \_0\_ or \_1\_). Consider the `SCRIPT EXISTS` command as an example. It returns an array of \_0\_'s and \_1\_'s that denote the existence of its given SHA1 sums in the script cache. The aggregated response should be \_1\_ only when all shards have reported that a given script SHA1 sum is in their respective cache.
\* \*\*agg\_logical\_or:\*\* the client should return the result of a logical \_OR\_ operation on all replies (only applies to integer replies, usually from commands that return either \_0\_ or \_1\_). \* \*\*agg\_min:\*\* the client should return the minimal value from the replies (only applies to numerical replies). The aggregate reply from a cluster-wide `WAIT` command, for example, should be the minimal value (number of synchronized replicas) from all shards. \* \*\*agg\_max:\*\* the client should return the maximal value from the replies (only applies to numerical replies). \* \*\*agg\_sum:\*\* the client should return the sum of replies (only applies to numerical replies). Example: `DBSIZE`. \* \*\*special:\*\* this type of tip indicates a non-trivial form of reply policy. `INFO` is an excellent example of that. ## Example ```
redis> command info ping
1) 1) "ping"
   2) (integer) -1
   3) 1) fast
   4) (integer) 0
   5) (integer) 0
   6) (integer) 0
   7) 1) @fast
      2) @connection
   8) 1) "request_policy:all_shards"
      2) "response_policy:all_succeeded"
   9) (empty array)
  10) (empty array)
``` | https://github.com/redis/redis-doc/blob/master//docs/reference/command-tips.md | master | redis | [
... | 0.108871 |
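The scalar aggregates above are straightforward to express; a sketch in Python (the function names are mine, with per-shard errors modeled as exception instances):

```python
def agg_logical_and(replies):
    # e.g. SCRIPT EXISTS: element-wise AND over per-shard arrays of 0/1
    return [int(all(col)) for col in zip(*replies)]

def agg_logical_or(replies):
    # element-wise OR over per-shard arrays of 0/1
    return [int(any(col)) for col in zip(*replies)]

def agg_min(replies):
    # e.g. cluster-wide WAIT: the fewest synchronized replicas seen
    return min(replies)

def agg_sum(replies):
    # e.g. DBSIZE: total number of keys across all master shards
    return sum(replies)

def one_succeeded(replies):
    # e.g. SCRIPT KILL: first non-error reply, or any error if all failed
    for reply in replies:
        if not isinstance(reply, Exception):
            return reply
    return replies[0]
```

For example, `agg_logical_and([[1, 0, 1], [1, 1, 1]])` yields `[1, 0, 1]`: a script counts as cached only when every shard reports it.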
\*\* Note: Support for Gopher was removed in Redis 7.0 \*\* Redis contains an implementation of the Gopher protocol, as specified in [RFC 1436](https://www.ietf.org/rfc/rfc1436.txt). The Gopher protocol was very popular in the late '90s. It is an alternative to the web, and the implementation, both server and client side, is so simple that the Redis server needs just 100 lines of code to implement this support. What do you do with Gopher nowadays? Well, Gopher never \*really\* died, and lately there is a movement to resurrect Gopher's hierarchical content composed of just plain text documents. Some want a simpler internet, others believe that the mainstream internet became too controlled, and it's cool to create an alternative space for people that want a bit of fresh air. Anyway, for the 10th birthday of Redis, we gave it the Gopher protocol as a gift. ## How it works The Redis Gopher support uses the inline protocol of Redis, and specifically two kinds of inline requests that were otherwise illegal: an empty request or any request that starts with "/" (there are no Redis commands starting with such a slash). Normal RESP2/RESP3 requests are completely out of the path of the Gopher protocol implementation and are served as usual as well. If you open a connection to Redis when Gopher is enabled and send it a string like "/foo", if there is a key named "/foo" it is served via the Gopher protocol. In order to create a real Gopher "hole" (the name of a Gopher site in Gopher parlance), you likely need a script such as the one in [https://github.com/antirez/gopher2redis](https://github.com/antirez/gopher2redis). ## SECURITY WARNING If you plan to put Redis on the internet in a publicly accessible address to serve Gopher pages \*\*make sure to set a password\*\* to the instance. Once a password is set: 1. The Gopher server (when enabled, not by default) will still serve content via Gopher. 2.
However, other commands cannot be called before the client authenticates. So use the `requirepass` option to protect your instance. To enable Gopher support use the following configuration line. gopher-enabled yes Accessing keys that are not strings or do not exist will generate an error in Gopher protocol format. | https://github.com/redis/redis-doc/blob/master//docs/reference/gopher.md | master | redis | [
... | 0.174182 |
This document provides information about how Redis reacts to different POSIX signals such as `SIGTERM` and `SIGSEGV`. The information in this document \*\*only applies to Redis version 2.6 or greater\*\*. ## SIGTERM and SIGINT The `SIGTERM` and `SIGINT` signals tell Redis to shut down gracefully. When the server receives this signal, it does not immediately exit. Instead, it schedules a shutdown similar to the one performed by the `SHUTDOWN` command. The scheduled shutdown starts as soon as possible, specifically as soon as the command currently in execution (if any) terminates, with a possible additional delay of 0.1 seconds or less. If the server is blocked by a long-running Lua script, kill the script with `SCRIPT KILL` if possible. The scheduled shutdown will run just after the script is killed or terminates spontaneously. This shutdown process includes the following actions: \* If there are any replicas lagging behind in replication: \* Pause clients attempting to write with `CLIENT PAUSE` and the `WRITE` option. \* Wait up to the configured `shutdown-timeout` (default 10 seconds) for replicas to catch up with the master's replication offset. \* If a background child is saving the RDB file or performing an AOF rewrite, the child process is killed. \* If the AOF is active, Redis calls the `fsync` system call on the AOF file descriptor to flush the buffers on disk. \* If Redis is configured to persist on disk using RDB files, a synchronous (blocking) save is performed. Since the save is synchronous, it doesn't use any additional memory. \* If the server is daemonized, the PID file is removed. \* If the Unix domain socket is enabled, it gets removed. \* The server exits with an exit code of zero. If the RDB file can't be saved, the shutdown fails, and the server continues to run in order to ensure no data loss.
Likewise, if the user just turned on AOF, and the server triggered the first AOF rewrite in order to create the initial AOF file but this file can't be saved, the shutdown fails and the server continues to run. Since Redis 2.6.11, no further attempt to shut down will be made unless a new `SIGTERM` is received or the `SHUTDOWN` command is issued. Since Redis 7.0, the server waits for lagging replicas up to a configurable `shutdown-timeout`, 10 seconds by default, before shutting down. This provides a best effort to minimize the risk of data loss in a situation where no save points are configured and AOF is deactivated. Before version 7.0, shutting down a heavily loaded master node in a diskless setup was more likely to result in data loss. To minimize the risk of data loss in such setups, trigger a manual `FAILOVER` (or `CLUSTER FAILOVER`) to demote the master to a replica and promote one of the replicas to a new master before shutting down a master node. ## SIGSEGV, SIGBUS, SIGFPE and SIGILL The following signals are handled as a Redis crash: \* SIGSEGV \* SIGBUS \* SIGFPE \* SIGILL Once one of these signals is trapped, Redis stops any current operation and performs the following actions: \* Adds a bug report to the log file. This includes a stack trace, dump of registers, and information about the state of clients. \* Since Redis 2.8, a fast memory test is performed as a first check of the reliability of the crashing system. \* If the server was daemonized, the PID file is removed. \* Finally the server unregisters its own signal handler for the received signal and resends the same signal to itself to make sure that | https://github.com/redis/redis-doc/blob/master//docs/reference/signals.md | master | redis | [
... | 0.166414 |
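The pattern of scheduling a shutdown from the handler rather than exiting inside it can be sketched generically (this is an illustration of the pattern, not Redis source code):

```python
import os
import signal

shutdown_scheduled = False

def handle_term(signum, frame):
    # Only record the request here; the main loop performs the graceful
    # shutdown (fsync the AOF, save the RDB, remove the PID file)
    # between commands, never inside the signal handler itself.
    global shutdown_scheduled
    shutdown_scheduled = True

signal.signal(signal.SIGTERM, handle_term)

os.kill(os.getpid(), signal.SIGTERM)  # simulate an external SIGTERM
# ...the command loop would now notice the flag and shut down cleanly
```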
memory test is performed as a first check of the reliability of the crashing system. \* If the server was daemonized, the PID file is removed. \* Finally the server unregisters its own signal handler for the received signal and resends the same signal to itself to make sure that the default action is performed, such as dumping the core on the file system. ## What happens when a child process gets killed When the child performing the Append Only File rewrite gets killed by a signal, Redis handles this as an error and discards the (probably partial or corrupted) AOF file. It will attempt the rewrite again later. When the child performing an RDB save is killed, Redis handles the condition as a more severe error. While the failure of an AOF file rewrite can cause AOF file enlargement, failed RDB file creation reduces durability. As a result of the child producing the RDB file being killed by a signal, or when the child exits with an error (non-zero exit code), Redis enters a special error condition where no further write command is accepted. \* Redis will continue to reply to read commands. \* Redis will reply to all write commands with a `MISCONFIG` error. This error condition will persist until it becomes possible to create an RDB file successfully. ## Kill the RDB child process without errors Sometimes the user may want to kill the RDB-saving child process without generating an error. Since Redis version 2.6.10, this can be done using the signal `SIGUSR1`. This signal is handled in a special way: it kills the child process like any other signal, but the parent process will not detect this as a critical error and will continue to serve write requests. | https://github.com/redis/redis-doc/blob/master//docs/reference/signals.md | master | redis | [
... | 0.154259 |
Welcome to the \*\*Redis Cluster Specification\*\*. Here you'll find information about the algorithms and design rationales of Redis Cluster. This document is a work in progress as it is continuously synchronized with the actual implementation of Redis. ## Main properties and rationales of the design ### Redis Cluster goals Redis Cluster is a distributed implementation of Redis with the following goals in order of importance in the design: \* High performance and linear scalability up to 1000 nodes. There are no proxies, asynchronous replication is used, and no merge operations are performed on values. \* Acceptable degree of write safety: the system tries (in a best-effort way) to retain all the writes originating from clients connected with the majority of the master nodes. Usually there are small windows where acknowledged writes can be lost. Windows to lose acknowledged writes are larger when clients are in a minority partition. \* Availability: Redis Cluster is able to survive partitions where the majority of the master nodes are reachable and there is at least one reachable replica for every master node that is no longer reachable. Moreover using \*replicas migration\*, masters no longer replicated by any replica will receive one from a master which is covered by multiple replicas. What is described in this document is implemented in Redis 3.0 or greater. ### Implemented subset Redis Cluster implements all the single key commands available in the non-distributed version of Redis. Commands performing complex multi-key operations like set unions and intersections are implemented for cases where all of the keys involved in the operation hash to the same slot. Redis Cluster implements a concept called \*\*hash tags\*\* that can be used to force certain keys to be stored in the same hash slot. However, during manual resharding, multi-key operations may become unavailable for some time while single-key operations are always available. 
Redis Cluster does not support multiple databases like the standalone version of Redis. We only support database `0`; the `SELECT` command is not allowed. ## Client and Server roles in the Redis cluster protocol In Redis Cluster, nodes are responsible for holding the data and maintaining the state of the cluster, including mapping keys to the right nodes. Cluster nodes are also able to auto-discover other nodes, detect non-working nodes, and promote replica nodes to master when needed in order to continue to operate when a failure occurs. To perform their tasks all the cluster nodes are connected using a TCP bus and a binary protocol, called the \*\*Redis Cluster Bus\*\*. Every node is connected to every other node in the cluster using the cluster bus. Nodes use a gossip protocol to propagate information about the cluster in order to discover new nodes, to send ping packets to make sure all the other nodes are working properly, and to send cluster messages needed to signal specific conditions. The cluster bus is also used in order to propagate Pub/Sub messages across the cluster and to orchestrate manual failovers when requested by users (manual failovers are failovers which are not initiated by the Redis Cluster failure detector, but by the system administrator directly). Since cluster nodes are not able to proxy requests, clients may be redirected to other nodes using redirection errors `-MOVED` and `-ASK`. The client is in theory free to send requests to all the nodes in the cluster, getting redirected if needed, so the client is not required to hold the state of the cluster. However, clients that are able to cache the map between keys and nodes can improve performance in a considerable way. ### Write safety Redis Cluster uses | https://github.com/redis/redis-doc/blob/master//docs/reference/cluster-spec.md | master | redis | [
... | 0.233376 |
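A client that caches the keys→nodes map typically refreshes it from `-MOVED` redirections; a minimal parsing sketch, assuming the textual form `MOVED <slot> <host>:<port>` used by the cluster:

```python
def parse_moved(err: str):
    # "MOVED 3999 127.0.0.1:6381" -> (3999, "127.0.0.1", 6381)
    _, slot, addr = err.split()
    host, _, port = addr.rpartition(":")
    return int(slot), host, int(port)

# A client would refresh its cached slot map on every such redirection:
slot_map = {}
slot, host, port = parse_moved("MOVED 3999 127.0.0.1:6381")
slot_map[slot] = (host, port)
```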
all the nodes in the cluster, getting redirected if needed, so the client is not required to hold the state of the cluster. However clients that are able to cache the map between keys and nodes can improve the performance in a sensible way. ### Write safety Redis Cluster uses asynchronous replication between nodes, and \*\*last failover wins\*\* implicit merge function. This means that the last elected master dataset eventually replaces all the other replicas. There is always a window of time when it is possible to lose writes during partitions. However these windows are very different in the case of a client that is connected to the majority of masters, and a client that is connected to the minority of masters. Redis Cluster tries harder to retain writes that are performed by clients connected to the majority of masters, compared to writes performed in the minority side. The following are examples of scenarios that lead to loss of acknowledged writes received in the majority partitions during failures: 1. A write may reach a master, but while the master may be able to reply to the client, the write may not be propagated to replicas via the asynchronous replication used between master and replica nodes. If the master dies without the write reaching the replicas, the write is lost forever if the master is unreachable for a long enough period that one of its replicas is promoted. This is usually hard to observe in the case of a total, sudden failure of a master node since masters try to reply to clients (with the acknowledge of the write) and replicas (propagating the write) at about the same time. However it is a real world failure mode. 2. Another theoretically possible failure mode where writes are lost is the following: \* A master is unreachable because of a partition. \* It gets failed over by one of its replicas. \* After some time it may be reachable again. 
\* A client with an out-of-date routing table may write to the old master before it is converted into a replica (of the new master) by the cluster. The second failure mode is unlikely to happen because master nodes unable to communicate with the majority of the other masters for enough time to be failed over will no longer accept writes, and when the partition is fixed writes are still refused for a small amount of time to allow other nodes to inform about configuration changes. This failure mode also requires that the client's routing table has not yet been updated. Writes targeting the minority side of a partition have a larger window in which to get lost. For example, Redis Cluster loses a non-trivial number of writes on partitions where there is a minority of masters and at least one or more clients, since all the writes sent to the masters may potentially get lost if the masters are failed over in the majority side. Specifically, for a master to be failed over it must be unreachable by the majority of masters for at least `NODE\_TIMEOUT`, so if the partition is fixed before that time, no writes are lost. When the partition lasts for more than `NODE\_TIMEOUT`, all the writes performed in the minority side up to that point may be lost. However the minority side of a Redis Cluster will start refusing writes as soon as `NODE\_TIMEOUT` time has elapsed without contact with the majority, so there is a maximum window after which the minority becomes no longer available. Hence, no writes are accepted or lost after that time. ### Availability | https://github.com/redis/redis-doc/blob/master//docs/reference/cluster-spec.md | master | redis | [
... | 0.113251 |
Redis Cluster is not available in the minority side of the partition. In the majority side of the partition, assuming that there are at least the majority of masters and a replica for every unreachable master, the cluster becomes available again after `NODE_TIMEOUT` time plus a few more seconds required for a replica to get elected and fail over its master (failovers are usually executed in a matter of 1 or 2 seconds).

This means that Redis Cluster is designed to survive failures of a few nodes in the cluster, but it is not a suitable solution for applications that require availability in the event of large net splits.

In the example of a cluster composed of N master nodes where every node has a single replica, the majority side of the cluster will remain available as long as a single node is partitioned away, and will remain available with a probability of `1-(1/(N*2-1))` when two nodes are partitioned away (after the first node fails we are left with `N*2-1` nodes in total, and the probability of the only master without a replica to fail is `1/(N*2-1)`).

For example, in a cluster with 5 nodes and a single replica per node, there is a `1/(5*2-1) = 11.11%` probability that after two nodes are partitioned away from the majority, the cluster will no longer be available.

Thanks to a Redis Cluster feature called **replicas migration** the Cluster availability is improved in many real world scenarios by the fact that replicas migrate to orphaned masters (masters no longer having replicas). So at every successful failure event, the cluster may reconfigure the replicas layout in order to better resist the next failure.
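The probability arithmetic above can be checked with a tiny sketch (the function name is ours, purely illustrative):

```python
def unavailability_after_two_failures(n_masters: int) -> float:
    """Probability that losing a second node takes the cluster down,
    given N masters each with one replica and one node already lost.

    After the first failure, N*2-1 nodes remain and exactly one master
    is left without a replica; the cluster becomes unavailable only if
    that specific node is the second one partitioned away."""
    remaining_nodes = n_masters * 2 - 1
    return 1 / remaining_nodes

# For a 5-master cluster with one replica per master:
print(f"{unavailability_after_two_failures(5):.2%}")  # 11.11%
```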
### Performance

In Redis Cluster nodes don't proxy commands to the right node in charge for a given key, but instead they redirect clients to the right nodes serving a given portion of the key space.

Eventually clients obtain an up-to-date representation of the cluster and which node serves which subset of keys, so during normal operations clients directly contact the right nodes in order to send a given command.

Because of the use of asynchronous replication, nodes do not wait for other nodes' acknowledgment of writes (if not explicitly requested using the `WAIT` command).

Also, because multi-key commands are only limited to *near* keys, data is never moved between nodes except when resharding.

Normal operations are handled exactly as in the case of a single Redis instance. This means that in a Redis Cluster with N master nodes you can expect the same performance as a single Redis instance multiplied by N as the design scales linearly. At the same time the query is usually performed in a single round trip, since clients usually retain persistent connections with the nodes, so latency figures are also the same as the single standalone Redis node case.

Very high performance and scalability while preserving weak but reasonable forms of data safety and availability is the main goal of Redis Cluster.

### Why merge operations are avoided

The Redis Cluster design avoids conflicting versions of the same key-value pair in multiple nodes, as in the case of the Redis data model this is not always desirable.
Values in Redis are often very large; it is common to see lists or sorted sets with millions of elements. Also data types are semantically complex. Transferring and merging these kinds of values can be a major bottleneck and/or may require the non-trivial involvement of application-side logic, additional memory to store meta-data, and so forth.

There are no strict technological limits here. CRDTs or synchronously replicated state machines can model complex data types similar to Redis. However, the actual run time behavior of such systems would not be similar to Redis Cluster. Redis Cluster was designed in order to cover the exact use cases of the non-clustered Redis version.

## Overview of Redis Cluster main components

### Key distribution model

The cluster's key space is split into 16384 slots, effectively setting an upper limit for the cluster size of 16384 master nodes (however, the suggested max size of nodes is on the order of ~1000 nodes).

Each master node in a cluster handles a subset of the 16384 hash slots. The cluster is **stable** when there is no cluster reconfiguration in progress (i.e. where hash slots are being moved from one node to another). When the cluster is stable, a single hash slot will be served by a single node (however the serving node can have one or more replicas that will replace it in the case of net splits or failures, and that can be used in order to scale read operations where reading stale data is acceptable).
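As an illustration of how the slot space is typically divided among masters when a cluster is created, here is a small Python sketch (the helper name and the even-split policy are ours; actual cluster-creation tools may split differently):

```python
def even_slot_ranges(n_masters: int, total_slots: int = 16384):
    """Split the slot space into n contiguous ranges, one per master.

    The first `remainder` masters each receive one extra slot so that
    all 16384 slots end up assigned."""
    base, extra = divmod(total_slots, n_masters)
    ranges, start = [], 0
    for i in range(n_masters):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(even_slot_ranges(3))  # [(0, 5461), (5462, 10922), (10923, 16383)]
```

Every slot is covered exactly once, which is what a client expects from a well-configured cluster.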
The base algorithm used to map keys to hash slots is the following (read the next paragraph for the hash tag exception to this rule):

```
HASH_SLOT = CRC16(key) mod 16384
```

The CRC16 is specified as follows:

* Name: XMODEM (also known as ZMODEM or CRC-16/ACORN)
* Width: 16 bit
* Poly: 1021 (That is actually x^16 + x^12 + x^5 + 1)
* Initialization: 0000
* Reflect Input byte: False
* Reflect Output CRC: False
* Xor constant to output CRC: 0000
* Output for "123456789": 31C3

14 out of 16 CRC16 output bits are used (this is why there is a modulo 16384 operation in the formula above). In our tests CRC16 behaved remarkably well in distributing different kinds of keys evenly across the 16384 slots.

**Note**: A reference implementation of the CRC16 algorithm used is available in Appendix A of this document.

### Hash tags

There is an exception for the computation of the hash slot that is used in order to implement **hash tags**. Hash tags are a way to ensure that multiple keys are allocated in the same hash slot. This is used in order to implement multi-key operations in Redis Cluster.

To implement hash tags, the hash slot for a key is computed in a slightly different way in certain conditions. If the key contains a "{...}" pattern, only the substring between `{` and `}` is hashed in order to obtain the hash slot. However, since it is possible that there are multiple occurrences of `{` or `}`, the algorithm is well specified by the following rules:

* IF the key contains a `{` character.
* AND IF there is a `}` character to the right of `{`.
* AND IF there are one or more characters between the first occurrence of `{` and the first occurrence of `}`.

Then instead of hashing the key, only what is between the first occurrence of `{` and the following first occurrence of `}` is hashed.
Examples:

* The two keys `{user1000}.following` and `{user1000}.followers` will hash to the same hash slot since only the substring `user1000` will be hashed in order to compute the hash slot.
* For the key `foo{}{bar}` the whole key will be hashed as usual, since the first occurrence of `{` is followed by `}` on the right without characters in the middle.
* For the key `foo{{bar}}zap` the substring `{bar` will be hashed, because it is the substring between the first occurrence of `{` and the first occurrence of `}` on its right.
* For the key `foo{bar}{zap}` the substring `bar` will be hashed, since the algorithm stops at the first valid or invalid (without bytes inside) match of `{` and `}`.
* What follows from the algorithm is that if the key starts with `{}`, it is guaranteed to be hashed as a whole. This is useful when using binary data as key names.

#### Glob-style patterns

Commands accepting a glob-style pattern, including `KEYS`, `SCAN` and `SORT`, are optimized for patterns that imply a single slot. This means that if all keys that can match a pattern must belong to a specific slot, only this slot is searched for keys matching the pattern. The pattern slot optimization was introduced in Redis 8.0.

The optimization kicks in when the pattern meets the following conditions:

* the pattern contains a hashtag,
* there are no wildcards or escape characters before the hashtag, and
* the hashtag within curly braces doesn't contain any wildcards or escape characters.

For example, `SCAN 0 MATCH {abc}*` can successfully recognize the hashtag and scans only the slot corresponding to `abc`.
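The hashtag-recognition conditions above can be sketched as follows (an illustrative client-side check of our own; the real logic lives inside the Redis server):

```python
def pattern_hashtag(pattern: str):
    """Return the hashtag implied by a glob-style pattern, or None when
    the single-slot optimization cannot apply."""
    special = set("*?[\\")           # glob wildcards and the escape char
    s = pattern.find("{")
    # No hashtag, or a wildcard/escape appears before it.
    if s == -1 or special & set(pattern[:s]):
        return None
    e = pattern.find("}", s + 1)
    # No closing brace, empty tag, or wildcards/escapes inside the tag.
    if e == -1 or e == s + 1 or special & set(pattern[s + 1:e]):
        return None
    return pattern[s + 1:e]

print(pattern_hashtag("{abc}*"))    # abc
print(pattern_hashtag("*{abc}"))    # None (wildcard before the tag)
print(pattern_hashtag("{a*c}"))     # None (wildcard inside the tag)
```

When a tag is found, only the slot of that tag needs to be scanned; otherwise the scan must cover all slots.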
However, the patterns `*{abc}`, `{a*c}`, or `{a\*bc}` cannot recognize the hashtag, so all slots need to be scanned.

#### Hash slot example code

Adding the hash tags exception, the following is an implementation of the `HASH_SLOT` function in Ruby and C language.

Ruby example code:

```ruby
def HASH_SLOT(key)
    s = key.index "{"
    if s
        e = key.index "}",s+1
        if e && e != s+1
            key = key[s+1..e-1]
        end
    end
    crc16(key) % 16384
end
```

C example code:

```c
unsigned int HASH_SLOT(char *key, int keylen) {
    int s, e; /* start-end indexes of { and } */

    /* Search the first occurrence of '{'. */
    for (s = 0; s < keylen; s++)
        if (key[s] == '{') break;

    /* No '{' ? Hash the whole key. This is the base case. */
    if (s == keylen) return crc16(key,keylen) & 16383;

    /* '{' found? Check if we have the corresponding '}'. */
    for (e = s+1; e < keylen; e++)
        if (key[e] == '}') break;

    /* No '}' or nothing between {} ? Hash the whole key. */
    if (e == keylen || e == s+1) return crc16(key,keylen) & 16383;

    /* If we are here there is both a { and a } on its right. Hash
     * what is in the middle between { and }. */
    return crc16(key+s+1,e-s-1) & 16383;
}
```
### Cluster node attributes

Every node has a unique name in the cluster. The node name is the hex representation of a 160 bit random number, obtained the first time a node is started (usually using /dev/urandom). The node will save its ID in the node configuration file, and will use the same ID forever, or at least as long as the node configuration file is not deleted by the system administrator, or a *hard reset* is requested via the `CLUSTER RESET` command.

The node ID is used to identify every node across the whole cluster. It is possible for a given node to change its IP address without any need to also change the node ID. The cluster is also able to detect the change in IP/port and reconfigure using the gossip protocol running over the cluster bus.

The node ID is not the only information associated with each node, but is the only one that is always globally consistent. Every node also has the following set of information associated. Some information is about the cluster configuration detail of this specific node, and is eventually consistent across the cluster. Some other information, like the last time a node was pinged, is instead local to each node.

Every node maintains the following information about other nodes that it is aware of in the cluster: the node ID, IP and port of the node, a set of flags, what is the master of the node if it is flagged as `replica`, the last time the node was pinged and the last time the pong was received, the current *configuration epoch* of the node (explained later in this specification), the link state and finally the set of hash slots served.

A detailed [explanation of all the node fields](https://redis.io/commands/cluster-nodes) is described in the `CLUSTER NODES` documentation.

The `CLUSTER NODES` command can be sent to any node in the cluster and provides the state of the cluster and the information for each node according to the local view the queried node has of the cluster.
The following is sample output of the `CLUSTER NODES` command sent to a master node in a small cluster of three nodes.

```
$ redis-cli cluster nodes
d1861060fe6a534d42d8a19aeb36600e18785e04 127.0.0.1:6379 myself - 0 1318428930 1 connected 0-1364
3886e65cc906bfd9b1f7e7bde468726a052d1dae 127.0.0.1:6380 master - 1318428930 1318428931 2 connected 1365-2729
d289c575dcbc4bdd2931585fd4339089e461a27d 127.0.0.1:6381 master - 1318428931 1318428931 3 connected 2730-4095
```

In the above listing the different fields are in order: node id, address:port, flags, last ping sent, last pong received, configuration epoch, link state, slots. Details about the above fields will be covered as soon as we talk of specific parts of Redis Cluster.

### The cluster bus

Every Redis Cluster node has an additional TCP port for receiving incoming connections from other Redis Cluster nodes. This port is derived by adding 10000 to the data port, or it can be specified with the `cluster-port` config.

Example 1: If a Redis node is listening for client connections on port 6379, and you do not add the cluster-port parameter in redis.conf, the Cluster bus port 16379 will be opened.

Example 2: If a Redis node is listening for client connections on port 6379, and you set cluster-port 20000 in redis.conf, the Cluster bus port 20000 will be opened.

Node-to-node communication happens exclusively using the Cluster bus and the Cluster bus protocol: a binary protocol composed of frames of different types and sizes. The Cluster bus binary protocol is not publicly documented since it is not intended for external software devices to talk with Redis Cluster nodes using this protocol.
However, you can obtain more details about the Cluster bus protocol by reading the `cluster.h` and `cluster.c` files in the Redis Cluster source code.

### Cluster topology

Redis Cluster is a full mesh where every node is connected with every other node using a TCP connection. In a cluster of N nodes, every node has N-1 outgoing TCP connections, and N-1 incoming connections. These TCP connections are kept alive all the time and are not created on demand. When a node expects a pong reply in response to a ping in the cluster bus, before waiting long enough to mark the node as unreachable, it will try to refresh the connection with the node by reconnecting from scratch.

While Redis Cluster nodes form a full mesh, **nodes use a gossip protocol and a configuration update mechanism in order to avoid exchanging too many messages between nodes during normal conditions**, so the number of messages exchanged is not exponential.

### Node handshake

Nodes always accept connections on the cluster bus port, and even reply to pings when received, even if the pinging node is not trusted. However, all other packets will be discarded by the receiving node if the sending node is not considered part of the cluster.

A node will accept another node as part of the cluster only in two ways:

* If a node presents itself with a `MEET` message (`CLUSTER MEET` command). A meet message is exactly like a `PING` message, but forces the receiver to accept the node as part of the cluster.
  Nodes will send `MEET` messages to other nodes **only if** the system administrator requests this via the following command:

  ```
  CLUSTER MEET ip port
  ```

* A node will also register another node as part of the cluster if a node that is already trusted will gossip about this other node. So if A knows B, and B knows C, eventually B will send gossip messages to A about C. When this happens, A will register C as part of the network, and will try to connect with C.

This means that as long as we join nodes in any connected graph, they'll eventually form a fully connected graph automatically. This means that the cluster is able to auto-discover other nodes, but only if there is a trusted relationship that was forced by the system administrator.

This mechanism makes the cluster more robust but prevents different Redis clusters from accidentally mixing after change of IP addresses or other network related events.

## Redirection and resharding

### MOVED Redirection

A Redis client is free to send queries to every node in the cluster, including replica nodes. The node will analyze the query, and if it is acceptable (that is, only a single key is mentioned in the query, or the multiple keys mentioned all hash to the same hash slot) it will look up which node is responsible for the hash slot where the key or keys belong.

If the hash slot is served by the node, the query is simply processed, otherwise the node will check its internal hash slot to node map, and will reply to the client with a MOVED error, like in the following example:

```
GET x
-MOVED 3999 127.0.0.1:6381
```

The error includes the hash slot of the key (3999) and the endpoint:port of the instance that can serve the query.
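A client typically splits this error into a slot and an address before reissuing the command; a minimal sketch (the helper is ours, not part of any particular client library):

```python
def parse_moved(err: str):
    """Split a '-MOVED <slot> <endpoint>:<port>' error into its parts.

    An empty endpoint (e.g. '-MOVED 3999 :6380') means: reuse the
    endpoint of the current connection, but switch to the given port."""
    kind, slot, addr = err.lstrip("-").split()
    assert kind == "MOVED"
    endpoint, _, port = addr.rpartition(":")
    return int(slot), endpoint or None, int(port)

print(parse_moved("-MOVED 3999 127.0.0.1:6381"))  # (3999, '127.0.0.1', 6381)
print(parse_moved("-MOVED 3999 :6380"))           # (3999, None, 6380)
```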
The client needs to reissue the query to the specified node's endpoint address and port. The endpoint can be either an IP address, a hostname, or it can be empty (e.g. `-MOVED 3999 :6380`). An empty endpoint indicates that the server node has an unknown endpoint, and the client should send the next request to the same endpoint as the current request but with the provided port.

Note that even if the client waits a long time before reissuing the query, and in the meantime the cluster configuration changed, the destination node will reply again with a MOVED error if the hash slot 3999 is now served by another node. The same happens if the contacted node had no updated information.

So while from the point of view of the cluster nodes are identified by IDs, we try to simplify our interface with the client by just exposing a map between hash slots and Redis nodes identified by endpoint:port pairs.

The client is not required to, but should try to, memorize that hash slot 3999 is served by 127.0.0.1:6381. This way once a new command needs to be issued it can compute the hash slot of the target key and have a greater chance of choosing the right node.

An alternative is to just refresh the whole client-side cluster layout using the `CLUSTER SHARDS`, or the deprecated `CLUSTER SLOTS`, command when a MOVED redirection is received. When a redirection is encountered, it is likely multiple slots were reconfigured rather than just one, so updating the client configuration as soon as possible is often the best strategy.
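A sketch of that caching strategy, assuming a hypothetical `fetch_layout` callable that wraps a `CLUSTER SHARDS` round trip (both names are ours):

```python
class SlotCache:
    """Client-side map of hash slot -> 'host:port', refreshed on MOVED."""

    def __init__(self, fetch_layout):
        self.fetch_layout = fetch_layout   # () -> {slot: "host:port"}
        self.slots = fetch_layout()

    def on_moved(self, slot: int, addr: str):
        # A MOVED reply usually means many slots changed at once
        # (e.g. after a failover), so refetch the whole layout rather
        # than patching only the reported slot.
        self.slots = self.fetch_layout()
        # Trust the redirection itself even if the refetched layout
        # is momentarily stale.
        self.slots[slot] = addr

# Example with a stubbed layout fetcher:
cache = SlotCache(lambda: {3999: "127.0.0.1:6379"})
cache.on_moved(3999, "127.0.0.1:6381")
print(cache.slots[3999])  # 127.0.0.1:6381
```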
Note that when the Cluster is stable (no ongoing changes in the configuration), eventually all the clients will obtain a map of hash slots -> nodes, making the cluster efficient, with clients directly addressing the right nodes without redirections, proxies or other single point of failure entities.

A client **must also be able to handle -ASK redirections** that are described later in this document, otherwise it is not a complete Redis Cluster client.

### Live reconfiguration

Redis Cluster supports the ability to add and remove nodes while the cluster is running. Adding or removing a node is abstracted into the same operation: moving a hash slot from one node to another. This means that the same basic mechanism can be used in order to rebalance the cluster, add or remove nodes, and so forth.

* To add a new node to the cluster an empty node is added to the cluster and some set of hash slots are moved from existing nodes to the new node.
* To remove a node from the cluster the hash slots assigned to that node are moved to other existing nodes.
* To rebalance the cluster a given set of hash slots are moved between nodes.

The core of the implementation is the ability to move hash slots around. From a practical point of view a hash slot is just a set of keys, so what Redis Cluster really does during *resharding* is to move keys from one instance to another. Moving a hash slot means moving all the keys that happen to hash into this hash slot.

To understand how this works we need to show the `CLUSTER` subcommands that are used to manipulate the slots translation table in a Redis Cluster node.
The following subcommands are available (among others not useful in this case):

* `CLUSTER ADDSLOTS` slot1 [slot2] ... [slotN]
* `CLUSTER DELSLOTS` slot1 [slot2] ... [slotN]
* `CLUSTER ADDSLOTSRANGE` start-slot1 end-slot1 [start-slot2 end-slot2] ... [start-slotN end-slotN]
* `CLUSTER DELSLOTSRANGE` start-slot1 end-slot1 [start-slot2 end-slot2] ... [start-slotN end-slotN]
* `CLUSTER SETSLOT` slot NODE node
* `CLUSTER SETSLOT` slot MIGRATING node
* `CLUSTER SETSLOT` slot IMPORTING node

The first four commands, `ADDSLOTS`, `DELSLOTS`, `ADDSLOTSRANGE` and `DELSLOTSRANGE`, are simply used to assign (or remove) slots to a Redis node. Assigning a slot means to tell a given master node that it will be in charge of storing and serving content for the specified hash slot.

After the hash slots are assigned they will propagate across the cluster using the gossip protocol, as specified later in the *configuration propagation* section.

The `ADDSLOTS` and `ADDSLOTSRANGE` commands are usually used when a new cluster is created from scratch to assign each master node a subset of all the 16384 hash slots available.

The `DELSLOTS` and `DELSLOTSRANGE` commands are mainly used for manual modification of a cluster configuration or for debugging tasks: in practice they are rarely used.

The `SETSLOT` subcommand is used to assign a slot to a specific node ID if the `SETSLOT NODE` form is used. Otherwise the slot can be set in the two special states `MIGRATING` and `IMPORTING`. Those two special states are used in order to migrate a hash slot from one node to another.
* When a slot is set as MIGRATING, the node will accept all queries that are about this hash slot, but only if the key in question exists, otherwise the query is forwarded using a `-ASK` redirection to the node that is the target of the migration.
* When a slot is set as IMPORTING, the node will accept all queries that are about this hash slot, but only if the request is preceded by an `ASKING` command. If the `ASKING` command was not given by the client, the query is redirected to the real hash slot owner via a `-MOVED` redirection error, as would happen normally.

Let's make this clearer with an example of hash slot migration. Assume that we have two Redis master nodes, called A and B. We want to move hash slot 8 from A to B, so we issue commands like this:

* We send B: CLUSTER SETSLOT 8 IMPORTING A
* We send A: CLUSTER SETSLOT 8 MIGRATING B

All the other nodes will continue to point clients to node "A" every time they are queried with a key that belongs to hash slot 8, so what happens is that:

* All queries about existing keys are processed by "A".
* All queries about non-existing keys in A are processed by "B", because "A" will redirect clients to "B".

This way we no longer create new keys in "A". In the meantime, `redis-cli`, used during reshardings and Redis Cluster configuration, will migrate existing keys in hash slot 8 from A to B. This is performed using the following command:

```
CLUSTER GETKEYSINSLOT slot count
```

The above command will return `count` keys in the specified hash slot.
For keys returned, `redis-cli` sends node "A" a `MIGRATE` command that will migrate the specified keys from A to B in an atomic way (both instances are locked for the, usually very small, time needed to migrate keys, so there are no race conditions). This is how `MIGRATE` works:

```
MIGRATE target_host target_port "" target_database id timeout KEYS key1 key2 ...
```

`MIGRATE` will connect to the target instance, send a serialized version of the key, and once an OK code is received, the old key from its own dataset will be deleted. From the point of view of an external client a key exists either in A or B at any given time.

In Redis Cluster there is no need to specify a database other than 0, but `MIGRATE` is a general command that can be used for other tasks not involving Redis Cluster. `MIGRATE` is optimized to be as fast as possible even when moving complex keys such as long lists, but in Redis Cluster reconfiguring the cluster where big keys are present is not considered a wise procedure if there are latency constraints in the application using the database.

When the migration process is finally finished, the `SETSLOT <slot> NODE <node-id>` command is sent to the two nodes involved in the migration in order to set the slots to their normal state again. The same command is usually sent to all other nodes to avoid waiting for the natural propagation of the new configuration across the cluster.

### ASK redirection

In the previous section, we briefly talked about ASK redirection. Why can't we simply use MOVED redirection? Because MOVED means that we think the hash slot is permanently served by a different node and the next queries should be tried against the specified node, while ASK means to send only the next query to the specified node.
This is needed because the next query about hash slot 8 can be about a key that is still in A, so we always want the client to try A and then B if needed. Since this happens only for one hash slot out of the 16384 available, the performance hit on the cluster is acceptable.

We need to force that client behavior, so to make sure that clients will only try node B after A was tried, node B will only accept queries of a slot that is set as IMPORTING if the client sends the `ASKING` command before sending the query. Basically the `ASKING` command sets a one-time flag on the client that forces a node to serve a query about an IMPORTING slot.

The full semantics of ASK redirection from the point of view of the client is as follows:

* If ASK redirection is received, send only the query that was redirected to the specified node but continue sending subsequent queries to the old node.
* Start the redirected query with the `ASKING` command.
* Don't yet update local client tables to map hash slot 8 to B.

Once hash slot 8 migration is completed, A will send a MOVED message and the client may permanently map hash slot 8 to the new endpoint and port pair. Note that if a buggy client performs the map earlier this is not a problem, since it will not send the `ASKING` command before issuing the query, so B will redirect the client to A using a MOVED redirection error.
Slots migration is explained in similar terms but with different wording (for the sake of redundancy in the documentation) in the `CLUSTER SETSLOT` command documentation.

### Client connections and redirection handling

To be efficient, Redis Cluster clients maintain a map of the current slot configuration. However, this configuration is not *required* to be up to date. When contacting the wrong node results in a redirection, the client can update its internal slot map accordingly.

Clients usually need to fetch a complete list of slots and mapped node addresses in two different situations:

* At startup, to populate the initial slots configuration.
* When the client receives a `MOVED` redirection.

Note that a client may handle the `MOVED` redirection by updating just the moved slot in its table; however this is usually not efficient, because often the configuration of multiple slots will be modified at once (for example, if a replica is promoted to master, all of the slots served by the old master will be remapped). It is much simpler to react to a `MOVED` redirection by fetching the full map of slots to nodes from scratch.

A client can issue a `CLUSTER SLOTS` command to retrieve an array of slot ranges and the associated master and replica nodes serving the specified ranges.
The following is an example of output of `CLUSTER SLOTS`:

```
127.0.0.1:7000> cluster slots
1) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 7001
   4) 1) "127.0.0.1"
      2) (integer) 7004
2) 1) (integer) 0
   2) (integer) 5460
   3) 1) "127.0.0.1"
      2) (integer) 7000
   4) 1) "127.0.0.1"
      2) (integer) 7003
3) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "127.0.0.1"
      2) (integer) 7002
   4) 1) "127.0.0.1"
      2) (integer) 7005
```

The first two sub-elements of every element of the returned array are the start and end slots of the range. The additional elements represent address-port pairs. The first address-port pair is the master serving the slot, and the additional address-port pairs are the replicas serving the same slot. Replicas will be listed only when not in an error condition (i.e., when their FAIL flag is not set).

The first element in the output above says that slots from 5461 to 10922 (start and end included) are served by 127.0.0.1:7001, and it is possible to scale read-only load by contacting the replica at 127.0.0.1:7004.

`CLUSTER SLOTS` is not guaranteed to return ranges that cover the full 16384 slots if the cluster is misconfigured, so clients should initialize the slots configuration map filling the target nodes with NULL objects, and report an error if the user tries to execute commands about keys that belong to unassigned slots.

Before returning an error to the caller when a slot is found to be unassigned, the client should try to fetch the slots configuration again to check if the cluster is now configured properly.

### Multi-keys operations

Using hash tags, clients are free to use multi-key operations. For example the following operation is valid:

    MSET {user:1000}.name Angela {user:1000}.surname White

Multi-key operations may become unavailable when a resharding of the hash slot the keys belong to is in progress.
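The hash-tag behavior the `MSET` example relies on can be reproduced with the key-to-slot function: CRC16 (XMODEM variant) of the key, or of the first non-empty `{...}` hash tag if present, modulo 16384. A self-contained sketch:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Hash only the first non-empty {...} hash tag, if any."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:   # an empty {} is ignored
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

Because both keys share the `{user:1000}` hash tag, they hash to the same slot, which is what makes the multi-key `MSET` above valid.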
More specifically, even during a resharding, the multi-key operations targeting keys that all exist and all still hash to the same slot (either the source or destination node) are still available.
Operations on keys that don't exist, or that are (during the resharding) split between the source and destination nodes, will generate a `-TRYAGAIN` error. The client can try the operation again after some time, or report back the error. As soon as migration of the specified hash slot has terminated, all multi-key operations are available again for that hash slot.

### Scaling reads using replica nodes

Normally replica nodes will redirect clients to the authoritative master for the hash slot involved in a given command; however, clients can use replicas in order to scale reads using the `READONLY` command.

`READONLY` tells a Redis Cluster replica node that the client is ok reading possibly stale data and is not interested in running write queries.

When the connection is in readonly mode, the cluster will send a redirection to the client only if the operation involves keys not served by the replica's master node. This may happen because:

1. The client sent a command about hash slots never served by the master of this replica.
2. The cluster was reconfigured (for example resharded) and the replica is no longer able to serve commands for a given hash slot.

When this happens the client should update its hash slot map as explained in the previous sections.

The readonly state of the connection can be cleared using the `READWRITE` command.

## Fault Tolerance

### Heartbeat and gossip messages

Redis Cluster nodes continuously exchange ping and pong packets. Those two kinds of packets have the same structure, and both carry important configuration information. The only actual difference is the message type field. We'll refer to the sum of ping and pong packets as *heartbeat packets*.
Usually nodes send ping packets that will trigger the receivers to reply with pong packets. However this is not necessarily true. It is possible for nodes to just send pong packets to send information to other nodes about their configuration, without triggering a reply. This is useful, for example, in order to broadcast a new configuration as soon as possible.

Usually a node will ping a few random nodes every second, so that the total number of ping packets sent (and pong packets received) by each node is a constant amount regardless of the number of nodes in the cluster. However every node makes sure to ping every other node that hasn't sent a ping or received a pong for longer than half the `NODE_TIMEOUT` time. Before `NODE_TIMEOUT` has elapsed, nodes also try to reconnect the TCP link with another node, to make sure nodes are not believed to be unreachable only because there is a problem in the current TCP connection.

The number of messages globally exchanged can be sizable if `NODE_TIMEOUT` is set to a small figure and the number of nodes (N) is very large, since every node will try to ping every other node for which they don't have fresh information every half the `NODE_TIMEOUT` time. For example, in a 100 node cluster with a node timeout set to 60 seconds, every node will try to send 99 pings every 30 seconds, with a total amount of pings of 3.3 per second. Multiplied by 100 nodes, this is 330 pings per second in the total cluster.
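The worst-case estimate above is simple arithmetic and can be checked directly; a small sketch:

```python
def heartbeat_rate(n_nodes: int, node_timeout_s: float):
    """Worst case: each node pings the other N-1 nodes once per NODE_TIMEOUT/2."""
    per_node = (n_nodes - 1) / (node_timeout_s / 2.0)   # pings/sec per node
    return per_node, per_node * n_nodes                 # and cluster-wide total

# The example from the text: 100 nodes, NODE_TIMEOUT = 60 s.
per_node, cluster_total = heartbeat_rate(100, 60.0)
```

This reproduces the figures in the text: 99 pings every 30 seconds is 3.3 pings per second per node, 330 per second cluster-wide.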
There are ways to lower the number of messages; however, there have been no reported issues with the bandwidth currently used by Redis Cluster failure detection, so for now the obvious and direct design is used. Note that even in the above example, the 330 packets per second exchanged are evenly divided among 100 different nodes, so the traffic each node receives is acceptable.

### Heartbeat packet content

Ping and pong packets contain a header that is common to all types of packets (for instance packets to request a failover vote), and a special gossip section that is specific to ping and pong packets.

The common header has the following information:

* Node ID, a 160 bit pseudorandom string that is assigned the first time a node is created and remains the same for all the life of a Redis Cluster node.
* The `currentEpoch` and `configEpoch` fields of the sending node, that are used to mount the distributed algorithms used by Redis Cluster (this is explained in detail in the next sections). If the node is a replica, the `configEpoch` is the last known `configEpoch` of its master.
* The node flags, indicating if the node is a replica, a master, and other single-bit node information.
* A bitmap of the hash slots served by the sending node, or if the node is a replica, a bitmap of the slots served by its master.
* The sender TCP base port, that is, the port used by Redis to accept client commands.
* The cluster port, that is, the port used by Redis for node-to-node communication.
* The state of the cluster from the point of view of the sender (down or ok).
* The master node ID of the sending node, if it is a replica.

Ping and pong packets also contain a gossip section. This section offers to the receiver a view of what the sender node thinks about other nodes in the cluster.
The gossip section only contains information about a few random nodes among the set of nodes known to the sender. The number of nodes mentioned in a gossip section is proportional to the cluster size.

For every node added in the gossip section the following fields are reported:

* Node ID.
* IP and port of the node.
* Node flags.

Gossip sections allow receiving nodes to get information about the state of other nodes from the point of view of the sender. This is useful both for failure detection and to discover other nodes in the cluster.

### Failure detection

Redis Cluster failure detection is used to recognize when a master or replica node is no longer reachable by the majority of nodes, and then respond by promoting a replica to the role of master. When replica promotion is not possible, the cluster is put in an error state to stop receiving queries from clients.

As already mentioned, every node takes a list of flags associated with other known nodes. There are two flags that are used for failure detection, called `PFAIL` and `FAIL`. `PFAIL` means *Possible failure*, and is a non-acknowledged failure type. `FAIL` means that a node is failing and that this condition was confirmed by a majority of masters within a fixed amount of time.
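As the next paragraphs detail, a node flags a peer `PFAIL` when an *active ping* to it (a ping sent but not yet answered) has been pending longer than `NODE_TIMEOUT`. A toy sketch of that check, with illustrative field names and timeout value:

```python
NODE_TIMEOUT = 15.0   # seconds; illustrative value, configurable in Redis

def is_pfail(ping_sent_at, pong_received_at, now):
    """True when the last ping is still unanswered after NODE_TIMEOUT."""
    if pong_received_at is not None and pong_received_at >= ping_sent_at:
        return False                      # no active ping: it was answered
    return now - ping_sent_at > NODE_TIMEOUT
```

Any node, master or replica, may apply this check to any other node; the result is only the local, non-acknowledged `PFAIL` opinion.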
**PFAIL flag:** A node flags another node with the `PFAIL` flag when the node is not reachable for more than `NODE_TIMEOUT` time. Both master and replica nodes can flag another node as `PFAIL`, regardless of its type.

The concept of non-reachability for a Redis Cluster node is that we have an **active ping** (a ping that we sent for which we have yet to get a reply) pending for longer than `NODE_TIMEOUT`. For this mechanism to work, the `NODE_TIMEOUT` must be large compared to the network round trip time. In order to add reliability during normal operations, nodes will try to reconnect with other nodes in the cluster as soon as half of the `NODE_TIMEOUT` has elapsed without a reply to a ping. This mechanism ensures that connections are kept alive, so broken connections usually won't result in false failure reports between nodes.

**FAIL flag:** The `PFAIL` flag alone is just local information every node has about other nodes, but it is not sufficient to trigger a replica promotion. For a node to be considered down, the `PFAIL` condition needs to be escalated to a `FAIL` condition.

As outlined in the node heartbeats section of this document, every node sends gossip messages to every other node, including the state of a few random known nodes. Every node eventually receives a set of node flags for every other node. This way every node has a mechanism to signal other nodes about failure conditions they have detected.

A `PFAIL` condition is escalated to a `FAIL` condition when the following set of conditions are met:

* Some node, that we'll call A, has another node B flagged as `PFAIL`.
* Node A collected, via gossip sections, information about the state of B from the point of view of the majority of masters in the cluster.
* The majority of masters signaled the `PFAIL` or `FAIL` condition within `NODE_TIMEOUT * FAIL_REPORT_VALIDITY_MULT` time. (The validity factor is set to 2 in the current implementation, so this is just two times the `NODE_TIMEOUT` time.)

If all the above conditions are true, Node A will:

* Mark the node as `FAIL`.
* Send a `FAIL` message (as opposed to a `FAIL` condition within a heartbeat message) to all the reachable nodes.

The `FAIL` message will force every receiving node to mark the node in `FAIL` state, whether or not it already flagged the node in `PFAIL` state.

Note that *the FAIL flag is mostly one way*. That is, a node can go from `PFAIL` to `FAIL`, but a `FAIL` flag can only be cleared in the following situations:

* The node is already reachable and is a replica. In this case the `FAIL` flag can be cleared, as replicas are not failed over.
* The node is already reachable and is a master not serving any slot. In this case the `FAIL` flag can be cleared, as masters without slots do not really participate in the cluster and are waiting to be configured in order to join the cluster.
* The node is already reachable and is a master, but a long time (N times the `NODE_TIMEOUT`) has elapsed without any detectable replica promotion. It's better for it to rejoin the cluster and continue in this case.
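The escalation conditions above boil down to a majority count over failure reports that are still inside the validity window. An illustrative sketch; the data shapes (a dict of report timestamps per master) are assumptions of this sketch, not Redis internals:

```python
FAIL_REPORT_VALIDITY_MULT = 2   # validity factor in the current implementation

def should_escalate_to_fail(reports, n_masters, node_timeout, now):
    """reports: {master_id: timestamp of its PFAIL/FAIL report about node B}."""
    window = node_timeout * FAIL_REPORT_VALIDITY_MULT
    fresh = [m for m, t in reports.items() if now - t <= window]
    return len(fresh) > n_masters // 2    # strict majority of masters

# m3's report is older than 2 * NODE_TIMEOUT, so it is discarded.
reports = {"m1": 100.0, "m2": 105.0, "m3": 40.0}
escalate = should_escalate_to_fail(reports, n_masters=3,
                                   node_timeout=15.0, now=110.0)
```

Discarding stale reports is what makes this a windowed (weak) agreement rather than a snapshot of simultaneous opinions.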
It is useful to note that while the `PFAIL` -> `FAIL` transition uses a form of agreement, the agreement used is weak:

1. Nodes collect views of other nodes over some time period, so even if the majority of master nodes need to "agree", actually this is just state that we collected from different nodes at different times, and we are not sure, nor do we require, that at a given moment the majority of masters agreed. However we discard failure reports which are old, so the failure was signaled by the majority of masters within a window of time.
2. While every node detecting the `FAIL` condition will force that condition on other nodes in the cluster using the `FAIL` message, there is no way to ensure the message will reach all the nodes. For instance a node may detect the `FAIL` condition and, because of a partition, will not be able to reach any other node.

However the Redis Cluster failure detection has a liveness requirement: eventually all the nodes should agree about the state of a given node. There are two cases that can originate from split brain conditions. Either some minority of nodes believe the node is in `FAIL` state, or a minority of nodes believe the node is not in `FAIL` state. In both cases eventually the cluster will have a single view of the state of a given node:

**Case 1**: If a majority of masters have flagged a node as `FAIL`, because of failure detection and the *chain effect* it generates, every other node will eventually flag the master as `FAIL`, since in the specified window of time enough failures will be reported.
**Case 2**: When only a minority of masters have flagged a node as `FAIL`, the replica promotion will not happen (as it uses a more formal algorithm that makes sure everybody knows about the promotion eventually) and every node will clear the `FAIL` state as per the `FAIL` state clearing rules above (i.e. no promotion after N times the `NODE_TIMEOUT` has elapsed).

**The `FAIL` flag is only used as a trigger to run the safe part of the algorithm** for the replica promotion. In theory a replica may act independently and start a replica promotion when its master is not reachable, and wait for the masters to refuse to provide the acknowledgment if the master is actually reachable by the majority. However the added complexity of the `PFAIL -> FAIL` state, the weak agreement, and the `FAIL` message forcing the propagation of the state in the shortest amount of time in the reachable part of the cluster, have practical advantages. Because of these mechanisms, usually all the nodes will stop accepting writes at about the same time if the cluster is in an error state. This is a desirable feature from the point of view of applications using Redis Cluster. Also erroneous election attempts initiated by replicas that can't reach their master due to local problems (while the master is otherwise reachable by the majority of other master nodes) are avoided.

## Configuration handling, propagation, and failovers

### Cluster current epoch

Redis Cluster uses a concept similar to the Raft algorithm "term". In Redis Cluster the term is called epoch instead, and it is used in order to give incremental versioning to events. When multiple nodes provide conflicting information, it becomes possible for another node to understand which state is the most up to date.
"term". In Redis Cluster the term is called epoch instead, and it is used in order to give incremental versioning to events. When multiple nodes provide conflicting information, it becomes possible for another node to understand which state is the most up to date. The `currentEpoch` is a 64 bit unsigned number. At node creation every Redis Cluster node, both replicas and master nodes, set the `currentEpoch` to 0. Every time a packet is received from another node, if the epoch of the sender (part of the cluster bus messages header) is greater than the local node epoch, the `currentEpoch` is updated to the sender epoch. Because of these semantics, eventually all the nodes will agree to the greatest `currentEpoch` in the cluster. This information is used when the state of the cluster is changed and a node seeks agreement in order to perform some action. Currently this happens only during replica promotion, as described in the next section. Basically the epoch is a logical clock for the cluster and dictates that given information wins over one with a smaller epoch. ### Configuration epoch Every master always advertises its `configEpoch` in ping and pong packets along with a bitmap advertising the set of slots it serves. The `configEpoch` is set to zero in masters when a new node is created. A new `configEpoch` is created during replica election. replicas trying to replace failing masters increment their epoch and try to get authorization from a majority of masters. When a replica is authorized, a new unique `configEpoch` is created and the replica turns into a master using the new `configEpoch`. As explained in the next sections the `configEpoch` helps to resolve conflicts when different nodes claim divergent configurations (a condition that may happen because of network partitions and node failures). 
Replica nodes also advertise the `configEpoch` field in ping and pong packets, but in the case of replicas the field represents the `configEpoch` of its master as of the last time they exchanged packets. This allows other instances to detect when a replica has an old configuration that needs to be updated (master nodes will not grant votes to replicas with an old configuration).

Every time the `configEpoch` changes for some known node, it is permanently stored in the nodes.conf file by all the nodes that receive this information. The same also happens for the `currentEpoch` value. These two variables are guaranteed to be saved and `fsync-ed` to disk when updated, before a node continues its operations.

The `configEpoch` values generated using a simple algorithm during failovers are guaranteed to be new, incremental, and unique.

### Replica election and promotion

Replica election and promotion is handled by replica nodes, with the help of master nodes that vote for the replica to promote. A replica election happens when a master is in `FAIL` state from the point of view of at least one of its replicas that has the prerequisites in order to become a master.

In order for a replica to promote itself to master, it needs to start an election and win it. All the replicas for a given master can start an election if the master is in `FAIL` state; however, only one replica will win the election and promote itself to master.

A replica starts an election when the following conditions are met:

* The replica's master is in `FAIL` state.
* The master was serving a non-zero number of slots.
* The replica replication link was disconnected from the master for no longer than a given amount of time, in order to ensure the promoted replica's data is reasonably fresh. This time is user configurable.

In order to be elected, the first step for a replica is to increment its `currentEpoch` counter, and request votes from master instances.

Votes are requested by the replica by broadcasting a `FAILOVER_AUTH_REQUEST` packet to every master node of the cluster. Then it waits for a maximum time of two times the `NODE_TIMEOUT` for replies to arrive (but always for at least 2 seconds).

Once a master has voted for a given replica, replying positively with a `FAILOVER_AUTH_ACK`, it can no longer vote for another replica of the same master for a period of `NODE_TIMEOUT * 2`. In this period it will not be able to reply to other authorization requests for the same master. This is not needed to guarantee safety, but is useful for preventing multiple replicas from getting elected (even if with a different `configEpoch`) at around the same time, which is usually not wanted.

A replica discards any `AUTH_ACK` replies with an epoch that is less than the `currentEpoch` at the time the vote request was sent. This ensures it doesn't count votes intended for a previous election.

Once the replica receives ACKs from the majority of masters, it wins the election. Otherwise, if the majority is not reached within the period of two times `NODE_TIMEOUT` (but always at least 2 seconds), the election is aborted and a new one will be tried again after `NODE_TIMEOUT * 4` (and always at least 4 seconds).

### Replica rank

As soon as a master is in `FAIL` state, a replica waits a short period of time before trying to get elected.
That delay is computed as follows:

    DELAY = 500 milliseconds + random delay between 0 and 500 milliseconds +
            REPLICA_RANK * 1000 milliseconds.

The fixed delay ensures that we wait for the `FAIL` state to propagate across the cluster; otherwise the replica may try to get elected while the masters are still unaware of the `FAIL` state, refusing to grant their vote.

The random delay is used to desynchronize replicas so they're unlikely to start an election at the same time.

The `REPLICA_RANK` is the rank of this replica with regard to the amount of replication data it has processed from the master. Replicas exchange messages when the master is failing in order to establish a (best effort) rank: the replica with the most updated replication offset is at rank 0, the second most updated at rank 1, and so forth. In this way the most updated replicas try to get elected before others. Rank order is not strictly enforced; if a replica of higher rank fails to be elected, the others will try shortly.

Once a replica wins the election, it obtains a new unique and incremental `configEpoch` which is higher than that of any other existing master. It starts advertising itself as master in ping and pong packets, providing the set of served slots with a `configEpoch` that will win over the past ones.
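The delay formula above is easy to sketch; the `rng` parameter exists only to make the example deterministic and is not part of Redis:

```python
import random

def election_delay_ms(replica_rank: int, rng=random.random) -> float:
    """DELAY = 500 ms fixed + up to 500 ms random + 1000 ms per rank unit."""
    return 500 + rng() * 500 + replica_rank * 1000

# With the random part pinned, rank 0 fires a full 2 seconds before rank 2.
d0 = election_delay_ms(0, rng=lambda: 0.5)
d2 = election_delay_ms(2, rng=lambda: 0.5)
```

The 1000 ms per rank unit dominates the 0-500 ms jitter, so a better-ranked replica (a more up-to-date one) almost always starts, and usually wins, its election first.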
In order to speed up the reconfiguration of other nodes, a pong packet is broadcast to all the nodes of the cluster. Currently unreachable nodes will eventually be reconfigured when they receive a ping or pong packet from another node, or will receive an `UPDATE` packet from another node if the information they publish via heartbeat packets is detected to be out of date.

The other nodes will detect that there is a new master serving the same slots served by the old master, but with a greater `configEpoch`, and will upgrade their configuration. Replicas of the old master (or the failed over master, if it rejoins the cluster) will not just upgrade the configuration, but will also reconfigure to replicate from the new master. How nodes rejoining the cluster are configured is explained in the next sections.

### Masters reply to replica vote request

In the previous section, we discussed how replicas try to get elected. This section explains what happens from the point of view of a master that is requested to vote for a given replica.

Masters receive requests for votes in the form of `FAILOVER_AUTH_REQUEST` requests from replicas.

For a vote to be granted, the following conditions need to be met:

1. A master only votes a single time for a given epoch, and refuses to vote for older epochs: every master has a lastVoteEpoch field, and will refuse to vote again as long as the `currentEpoch` in the auth request packet is not greater than the lastVoteEpoch. When a master replies positively to a vote request, the lastVoteEpoch is updated accordingly, and safely stored on disk.
2. A master votes for a replica only if the replica's master is flagged as `FAIL`.
3. Auth requests with a `currentEpoch` that is less than the master `currentEpoch` are ignored. Because of this, the master reply will always have the same `currentEpoch` as the auth request. If the same replica asks again to be voted, incrementing the `currentEpoch`, it is guaranteed that an old delayed reply from the master can not be accepted for the new vote.
Example of the issue caused by not using rule number 3:

* Master `currentEpoch` is 5, lastVoteEpoch is 1 (this may happen after a few failed elections).
* Replica `currentEpoch` is 3.
* Replica tries to be elected with epoch 4 (3+1); the master replies with an ok with `currentEpoch` 5, however the reply is delayed.
* Replica will try to be elected again, at a later time, with epoch 5 (4+1); the delayed reply reaches the replica with `currentEpoch` 5, and is accepted as valid.

4. Masters don't vote for a replica of the same master before `NODE_TIMEOUT * 2` has elapsed, if a replica of that master was already voted for. This is not strictly required, as it is not possible for two replicas to win the election in the same epoch. However, in practical terms, it ensures that when a replica is elected it has plenty of time to inform the other replicas and avoid the possibility that another replica will win a new election, performing an unnecessary second failover.
5. Masters make no effort to select the best replica in any way. If the replica's master is in `FAIL` state and the master did not vote in the current term, a positive vote is granted. The best replica is the most likely to start an election and win it before the other replicas, since it will usually be able to start the voting process earlier because of its *higher rank*, as explained in the previous section.
6. When a master refuses to vote for a given replica, there is no negative response; the request is simply ignored.
7. Masters don't vote for replicas sending a `configEpoch` that is less than any `configEpoch` in the master table for the slots claimed by the replica. Remember that the replica sends the `configEpoch` of its master, and the bitmap of the slots served by its master. This means that the replica requesting the vote must have a configuration, for the slots it wants to fail over, that is newer than or equal to the one of the master granting the vote.

### Practical example of configuration epoch usefulness during partitions

This section illustrates how the epoch concept is used to make the replica promotion process more resistant to partitions.

* A master is no longer reachable indefinitely. The master has three replicas A, B, C.
* Replica A wins the election and is promoted to master.
* A network partition makes A not available for the majority of the cluster.
* Replica B wins the election and is promoted as master.
* A partition makes B not available for the majority of the cluster.
* The previous partition is fixed, and A is available again.

At this point B is down and A is available again with a role of master (actually `UPDATE` messages would reconfigure it promptly, but here we assume all `UPDATE` messages were lost). At the same time, replica C will try to get elected in order to fail over B. This is what happens:

1. C will try to get elected and will succeed, since for the majority of masters its master is actually down. It will obtain a new incremental `configEpoch`.
2. A will not be able to claim to be the master for its hash slots, because the other nodes already have the same hash slots associated with a higher configuration epoch (the one of B) compared to the one published by A.
3. So, all the nodes will upgrade their table to assign the hash slots to C, and the cluster will continue its operations.

As you'll see in the next sections, a stale node rejoining a cluster will usually get notified as soon as possible about the configuration change, because as soon as it pings any other node, the receiver will detect it has stale information and will send an `UPDATE` message.

### Hash slots configuration propagation

An important part of Redis Cluster is the mechanism used to propagate the information about which cluster node is serving a given set of hash slots. This is vital to both the startup of a fresh cluster and the ability to upgrade the configuration after a replica was promoted to serve the slots of its failing master.

The same mechanism allows nodes partitioned away for an indefinite amount of time to rejoin the cluster in a sensible way.

There are two ways hash slot configurations are propagated:

1. Heartbeat messages. The sender of a ping or pong packet always adds information about the set of hash slots it (or its master, if it is a replica) serves.
The receiver of a heartbeat or `UPDATE` message uses certain simple rules in order to update its table mapping hash slots to nodes. When a new Redis Cluster node is created, its local hash slot table is simply initialized to `NULL` entries, so that each hash slot is not bound or linked to any node. This looks similar to the following:

```
0 -> NULL
1 -> NULL
2 -> NULL
...
16383 -> NULL
```

The first rule followed by a node in order to update its hash slot table is the following:

**Rule 1**: If a hash slot is unassigned (set to `NULL`), and a known node claims it, I'll modify my hash slot table and associate the claimed hash slots to it.

So if we receive a heartbeat from node A claiming to serve hash slots 1 and 2 with a configuration epoch value of 3, the table will be modified to:

```
0 -> NULL
1 -> A [3]
2 -> A [3]
...
16383 -> NULL
```

When a new cluster is created, a system administrator needs to manually assign (using the `CLUSTER ADDSLOTS` command, via the redis-cli command line tool, or by any other means) the slots served by each master node only to the node itself, and the information will rapidly propagate across the cluster.

However this rule is not enough. We know that the hash slot mapping can change during two events:

1. A replica replaces its master during a failover.
2. A slot is resharded from a node to a different one.

For now let's focus on failovers. When a replica fails over its master, it obtains a configuration epoch which is guaranteed to be greater than the one of its master (and more generally greater than any other configuration epoch generated previously).
For example node B, which is a replica of A, may fail over A with a configuration epoch of 4. It will start to send heartbeat packets (the first time mass-broadcasting cluster-wide), and because of the following second rule, receivers will update their hash slot tables:

**Rule 2**: If a hash slot is already assigned, and a known node is advertising it using a `configEpoch` that is greater than the `configEpoch` of the master currently associated with the slot, I'll rebind the hash slot to the new node.

So after receiving messages from B that claim to serve hash slots 1 and 2 with a configuration epoch of 4, the receivers will update their tables in the following way:

```
0 -> NULL
1 -> B [4]
2 -> B [4]
...
16383 -> NULL
```

Liveness property: because of the second rule, eventually all nodes in the cluster will agree that the owner of a slot is the one with the greatest `configEpoch` among the nodes advertising it.

This mechanism in Redis Cluster is called **last failover wins**.

The same happens during resharding: when a node importing a hash slot completes the import operation, its configuration epoch is incremented to make sure the change will be propagated throughout the cluster.
### UPDATE messages, a closer look

With the previous section in mind, it is easier to see how update messages work. Node A may rejoin the cluster after some time. It will send heartbeat packets where it claims it serves hash slots 1 and 2 with a configuration epoch of 3. All the receivers with updated information will instead see that the same hash slots are associated with node B, which has a higher configuration epoch. Because of this they'll send an `UPDATE` message to A with the new configuration for the slots. A will update its configuration because of **rule 2** above.

### How nodes rejoin the cluster

The same basic mechanism is used when a node rejoins a cluster. Continuing with the example above, node A will be notified that hash slots 1 and 2 are now served by B. Assuming that these two were the only hash slots served by A, the count of hash slots served by A will drop to 0! So A will **reconfigure to be a replica of the new master**.

The actual rule followed is a bit more complex than this. In general it may happen that A rejoins after a long time, and in the meantime hash slots originally served by A have come to be served by multiple nodes; for example hash slot 1 may be served by B, and hash slot 2 by C.

So the actual *Redis Cluster node role switch rule* is: **a master node will change its configuration to replicate (be a replica of) the node that stole its last hash slot**.

During reconfiguration, eventually the number of served hash slots will drop to zero, and the node will reconfigure accordingly. Note that in the base case this just means that the old master will be a replica of the replica that replaced it after a failover. However in the general form the rule covers all possible cases.

Replicas do exactly the same: they reconfigure to replicate the node that stole the last hash slot of their former master.
### Replica migration

Redis Cluster implements a concept called *replica migration* in order to improve the availability of the system. The idea is that in a cluster with a master-replica setup, if the map between replicas and masters is fixed, availability is limited over time if multiple independent failures of single nodes happen.

For example in a cluster where every master has a single replica, the cluster can continue operations as long as either the master or the replica fails, but not if both fail at the same time. However there is a class of failures, namely independent failures of single nodes caused by hardware or software issues, that can accumulate over time. For example:

* Master A has a single replica A1.
* Master A fails. A1 is promoted as the new master.
* Three hours later A1 fails in an independent manner (unrelated to the failure of A). No other replica is available for promotion since node A is still down. The cluster cannot continue normal operations.

If the map between masters and replicas is fixed, the only way to make the cluster more resistant to the above scenario is to add replicas to every master; however this is costly, as it requires more instances of Redis to be executed, more memory, and so forth.

An alternative is to create an asymmetry in the cluster, and let the cluster layout automatically change over time.
For example, the cluster may have three masters A, B, C. A and B each have a single replica, A1 and B1. However master C is different: it has two replicas, C1 and C2.

Replica migration is the process of automatic reconfiguration of a replica in order to *migrate* to a master that no longer has coverage (no working replicas). With replica migration the scenario mentioned above turns into the following:

* Master A fails. A1 is promoted.
* C2 migrates as a replica of A1, which is otherwise not backed by any replica.
* Three hours later A1 fails as well.
* C2 is promoted as the new master to replace A1.
* The cluster can continue its operations.

### Replica migration algorithm

The migration algorithm does not use any form of agreement, since the replica layout in a Redis Cluster is not part of the cluster configuration that needs to be consistent and/or versioned with config epochs. Instead it uses an algorithm to avoid mass-migration of replicas when a master is not backed. The algorithm guarantees that eventually (once the cluster configuration is stable) every master will be backed by at least one replica.

This is how the algorithm works. To start, we need to define what a *good replica* is in this context: a good replica is a replica that is not in `FAIL` state, from the point of view of a given node.

The execution of the algorithm is triggered in every replica that detects that there is at least a single master without good replicas. However among all the replicas detecting this condition, only a subset should act. This subset is usually a single replica, unless different replicas have at a given moment a slightly different view of the failure state of other nodes.
The *acting replica* is the replica, among the masters with the maximum number of attached replicas, that is not in `FAIL` state and has the smallest node ID. So for example, if there are 10 masters with 1 replica each, and 2 masters with 5 replicas each, the replica that will try to migrate is, among the 2 masters having 5 replicas, the one with the lowest node ID.

Given that no agreement is used, it is possible that when the cluster configuration is not stable, a race condition occurs where multiple replicas believe themselves to be the non-failing replica with the lowest node ID (this is unlikely to happen in practice). If this happens, the result is multiple replicas migrating to the same master, which is harmless. If the race happens in a way that leaves the ceding master without replicas, as soon as the cluster is stable again the algorithm will be re-executed and will migrate a replica back to the original master.

Eventually every master will be backed by at least one replica. However, the normal behavior is that a single replica migrates from a master with multiple replicas to an orphaned master.

The algorithm is controlled by a user-configurable parameter called `cluster-migration-barrier`: the number of good replicas a master must be left with before a replica can migrate away. For example, if this parameter is set to 2, a replica can try to migrate only if its master remains with two working replicas.
### configEpoch conflicts resolution algorithm

When new `configEpoch` values are created via replica promotion during failovers, they are guaranteed to be unique.

However there are two distinct events where new `configEpoch` values are created in an unsafe way, just incrementing the local `currentEpoch` of the local node and hoping there are no conflicts at the same time. Both events are system-administrator triggered:

1. The `CLUSTER FAILOVER` command with the `TAKEOVER` option is able to manually promote a replica node into a master *without the majority of masters being available*. This is useful, for example, in multi data center setups.
2. Migration of slots for cluster rebalancing also generates new configuration epochs inside the local node without agreement, for performance reasons.

Specifically, during manual resharding, when a hash slot is migrated from a node A to a node B, the resharding program will force B to upgrade its configuration to an epoch which is the greatest found in the cluster, plus 1 (unless the node is already the one with the greatest configuration epoch), without requiring agreement from other nodes. Usually a real world resharding involves moving several hundred hash slots (especially in small clusters). Requiring an agreement to generate new configuration epochs during resharding, for each hash slot moved, is inefficient. Moreover it requires an fsync on each of the cluster nodes every time, in order to store the new configuration. Because of the way it is performed instead, we only need a new config epoch when the first hash slot is moved, making it much more efficient in production environments.
However because of the two cases above, it is possible (though unlikely) to end up with multiple nodes having the same configuration epoch. A resharding operation performed by the system administrator, and a failover happening at the same time (plus a lot of bad luck) could cause `currentEpoch` collisions if they are not propagated fast enough.

Moreover, software bugs and filesystem corruptions can also contribute to multiple nodes having the same configuration epoch.

When masters serving different hash slots have the same `configEpoch`, there are no issues. It is more important that replicas failing over a master have unique configuration epochs.

That said, manual interventions or resharding may change the cluster configuration in different ways. The Redis Cluster main liveness property requires that slot configurations always converge, so under every circumstance we really want all the master nodes to have a different `configEpoch`.

In order to enforce this, **a conflict resolution algorithm** is used in the event that two nodes end up with the same `configEpoch`.

* IF a master node detects another master node is advertising itself with the same `configEpoch`.
* AND IF the node has a lexicographically smaller Node ID compared to the other node claiming the same `configEpoch`.
* THEN it increments its `currentEpoch` by 1, and uses it as the new `configEpoch`.

If there is any set of nodes with the same `configEpoch`, all the nodes but the one with the greatest Node ID will move forward, guaranteeing that, eventually, every node will pick a unique `configEpoch` regardless of what happened.

This mechanism also guarantees that after a fresh cluster is created, all nodes start with a different `configEpoch` (even if this is not actually used) since `redis-cli` makes sure to use `CLUSTER SET-CONFIG-EPOCH` at startup.
However, if for some reason a node is left misconfigured, it will update its configuration to a different configuration epoch automatically.

### Node resets

Nodes can be software reset (without restarting them) in order to be reused in a different role or in a different cluster. This is useful in normal operations, in testing, and in cloud environments where a given node can be reprovisioned to join a different set of nodes to enlarge or create a new cluster.

In Redis Cluster, nodes are reset using the `CLUSTER RESET` command. The command is provided in two variants:

* `CLUSTER RESET SOFT`
* `CLUSTER RESET HARD`

The command must be sent directly to the node to reset. If no reset type is provided, a soft reset is performed.

The following is a list of operations performed by a reset:

1. Soft and hard reset: If the node is a replica, it is turned into a master, and its dataset is discarded. If the node is a master and contains keys, the reset operation is aborted.
2. Soft and hard reset: All the slots are released, and the manual failover state is reset.
3. Soft and hard reset: All the other nodes in the nodes table are removed, so the node no longer knows any other node.
4. Hard reset only: `currentEpoch`, `configEpoch`, and `lastVoteEpoch` are set to 0.
5. Hard reset only: the Node ID is changed to a new random ID.

Master nodes with non-empty data sets can't be reset (since normally you want to reshard data to the other nodes). However, under special conditions when this is appropriate (e.g. when a cluster is totally destroyed with the intent of creating a new one), `FLUSHALL` must be executed before proceeding with the reset.
### Removing nodes from a cluster

It is possible to practically remove a node from an existing cluster by resharding all its data to other nodes (if it is a master node) and shutting it down. However, the other nodes will still remember its node ID and address, and will attempt to connect with it.

For this reason, when a node is removed we want to also remove its entry from all the other nodes' tables. This is accomplished by using the `CLUSTER FORGET` command. The command does two things:

1. It removes the node with the specified node ID from the nodes table.
2. It sets a 60 second ban which prevents a node with the same node ID from being re-added.

The second operation is needed because Redis Cluster uses gossip in order to auto-discover nodes, so removing node X from node A could result in node B gossiping about node X to A again. Because of the 60 second ban, the Redis Cluster administration tools have 60 seconds in order to remove the node from all the nodes, preventing its re-addition due to auto discovery.

Further information is available in the `CLUSTER FORGET` documentation.

## Publish/Subscribe

In a Redis Cluster, clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. The clients can send SUBSCRIBE to any node and can also send PUBLISH to any node.
It will simply broadcast each published message to all other nodes.

Redis 7.0 and later support sharded pub/sub, in which shard channels are assigned to slots by the same algorithm used to assign keys to slots. A shard message must be sent to a node that owns the slot the shard channel is hashed to. The cluster makes sure the published shard messages are forwarded to all nodes in the shard, so clients can subscribe to a shard channel either by connecting to the master responsible for the slot, or to any of its replicas.

## Appendix

### Appendix A: CRC16 reference implementation in ANSI C

```c
/*
 * Copyright 2001-2010 Georges Menie (www.menie.org)
 * Copyright 2010 Salvatore Sanfilippo (adapted to Redis coding style)
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in the
 *       documentation and/or other materials provided with the distribution.
 *     * Neither the name of the University of California, Berkeley nor the
 *       names of its contributors may be used to endorse or promote products
 *       derived from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY
 * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
 * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

/* CRC16 implementation according to CCITT standards.
 *
 * Note by @antirez: this is actually the XMODEM CRC 16 algorithm, using the
 * following parameters:
 *
 * Name                       : "XMODEM", also known as "ZMODEM", "CRC-16/ACORN"
 * Width                      : 16 bit
 * Poly                       : 1021 (That is actually x^16 + x^12 + x^5 + 1)
 * Initialization             : 0000
 * Reflect Input byte         : False
 * Reflect Output CRC         : False
 * Xor constant to output CRC : 0000
 * Output for "123456789"     : 31C3
 */

#include <stdint.h>

static const uint16_t crc16tab[256]= {
    0x0000,0x1021,0x2042,0x3063,0x4084,0x50a5,0x60c6,0x70e7,
    0x8108,0x9129,0xa14a,0xb16b,0xc18c,0xd1ad,0xe1ce,0xf1ef,
    0x1231,0x0210,0x3273,0x2252,0x52b5,0x4294,0x72f7,0x62d6,
    0x9339,0x8318,0xb37b,0xa35a,0xd3bd,0xc39c,0xf3ff,0xe3de,
    0x2462,0x3443,0x0420,0x1401,0x64e6,0x74c7,0x44a4,0x5485,
    0xa56a,0xb54b,0x8528,0x9509,0xe5ee,0xf5cf,0xc5ac,0xd58d,
    0x3653,0x2672,0x1611,0x0630,0x76d7,0x66f6,0x5695,0x46b4,
    0xb75b,0xa77a,0x9719,0x8738,0xf7df,0xe7fe,0xd79d,0xc7bc,
    0x48c4,0x58e5,0x6886,0x78a7,0x0840,0x1861,0x2802,0x3823,
    0xc9cc,0xd9ed,0xe98e,0xf9af,0x8948,0x9969,0xa90a,0xb92b,
    0x5af5,0x4ad4,0x7ab7,0x6a96,0x1a71,0x0a50,0x3a33,0x2a12,
    0xdbfd,0xcbdc,0xfbbf,0xeb9e,0x9b79,0x8b58,0xbb3b,0xab1a,
    0x6ca6,0x7c87,0x4ce4,0x5cc5,0x2c22,0x3c03,0x0c60,0x1c41,
    0xedae,0xfd8f,0xcdec,0xddcd,0xad2a,0xbd0b,0x8d68,0x9d49,
    0x7e97,0x6eb6,0x5ed5,0x4ef4,0x3e13,0x2e32,0x1e51,0x0e70,
    0xff9f,0xefbe,0xdfdd,0xcffc,0xbf1b,0xaf3a,0x9f59,0x8f78,
    0x9188,0x81a9,0xb1ca,0xa1eb,0xd10c,0xc12d,0xf14e,0xe16f,
    0x1080,0x00a1,0x30c2,0x20e3,0x5004,0x4025,0x7046,0x6067,
    0x83b9,0x9398,0xa3fb,0xb3da,0xc33d,0xd31c,0xe37f,0xf35e,
    0x02b1,0x1290,0x22f3,0x32d2,0x4235,0x5214,0x6277,0x7256,
    0xb5ea,0xa5cb,0x95a8,0x8589,0xf56e,0xe54f,0xd52c,0xc50d,
    0x34e2,0x24c3,0x14a0,0x0481,0x7466,0x6447,0x5424,0x4405,
    0xa7db,0xb7fa,0x8799,0x97b8,0xe75f,0xf77e,0xc71d,0xd73c,
    0x26d3,0x36f2,0x0691,0x16b0,0x6657,0x7676,0x4615,0x5634,
    0xd94c,0xc96d,0xf90e,0xe92f,0x99c8,0x89e9,0xb98a,0xa9ab,
    0x5844,0x4865,0x7806,0x6827,0x18c0,0x08e1,0x3882,0x28a3,
    0xcb7d,0xdb5c,0xeb3f,0xfb1e,0x8bf9,0x9bd8,0xabbb,0xbb9a,
    0x4a75,0x5a54,0x6a37,0x7a16,0x0af1,0x1ad0,0x2ab3,0x3a92,
    0xfd2e,0xed0f,0xdd6c,0xcd4d,0xbdaa,0xad8b,0x9de8,0x8dc9,
    0x7c26,0x6c07,0x5c64,0x4c45,0x3ca2,0x2c83,0x1ce0,0x0cc1,
    0xef1f,0xff3e,0xcf5d,0xdf7c,0xaf9b,0xbfba,0x8fd9,0x9ff8,
    0x6e17,0x7e36,0x4e55,0x5e74,0x2e93,0x3eb2,0x0ed1,0x1ef0
};

uint16_t crc16(const char *buf, int len) {
    int counter;
    uint16_t crc = 0;
    for (counter = 0; counter < len; counter++)
        crc = (crc<<8) ^ crc16tab[((crc>>8) ^ *buf++)&0x00FF];
    return crc;
}
```
Many of the commands in Redis accept key names as input arguments. The 9th element in the reply of `COMMAND` (and `COMMAND INFO`) is an array that consists of the command's key specifications.

A _key specification_ describes a rule for extracting the names of one or more keys from the arguments of a given command.

Key specifications provide a robust and flexible mechanism, compared to the _first key_, _last key_ and _step_ scheme employed until Redis 7.0. Before the introduction of these specifications, Redis clients had no trivial programmatic means to extract key names for all commands. Cluster-aware Redis clients had to hard-code the keys' extraction logic for commands such as `EVAL` and `ZUNIONSTORE` that rely on a _numkeys_ argument, or `SORT` and its many clauses. Alternatively, `COMMAND GETKEYS` can be used to achieve a similar extraction effect, but at a higher latency.

A Redis client isn't obligated to support key specifications. It can continue using the legacy _first key_, _last key_ and _step_ scheme along with the [_movablekeys_ flag](/commands/command#flags), which remain unchanged. However, a Redis client that implements key specifications support can consolidate most of its keys' extraction logic. Even if the client encounters an unfamiliar type of key specification, it can always revert to the `COMMAND GETKEYS` command. That said, most cluster-aware clients only require a single key name to perform correct command routing, so it is possible that although a command features one unfamiliar specification, its other specification may still be usable by the client.

Key specifications are maps with the following keys:

1. **begin_search:** the starting index for keys' extraction.
2. **find_keys:** the rule for identifying the keys relative to the _begin_search_.
3. **notes:** notes about this key spec, if there are any.
4. **flags:** indicate the type of data access.

## begin_search

The _begin_search_ value of a specification informs the client of the extraction's beginning. The value is a map. There are three types of `begin_search`:

1. **index:** key name arguments begin at a constant index.
2. **keyword:** key names start after a specific keyword (token).
3. **unknown:** an unknown type of specification - see the [incomplete flag section](#incomplete) for more details.

### index

The _index_ type of `begin_search` indicates that input keys appear at a constant index. It is a map under the _spec_ key with a single key:

1. **index:** the 0-based index from which the client should start extracting key names.

### keyword

The _keyword_ type of `begin_search` means a literal token precedes key name arguments. It is a map under the _spec_ key with two keys:

1. **keyword:** the keyword (token) that marks the beginning of key name arguments.
2. **startfrom:** an index in the arguments array from which the client should begin searching. This can be a negative value, which means the search should start from the end of the arguments' array, in reverse order. For example, _-2_ means to search in reverse from the penultimate argument.

Examples of the _begin_search_ types include:

* `SET` has a `begin_search` specification of type _index_ with a value of _1_.
* `XREAD` has a `begin_search` specification of type _keyword_ with the values _"STREAMS"_ and _1_ as _keyword_ and _startfrom_, respectively.
* `MIGRATE` has a `begin_search` specification of type _keyword_ with the values of _"KEYS"_ and _-2_.

## find_keys

The `find_keys` value of a key specification tells the client how to continue the search for key names. `find_keys` has three possible types:

1. **range:** keys stop at a specific index, or relative to the last argument.
2. **keynum:** an additional argument specifies the number of input keys.
3. **unknown:** an unknown type of specification - see the [incomplete flag section](#incomplete) for more details.

### range

The _range_ type of `find_keys` is a map under the _spec_ key with three keys:

1. **lastkey:** the index, relative to `begin_search`, of the last key argument. This can be a negative value, in which case it isn't relative. For example, _-1_ indicates to keep extracting keys until the last argument, _-2_ until one before the last, and so on.
2. **keystep:** the number of arguments that should be skipped, after finding a key, to find the next one.
3. **limit:** if _lastkey_ has the value of _-1_, we use _limit_ to stop the search by a factor. _0_ and _1_ mean no limit. _2_ means half of the remaining arguments, _3_ means a third, and so on.

### keynum

The _keynum_ type of `find_keys` is a map under the _spec_ key with three keys:

* **keynumidx:** the index, relative to `begin_search`, of the argument containing the number of keys.
* **firstkey:** the index, relative to `begin_search`, of the first key. This is usually the next argument after _keynumidx_, and its value, in this case, is greater by one.
* **keystep:** the number of arguments that should be skipped, after finding a key, to find the next one.

Examples:

* The `SET` command has a _range_ of _0_, _1_ and _0_.
* The `MSET` command has a _range_ of _-1_, _2_ and _0_.
* The `XREAD` command has a _range_ of _-1_, _1_ and _2_.
* The `ZUNION` command has a `begin_search` of type _index_ with the value _1_, and `find_keys` of type _keynum_ with values of _0_, _1_ and _1_.
* The [`AI.DAGRUN`](https://oss.redislabs.com/redisai/master/commands/#aidagrun) command has a `begin_search` of type _keyword_ with values of _"LOAD"_ and _1_, and `find_keys` of type _keynum_ with values of _0_, _1_ and _1_.

**Note:** this isn't a perfect solution, as module writers can come up with anything. However, this mechanism should allow the extraction of key name arguments for the vast majority of commands.

## notes

Notes about non-obvious key specs considerations, if applicable.

## flags

A key specification can have additional flags that provide more details about the key. These flags are divided into three groups, as described below.

### Access type flags

The following flags declare the type of access the command has to a key's value or its metadata. A key's metadata includes LRU/LFU counters, type, and cardinality. These flags do not relate to the reply sent back to the client.

Every key specification has precisely one of the following flags:

* **RW:** the read-write flag. The command modifies the data stored in the value of the key or its metadata. This flag marks every operation that isn't distinctly a delete, an overwrite, or read-only.
* **RO:** the read-only flag. The command only reads the value of the key (although it doesn't necessarily return it).
* **OW:** the overwrite flag. The command overwrites the data stored in the value of the key.
* **RM:** the remove flag. The command deletes the key.

### Logical operation flags

The following flags declare the type of operations performed on the data stored as the key's value and its TTL (if any), not the metadata.
by the input arguments. The flags do not relate to modifying or returning metadata (such as a key's type, cardinality, or existence).

Every key specification may include the following flag:

* **access:** the access flag. This flag indicates that the command returns, copies, or somehow uses the user's data that's stored in the key.

In addition, the specification may include precisely one of the following:

* **update:** the update flag. The command updates the data stored in the key's value. The new value may depend on the old value. This flag marks every operation that isn't distinctly an insert or a delete.
* **insert:** the insert flag. The command only adds data to the value; existing data isn't modified or deleted.
* **delete:** the delete flag. The command explicitly deletes data from the value stored at the key.

### Miscellaneous flags

Key specifications may have the following flags:

* **not_key:** this flag indicates that the specified argument isn't a key. This argument is treated the same as a key when computing which slot a command should be assigned to for Redis Cluster. For all other purposes this argument should not be considered a key.
* **incomplete:** this flag is explained below.
* **variable_flags:** this flag is explained below.

### incomplete

Some commands feature exotic approaches when it comes to specifying their keys, which makes extraction difficult. Consider, for example, what would happen with a call to `MIGRATE` that includes the literal string _"KEYS"_ as an argument to its _AUTH_ clause. Our key specifications would miss the mark, and extraction would begin at the wrong index.
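For illustration, `MIGRATE`'s keyword-based search can be sketched as a backward scan for the literal `"KEYS"` token. This is a hedged sketch in Python; the function name and argument layout are illustrative assumptions, not part of any Redis API:

```python
def migrate_key_args(args):
    """Sketch: find MIGRATE's key name arguments by scanning the command
    line backward for the "KEYS" keyword; every argument after it is
    taken as a key name."""
    for i in range(len(args) - 1, 0, -1):
        if args[i].upper() == "KEYS":
            return args[i + 1:]
    return []  # single-key form: the key is a positional argument instead
```

Because the scan starts from the end, a key literally named `"KEYS"` causes only the key names after it to be extracted - a subset of the keys, but never a wrong name.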
Thus, we recognize that key specifications are incomplete and may fail to extract all keys. However, we assure that even incomplete specifications never yield the wrong names of keys, provided that the command is syntactically correct.

In the case of `MIGRATE`, the search begins at the end (_startfrom_ has the value of _-1_). If and when we encounter a key named _"KEYS"_, we'll only extract the subset of the key name arguments after it. That's why `MIGRATE` has the _incomplete_ flag in its key specification.

Another case of incompleteness is the `SORT` command. Here, the `begin_search` and `find_keys` are of type _unknown_. The client should revert to calling the `COMMAND GETKEYS` command to extract key names from the arguments, short of implementing it natively. The difficulty arises, for example, because the string _"STORE"_ is both a keyword (token) and a valid literal argument for `SORT`.

**Note:** the only commands with _incomplete_ key specifications are `SORT` and `MIGRATE`. We don't expect the addition of such commands in the future.

### variable_flags

In some commands, the flags for the same key name argument can depend on other arguments. For example, consider the `SET` command and its optional _GET_ argument. Without the _GET_ argument, `SET` is write-only, but it becomes a read and write command with it. When this flag is present, it means that the key specification flags cover all possible options, but the effective flags depend on other arguments.

## Examples

### `SET`'s key specifications

```
1) 1) "flags"
   2) 1) RW
      2) access
      3) update
   3) "begin_search"
   4) 1) "type"
      2) "index"
      3) "spec"
      4) 1) "index"
         2) (integer) 1
   5) "find_keys"
   6) 1) "type"
      2) "range"
      3) "spec"
      4) 1) "lastkey"
         2) (integer) 0
-0.0015084762126207352,
0.05275523290038109,
-0.06297733634710312,
0.03250013664364815,
-0.030530115589499474,
-0.063810333609581,
0.09165532886981964,
-0.08358585834503174,
0.06049327552318573,
0.022632328793406487,
0.06550104171037674,
0.034020621329545975,
0.010587557218968868,
-0.10201... | 0.121916 |
         3) "keystep"
         4) (integer) 1
         5) "limit"
         6) (integer) 0
```

### `ZUNION`'s key specifications

```
1) 1) "flags"
   2) 1) RO
      2) access
   3) "begin_search"
   4) 1) "type"
      2) "index"
      3) "spec"
      4) 1) "index"
         2) (integer) 1
   5) "find_keys"
   6) 1) "type"
      2) "keynum"
      3) "spec"
      4) 1) "keynumidx"
         2) (integer) 0
         3) "firstkey"
         4) (integer) 1
         5) "keystep"
         6) (integer) 1
```
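The two `find_keys` types can each be implemented in a few lines. Below is a hedged sketch in Python; the function names and the pre-resolved absolute `begin_search` index are assumptions made for the example, not part of the specification:

```python
def find_keys_range(args, begin_search, lastkey, keystep, limit):
    """Sketch of the 'range' find_keys logic. args is the full command
    line (e.g. ["MSET", "k1", "v1", "k2", "v2"]) and begin_search is the
    already-resolved absolute index of the first key argument."""
    if lastkey >= 0:
        last = begin_search + lastkey
    else:
        last = len(args) + lastkey  # negative: relative to the last argument
        if limit > 1:
            # A limit of N confines the search to 1/N of the remaining args.
            last = begin_search + (last - begin_search + 1) // limit - 1
    return [args[i] for i in range(begin_search, last + 1, keystep)]


def find_keys_keynum(args, begin_search, keynumidx, firstkey, keystep):
    """Sketch of the 'keynum' find_keys logic: one argument holds the
    number of keys that follow."""
    numkeys = int(args[begin_search + keynumidx])
    first = begin_search + firstkey
    return [args[first + i * keystep] for i in range(numkeys)]
```

For instance, `find_keys_range(["MSET", "k1", "v1", "k2", "v2"], 1, -1, 2, 0)` yields `["k1", "k2"]`, and `find_keys_keynum(["ZUNION", "2", "z1", "z2"], 1, 0, 1, 1)` yields `["z1", "z2"]`, matching the specifications shown above.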
To communicate with the Redis server, Redis clients use a protocol called REdis Serialization Protocol (RESP). While the protocol was designed specifically for Redis, you can use it for other client-server software projects.

RESP is a compromise among the following considerations:

* Simple to implement.
* Fast to parse.
* Human readable.

RESP can serialize different data types including integers, strings, and arrays. It also features an error-specific type. A client sends a request to the Redis server as an array of strings. The array's contents are the command and its arguments that the server should execute. The server's reply type is command-specific.

RESP is binary-safe and uses prefixed length to transfer bulk data, so it does not require processing bulk data transferred from one process to another.

RESP is the protocol you should implement in your Redis client.

{{% alert title="Note" color="info" %}}
The protocol outlined here is used only for client-server communication. [Redis Cluster](/docs/reference/cluster-spec) uses a different binary protocol for exchanging messages between nodes.
{{% /alert %}}

## RESP versions

Support for the first version of the RESP protocol was introduced in Redis 1.2. Using RESP with Redis 1.2 was optional and mainly served the purpose of working the kinks out of the protocol.

In Redis 2.0, the protocol's next version, a.k.a. RESP2, became the standard communication method for clients with the Redis server.

[RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md) is a superset of RESP2 that mainly aims to make a client author's life a little bit easier. Redis 6.0 introduced experimental opt-in support of RESP3's features (excluding streaming strings and streaming aggregates). In addition, the introduction of the `HELLO` command allows clients to handshake and upgrade the connection's protocol version (see [Client handshake](#client-handshake)).
Up to and including Redis 7, both RESP2 and RESP3 clients can invoke all core commands. However, commands may return differently typed replies for different protocol versions. Future versions of Redis may change the default protocol version, but it is unlikely that RESP2 will become entirely deprecated. It is possible, however, that new features in upcoming versions will require the use of RESP3.

## Network layer

A client connects to a Redis server by creating a TCP connection to its port (the default is 6379). While RESP is technically non-TCP specific, the protocol is used exclusively with TCP connections (or equivalent stream-oriented connections like Unix sockets) in the context of Redis.

## Request-Response model

The Redis server accepts commands composed of different arguments. Then, the server processes the command and sends the reply back to the client.

This is the simplest model possible; however, there are some exceptions:

* Redis requests can be [pipelined](#multiple-commands-and-pipelining). Pipelining enables clients to send multiple commands at once and wait for replies later.
* When a RESP2 connection subscribes to a [Pub/Sub](/docs/manual/pubsub) channel, the protocol changes semantics and becomes a _push_ protocol. The client no longer needs to send commands because the server will automatically send new messages to the client (for the channels the client is subscribed to) as soon as they are received.
* The `MONITOR` command. Invoking the `MONITOR` command switches the connection to an ad-hoc push mode. The protocol of this mode is not specified but is obvious to parse.
* [Protected mode](/docs/management/security/#protected-mode). Connections opened from a non-loopback address to a Redis server while in protected mode are denied and terminated by the server. Before terminating the connection, Redis unconditionally sends a `-DENIED` reply, regardless of whether the client writes to the socket.
* The [RESP3 Push type](#resp3-pushes). As the name suggests, a push type allows the server to send out-of-band data to the connection. The server may push data at
any time, and the data isn't necessarily related to specific commands executed by the client.

Excluding these exceptions, the Redis protocol is a simple request-response protocol.

## RESP protocol description

RESP is essentially a serialization protocol that supports several data types. In RESP, the first byte of data determines its type.

Redis generally uses RESP as a [request-response](#request-response-model) protocol in the following way:

* Clients send commands to a Redis server as an [array](#arrays) of [bulk strings](#bulk-strings). The first (and sometimes also the second) bulk string in the array is the command's name. Subsequent elements of the array are the arguments for the command.
* The server replies with a RESP type. The reply's type is determined by the command's implementation and possibly by the client's protocol version.

RESP is a binary protocol that uses control sequences encoded in standard ASCII. The `A` character, for example, is encoded with the binary byte of value 65. Similarly, the characters CR (`\r`), LF (`\n`) and SP (` `) have binary byte values of 13, 10 and 32, respectively.

The `\r\n` (CRLF) is the protocol's _terminator_, which **always** separates its parts.

The first byte in a RESP-serialized payload always identifies its type. Subsequent bytes constitute the type's contents.

We categorize every RESP data type as either _simple_, _bulk_ or _aggregate_.

Simple types are similar to scalars in programming languages that represent plain literal values. Booleans and Integers are such examples.

RESP strings are either _simple_ or _bulk_.
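The request convention described above - a command serialized as an array of bulk strings - is straightforward to produce. A minimal sketch in Python (the helper name is illustrative, not from any client library):

```python
def encode_command(*args: str) -> bytes:
    """Serialize a command as a RESP array of bulk strings: an array
    header '*<count>\\r\\n', then each argument as '$<len>\\r\\n<data>\\r\\n'."""
    out = [b"*%d\r\n" % len(args)]
    for arg in args:
        data = arg.encode("utf-8")
        out.append(b"$%d\r\n" % len(data) + data + b"\r\n")
    return b"".join(out)
```

For example, `encode_command("SET", "key", "val")` produces `b"*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$3\r\nval\r\n"`, the exact bytes a client writes to the socket.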
Simple strings never contain carriage return (`\r`) or line feed (`\n`) characters. Bulk strings can contain any binary data and may also be referred to as _binary_ or _blob_. Note that bulk strings may be further encoded and decoded, e.g. with a wide multi-byte encoding, by the client.

Aggregates, such as Arrays and Maps, can have varying numbers of sub-elements and nesting levels.

The following table summarizes the RESP data types that Redis supports:

| RESP data type | Minimal protocol version | Category | First byte |
| --- | --- | --- | --- |
| [Simple strings](#simple-strings) | RESP2 | Simple | `+` |
| [Simple Errors](#simple-errors) | RESP2 | Simple | `-` |
| [Integers](#integers) | RESP2 | Simple | `:` |
| [Bulk strings](#bulk-strings) | RESP2 | Aggregate | `$` |
| [Arrays](#arrays) | RESP2 | Aggregate | `*` |
| [Nulls](#nulls) | RESP3 | Simple | `_` |
| [Booleans](#booleans) | RESP3 | Simple | `#` |
| [Doubles](#doubles) | RESP3 | Simple | `,` |
| [Big numbers](#big-numbers) | RESP3 | Simple | `(` |
| [Bulk errors](#bulk-errors) | RESP3 | Aggregate | `!` |
| [Verbatim strings](#verbatim-strings) | RESP3 | Aggregate | `=` |
| [Maps](#maps) | RESP3 | Aggregate | `%` |
| [Sets](#sets) | RESP3 | Aggregate | `~` |
| [Pushes](#pushes) | RESP3 | Aggregate | `>` |

### Simple strings

Simple strings are encoded as a plus (`+`) character, followed by a string. The string mustn't contain a CR (`\r`) or LF (`\n`) character and is terminated by CRLF (i.e., `\r\n`).

Simple strings transmit short, non-binary strings with minimal overhead. For example, many Redis commands reply with just "OK" on success. The encoding of this Simple String is the following 5 bytes:

    +OK\r\n

When Redis replies with a simple string, a client library should return to the caller a string value composed of the first character after the `+` up to the end of the string, excluding the final CRLF bytes.
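Decoding a simple string frame is just as mechanical. A minimal sketch in Python, assuming the buffer holds exactly one complete frame (the function name is illustrative):

```python
def parse_simple_string(frame: bytes) -> str:
    """Decode one complete RESP simple string frame, e.g. b'+OK\\r\\n':
    check the '+' type byte and the CRLF terminator, then return the
    payload between them."""
    if frame[0:1] != b"+" or not frame.endswith(b"\r\n"):
        raise ValueError("not a complete RESP simple string frame")
    return frame[1:-2].decode("ascii")
```

Calling `parse_simple_string(b"+OK\r\n")` returns `"OK"`, which is what a client library hands back to the caller.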