port number.

* Copy the template configuration file you'll find in the root directory of the Redis distribution into `/etc/redis/` using the port number as the name, for instance:

  ```
  sudo cp redis.conf /etc/redis/6379.conf
  ```

* Create a directory inside `/var/redis` that will work as both data and working directory for this Redis instance:

  ```
  sudo mkdir /var/redis/6379
  ```

* Edit the configuration file, making sure to perform the following changes:
  * Set **daemonize** to yes (by default it is set to no).
  * Set the **pidfile** to `/var/run/redis_6379.pid`, modifying the port as necessary.
  * Change the **port** accordingly. In our example it is not needed, as the default port is already `6379`.
  * Set your preferred **loglevel**.
  * Set the **logfile** to `/var/log/redis_6379.log`.
  * Set the **dir** to `/var/redis/6379` (very important step!).
* Finally, add the new Redis init script to all the default runlevels using the following command:

  ```
  sudo update-rc.d redis_6379 defaults
  ```

You are done! Now you can try running your instance with:

```
sudo /etc/init.d/redis_6379 start
```

Make sure that everything is working as expected:

1. Try pinging your instance within a `redis-cli` session using the `PING` command.
2. Do a test save with `redis-cli save` and check that a dump file is correctly saved to `/var/redis/6379/dump.rdb`.
3. Check that your Redis instance is logging to the `/var/log/redis_6379.log` file.
4. If it's a new machine where you can try it without problems, make sure that after a reboot everything is still working.

{{% alert title="Note" color="warning" %}}
The above instructions don't include all of the Redis configuration parameters that you could change. For example, to use AOF persistence instead of RDB persistence, or to set up replication, and so forth.
{{% /alert %}}

You should also read the example [redis.conf](/docs/management/config-file/) file, which is heavily annotated to help guide you on making changes. Further details can also be found in the [configuration article on this site](/docs/management/config/).
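As a quick sanity check of the edits described in the steps above, a short script can parse the per-instance configuration file and flag any of the required settings that are missing or wrong. This is a sketch, not part of the official instructions; the sample text and the expected values mirror the `6379` example used throughout.

```python
# Sketch (not from the Redis docs): verify that a redis.conf copy contains
# the per-instance settings the steps above ask you to edit.

def parse_conf(text: str) -> dict:
    """Parse simple 'key value' lines, ignoring comments and blank lines."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        conf[key.lower()] = value.strip()
    return conf

def check_instance_conf(conf: dict, port: str = "6379") -> list:
    """Return a list of problems for the settings listed above; empty means OK."""
    expected = {
        "daemonize": "yes",
        "pidfile": f"/var/run/redis_{port}.pid",
        "logfile": f"/var/log/redis_{port}.log",
        "dir": f"/var/redis/{port}",
    }
    return [f"{k} should be {v!r}, got {conf.get(k)!r}"
            for k, v in expected.items() if conf.get(k) != v]

sample = """
daemonize yes
pidfile /var/run/redis_6379.pid
port 6379
logfile /var/log/redis_6379.log
dir /var/redis/6379
"""
print(check_instance_conf(parse_conf(sample)))  # an empty list means all settings match
```

In practice you would read the real file, e.g. `parse_conf(open("/etc/redis/6379.conf").read())`.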
https://github.com/redis/redis-doc/blob/master//docs/install/install-redis/_index.md
This tutorial shows you how to install RedisInsight on an AWS EC2 instance and manage ElastiCache Redis instances using RedisInsight. To complete this tutorial you must have access to the AWS Console and permissions to launch EC2 instances.

Step 1: Create a new IAM Role (optional)
--------------

RedisInsight needs read-only access to the S3 and ElastiCache APIs. This step is optional.

1. Log in to the AWS Console and navigate to the IAM screen.
1. Create a new IAM Role.
1. Under *Select type of trusted entity*, choose EC2. The role is used by an EC2 instance.
1. Assign the following permissions:
   * AmazonS3ReadOnlyAccess
   * AmazonElastiCacheReadOnlyAccess

Step 2: Launch EC2 Instance
--------------

Next, launch an EC2 instance.

1. Navigate to EC2 under the AWS Console.
1. Click Launch Instance.
1. Choose the 64-bit Amazon Linux AMI.
1. Choose at least a t2.medium instance. The size of the instance depends on the memory used by the ElastiCache instance that you want to analyze.
1. Under Configure Instance:
   * Choose the VPC that has your ElastiCache instances.
   * Choose a subnet that has network access to your ElastiCache instances.
   * Ensure that your EC2 instance has a public IP address.
   * Assign the IAM role that you created in Step 1.
1. Under the storage section, allocate at least 100 GiB of storage.
1. Under security group, ensure that:
   * Incoming traffic is allowed on port 5540.
   * Incoming traffic is allowed on port 22 only during installation.
1. Review and launch the EC2 instance.

Step 3: Verify permissions and connectivity
----------

Next, verify that the EC2 instance has the required IAM permissions and can connect to ElastiCache Redis instances.

1. SSH into the newly launched EC2 instance.
1. Open a command prompt.
1. Run the command `aws s3 ls`. This should list all S3 buckets.
1. If the `aws` command cannot be found, make sure your EC2 instance is based on Amazon Linux.
1. Next, find the hostname of the ElastiCache instance you want to analyze and run the command `echo info | nc <hostname> 6379`.
1. If you see some details about the ElastiCache Redis instance, you can proceed to the next step.
1. If you cannot connect to Redis, you should review your VPC, subnet, and security group settings.

Step 4: Install Docker on EC2
-------

Next, install Docker on the EC2 instance. Run the following commands:

1. `sudo yum update -y`
1. `sudo yum install -y docker`
1. `sudo service docker start`
1. `sudo usermod -a -G docker ec2-user`
1. Log out and log back in again to pick up the new docker group permissions.
1. To verify, run `docker ps`. You should see some output without having to run `sudo`.

Step 5: Run RedisInsight in the Docker container
-------

Finally, install RedisInsight using one of the options described below.

1. If you do not want to persist your RedisInsight data:

   ```bash
   docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest
   ```

2. If you want to persist your RedisInsight data, first attach the Docker volume to the `/data` path and then run the following command:

   ```bash
   docker run -d --name redisinsight -p 5540:5540 -v redisinsight:/data redis/redisinsight:latest
   ```

   If the previous command returns a permission error, ensure that the user with `ID = 1000` has the necessary permissions to access the volume provided (`redisinsight` in the command above).

Find the IP address of your EC2 instance and launch your browser at `http://<EC2 IP address>:5540`. Accept the EULA and start using RedisInsight.

RedisInsight also provides a health check endpoint at `http://<EC2 IP address>:5540/api/health/` to monitor the health of the running container.

Summary
------

In this guide, we installed
RedisInsight on an AWS EC2 instance running Docker. As a next step, you should add an ElastiCache Redis instance and then run the memory analysis.
https://github.com/redis/redis-doc/blob/master//docs/install/install-redisinsight/install-on-aws.md
This tutorial shows how to install RedisInsight on [Kubernetes](https://kubernetes.io/) (K8s). This is an easy way to use RedisInsight with a [Redis Enterprise K8s deployment](https://redis.io/docs/about/redis-enterprise/#:~:text=and%20Multi%2Dcloud-,Redis%20Enterprise%20Software,-Redis%20Enterprise%20Software).

## Create the RedisInsight deployment and service

Below is an annotated YAML file that will create a RedisInsight deployment and a service in a K8s cluster.

1. Create a new file named `redisinsight.yaml` with the content below.

```yaml
# RedisInsight service with name 'redisinsight-service'
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-service       # name should not be 'redisinsight'
                                   # since the service creates
                                   # environment variables that
                                   # conflict with redisinsight
                                   # application's environment
                                   # variables `RI_APP_HOST` and
                                   # `RI_APP_PORT`
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 5540
  selector:
    app: redisinsight
---
# RedisInsight deployment with name 'redisinsight'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight # deployment name
  labels:
    app: redisinsight # deployment label
spec:
  replicas: 1 # a single replica pod
  selector:
    matchLabels:
      app: redisinsight # which pods the deployment manages, as defined by the pod template
  template: # pod template
    metadata:
      labels:
        app: redisinsight # label for pod/s
    spec:
      containers:
        - name: redisinsight # container name (DNS_LABEL, unique)
          image: redis/redisinsight:latest # repo/image
          imagePullPolicy: IfNotPresent # installs the latest RedisInsight version
          volumeMounts:
            - name: redisinsight # pod volumes to mount into the container's filesystem; cannot be updated
              mountPath: /data
          ports:
            - containerPort: 5540 # exposed container port and protocol
              protocol: TCP
      volumes:
        - name: redisinsight
          emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
```
2. Create the RedisInsight deployment and service:

   ```sh
   kubectl apply -f redisinsight.yaml
   ```

3. Once the deployment and service are successfully applied and complete, access RedisInsight. This can be accomplished by using the `<external-ip>` of the service we created to reach RedisInsight.

   ```sh
   $ kubectl get svc redisinsight-service
   NAME                   CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
   redisinsight-service   <cluster-ip>    <external-ip>   80:32143/TCP   1m
   ```

4. If you are using minikube, run `minikube service list` to list the service and access RedisInsight at `http://<minikube-ip>:<node-port>`.

   ```
   $ minikube service list
   |-------------|----------------------|--------------|-----------------------------|
   |  NAMESPACE  |         NAME         | TARGET PORT  |             URL             |
   |-------------|----------------------|--------------|-----------------------------|
   | default     | kubernetes           | No node port |                             |
   | default     | redisinsight-service |           80 | http://<minikube-ip>:<port> |
   | kube-system | kube-dns             | No node port |                             |
   |-------------|----------------------|--------------|-----------------------------|
   ```

## Create the RedisInsight deployment with persistent storage

Below is an annotated YAML file that will create a RedisInsight deployment in a K8s cluster. It will assign a persistent volume created from a volume claim template. Write access to the container is configured in an init container. When using deployments with persistent writeable volumes, it's best to set the strategy to `Recreate`. Otherwise you may find yourself with two pods trying to use the same volume.

1. Create a new file `redisinsight.yaml` with the content below.
```yaml
# RedisInsight service with name 'redisinsight-service'
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-service       # name should not be 'redisinsight'
                                   # since the service creates
                                   # environment variables that
                                   # conflict with redisinsight
                                   # application's environment
                                   # variables `RI_APP_HOST` and
                                   # `RI_APP_PORT`
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 5540
  selector:
    app: redisinsight
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redisinsight-pv-claim
  labels:
    app: redisinsight
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: default
---
# RedisInsight deployment with name 'redisinsight'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight # deployment name
  labels:
    app: redisinsight # deployment label
spec:
  replicas: 1 # a single replica pod
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: redisinsight # which pods the deployment manages, as defined by the pod template
  template: # pod template
    metadata:
      labels:
        app: redisinsight # label for pod/s
    spec:
      volumes:
        - name: redisinsight
          persistentVolumeClaim:
            claimName: redisinsight-pv-claim
      initContainers:
        - name: init
          image: busybox
          command:
            - /bin/sh
            - '-c'
            - |
              chown -R 1001 /data
          resources: {}
          volumeMounts:
            - name: redisinsight
              mountPath: /data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      containers:
        - name: redisinsight # container name (DNS_LABEL, unique)
          image: redis/redisinsight:latest # repo/image
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: redisinsight # pod volumes to mount into the container's filesystem; cannot be updated
              mountPath: /data
          ports:
            - containerPort: 5540 # exposed container port and protocol
              protocol: TCP
```

2. Create the RedisInsight deployment and service.

   ```sh
   kubectl apply -f redisinsight.yaml
   ```

## Create the RedisInsight deployment without a service

Below is an annotated YAML file that will create a RedisInsight deployment in a K8s cluster.

1. Create a new file `redisinsight.yaml` with the content below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight # deployment name
  labels:
    app: redisinsight # deployment label
spec:
  replicas: 1 # a single replica pod
  selector:
    matchLabels:
      app: redisinsight # which pods the deployment manages, as defined by the pod template
  template: # pod template
    metadata:
      labels:
        app: redisinsight # label for pod/s
    spec:
      containers:
        - name: redisinsight # container name (DNS_LABEL, unique)
          image: redis/redisinsight:latest # repo/image
          imagePullPolicy: IfNotPresent
          env:
            # If there's a service named 'redisinsight' that exposes the
            # deployment, we manually set `RI_APP_HOST` and
            # `RI_APP_PORT` to override the service environment
            # variables.
            - name: RI_APP_HOST
              value: "0.0.0.0"
            - name: RI_APP_PORT
              value: "5540"
          volumeMounts:
            - name: redisinsight # pod volumes to mount into the container's filesystem; cannot be updated
              mountPath: /data
          ports:
            - containerPort: 5540 # exposed container port and protocol
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthcheck/ # exposed RI endpoint for healthcheck
              port: 5540 # exposed container port
            initialDelaySeconds: 5 # number of seconds to wait after the container starts before performing the liveness probe
            periodSeconds: 5 # period in seconds between liveness probes
            failureThreshold: 1 # number of liveness probe failures after which the container restarts
      volumes:
        - name: redisinsight
          emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
```

2. Create the RedisInsight deployment.

   ```sh
   kubectl apply -f redisinsight.yaml
   ```

   {{< alert title="Note" >}}
   If the deployment will be exposed by a service whose name is 'redisinsight', set the `RI_APP_HOST` and `RI_APP_PORT` environment variables to override the environment variables created by the service.
   {{< /alert >}}

3. Once the deployment has been successfully applied and the deployment is complete, access RedisInsight. This can be accomplished by exposing the deployment as a K8s Service or by using port forwarding, as in the example below:

   ```sh
   kubectl port-forward deployment/redisinsight 5540
   ```

   Open your browser and point to
https://github.com/redis/redis-doc/blob/master//docs/install/install-redisinsight/install-on-k8s.md
You can configure RedisInsight with the following environment variables.

| Variable | Purpose | Default | Additional info |
| --- | --- | --- | --- |
| RI_APP_PORT | The port that RedisInsight listens on | Docker: 5540; desktop: 5530 | See [Express Documentation](https://expressjs.com/en/api.html#app.listen) |
| RI_APP_HOST | The host that RedisInsight connects to | Docker: 0.0.0.0; desktop: 127.0.0.1 | See [Express Documentation](https://expressjs.com/en/api.html#app.listen) |
| RI_SERVER_TLS_KEY | Private key for HTTPS | n/a | Private key in [PEM format](https://www.ssl.com/guide/pem-der-crt-and-cer-x-509-encodings-and-conversions/#ftoc-heading-3). Can be a path to a file or a string in PEM format. |
| RI_SERVER_TLS_CERT | Certificate for supplied private key | n/a | Public certificate in [PEM format](https://www.ssl.com/guide/pem-der-crt-and-cer-x-509-encodings-and-conversions/#ftoc-heading-3). Can be a path to a file or a string in PEM format. |
| RI_ENCRYPTION_KEY | Key to encrypt data with | n/a | Available only for Docker. RedisInsight stores sensitive information (database passwords, Workbench history, etc.) locally (using [sqlite3](https://github.com/TryGhost/node-sqlite3)). This variable allows you to store sensitive information encrypted using the specified encryption key. |
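The `RI_APP_HOST`/`RI_APP_PORT` defaults in the table can be illustrated with a short sketch (an assumption about precedence, not RedisInsight source code): an environment variable, when set, overrides the per-build default.

```python
# Sketch: resolve RI_APP_HOST / RI_APP_PORT, with an environment variable
# taking precedence over the build's default from the table above.
import os

DOCKER_DEFAULTS = {"RI_APP_HOST": "0.0.0.0", "RI_APP_PORT": "5540"}
DESKTOP_DEFAULTS = {"RI_APP_HOST": "127.0.0.1", "RI_APP_PORT": "5530"}

def resolve(name: str, docker: bool = True, env=os.environ) -> str:
    """Return the configured value, falling back to the build's default."""
    defaults = DOCKER_DEFAULTS if docker else DESKTOP_DEFAULTS
    return env.get(name, defaults[name])

print(resolve("RI_APP_PORT", env={}))                       # Docker default: 5540
print(resolve("RI_APP_PORT", env={"RI_APP_PORT": "8080"}))  # overridden: 8080
```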
https://github.com/redis/redis-doc/blob/master//docs/install/install-redisinsight/env-variables.md
This tutorial shows how to install RedisInsight on [Docker](https://www.docker.com/) so you can use RedisInsight in development. See a separate guide for installing [RedisInsight on AWS](/docs/install/install-redisinsight/install-on-aws/).

## Install Docker

The first step is to [install Docker for your operating system](https://docs.docker.com/install/).

## Run RedisInsight Docker image

You can install RedisInsight using one of the options described below.

1. If you do not want to persist your RedisInsight data:

   ```bash
   docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest
   ```

2. If you want to persist your RedisInsight data, first attach the Docker volume to the `/data` path and then run the following command:

   ```bash
   docker run -d --name redisinsight -p 5540:5540 -v redisinsight:/data redis/redisinsight:latest
   ```

   If the previous command returns a permission error, ensure that the user with `ID = 1000` has the necessary permissions to access the volume provided (`redisinsight` in the command above).

Next, point your browser to `http://localhost:5540`.

RedisInsight also provides a health check endpoint at `http://localhost:5540/api/health/` to monitor the health of the running container.
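The health check endpoint mentioned above can be polled from a script while the container starts. The sketch below is an illustration, not part of the RedisInsight docs; the `fetch` parameter is injected so the retry logic can be exercised without a running container, and the default fetcher does a plain HTTP GET with `urllib`.

```python
# Sketch: wait for RedisInsight's /api/health/ endpoint to answer 200.
import urllib.request

def health_url(host: str = "localhost", port: int = 5540) -> str:
    return f"http://{host}:{port}/api/health/"

def _default_fetch(url: str) -> int:
    with urllib.request.urlopen(url, timeout=2) as resp:
        return resp.status

def wait_until_healthy(url: str, attempts: int = 5, fetch=_default_fetch) -> bool:
    """Return True as soon as the endpoint answers 200, False after N failures."""
    for _ in range(attempts):
        try:
            if fetch(url) == 200:
                return True
        except OSError:
            pass  # container not up yet; try again
    return False

# With a stubbed fetcher (no container needed):
print(wait_until_healthy(health_url(), fetch=lambda url: 200))  # True
```

Against a real container you would call `wait_until_healthy(health_url())` with the default fetcher, ideally with a short sleep between attempts.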
https://github.com/redis/redis-doc/blob/master//docs/install/install-redisinsight/install-on-docker.md
---
title: "Redis geospatial"
linkTitle: "Geospatial"
weight: 80
description: >
    Introduction to the Redis Geospatial data type
---

Redis geospatial indexes let you store coordinates and search for them. This data structure is useful for finding nearby points within a given radius or bounding box.

## Basic commands

* `GEOADD` adds a location to a given geospatial index (note that longitude comes before latitude with this command).
* `GEOSEARCH` returns locations within a given radius or a bounding box.

See the [complete list of geospatial index commands](https://redis.io/commands/?group=geo).

## Examples

Suppose you're building a mobile app that lets you find all of the bike rental stations closest to your current location.

Add several locations to a geospatial index:

{{< clients-example geo_tutorial geoadd >}}
> GEOADD bikes:rentable -122.27652 37.805186 station:1
(integer) 1
> GEOADD bikes:rentable -122.2674626 37.8062344 station:2
(integer) 1
> GEOADD bikes:rentable -122.2469854 37.8104049 station:3
(integer) 1
{{< /clients-example >}}

Find all locations within a 5 kilometer radius of a given location, and return the distance to each location:

{{< clients-example geo_tutorial geosearch >}}
> GEOSEARCH bikes:rentable FROMLONLAT -122.2612767 37.7936847 BYRADIUS 5 km WITHDIST
1) 1) "station:1"
   2) "1.8523"
2) 1) "station:2"
   2) "1.4979"
3) 1) "station:3"
   2) "2.2441"
{{< /clients-example >}}

## Learn more

* [Redis Geospatial Explained](https://www.youtube.com/watch?v=qftiVQraxmI) introduces geospatial indexes by showing you how to build a map of local park attractions.
* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis geospatial indexes in detail.
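The distances `GEOSEARCH` reports can be cross-checked with the haversine formula. This sketch is an illustration, not how the server computes them; the Earth radius constant is an assumption (the value Redis uses internally), so small differences from the server's output are expected.

```python
# Rough cross-check of the GEOSEARCH distances above using haversine.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6372.797560856  # assumed to match Redis's internal constant

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance between two (lon, lat) points, in kilometers."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

search_point = (-122.2612767, 37.7936847)   # the FROMLONLAT point above
stations = {
    "station:1": (-122.27652, 37.805186),
    "station:2": (-122.2674626, 37.8062344),
    "station:3": (-122.2469854, 37.8104049),
}
for name, (lon, lat) in stations.items():
    # Should land close to 1.8523, 1.4979 and 2.2441 km respectively
    print(name, round(haversine_km(*search_point, lon, lat), 4))
```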
https://github.com/redis/redis-doc/blob/master//docs/data-types/geospatial.md
A Redis sorted set is a collection of unique strings (members) ordered by an associated score. When more than one string has the same score, the strings are ordered lexicographically. Some use cases for sorted sets include:

* Leaderboards. For example, you can use sorted sets to easily maintain ordered lists of the highest scores in a massive online game.
* Rate limiters. In particular, you can use a sorted set to build a sliding-window rate limiter to prevent excessive API requests.

You can think of sorted sets as a mix between a Set and a Hash. Like sets, sorted sets are composed of unique, non-repeating string elements, so in some sense a sorted set is a set as well.

However, while elements inside sets are not ordered, every element in a sorted set is associated with a floating point value, called *the score* (this is why the type is also similar to a hash, since every element is mapped to a value).

Moreover, elements in a sorted set are *taken in order* (so they are not ordered on request; order is a peculiarity of the data structure used to represent sorted sets). They are ordered according to the following rule:

* If B and A are two elements with a different score, then A > B if A.score is > B.score.
* If B and A have exactly the same score, then A > B if the A string is lexicographically greater than the B string. B and A strings can't be equal since sorted sets only have unique elements.

Let's start with a simple example, we'll add all our racers and the score they got in the first race:

{{< clients-example ss_tutorial zadd >}}
> ZADD racer_scores 10 "Norem"
(integer) 1
> ZADD racer_scores 12 "Castilla"
(integer) 1
> ZADD racer_scores 8 "Sam-Bodden" 10 "Royce" 6 "Ford" 14 "Prickett"
(integer) 4
{{< /clients-example >}}

As you can see, `ZADD` is similar to `SADD`, but takes one additional argument (placed before the element to be added) which is the score.
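The ordering rule above amounts to sorting by the pair (score, member). A minimal sketch (not how Redis implements it, which uses a skip list) makes the tiebreak visible:

```python
# Sketch of the sorted-set ordering rule: score first, then member
# lexicographically for equal scores.

def zrange_order(members: dict) -> list:
    """Return member names in sorted-set order from a {name: score} mapping."""
    return [name for name, _ in sorted(members.items(),
                                       key=lambda kv: (kv[1], kv[0]))]

racer_scores = {"Norem": 10, "Castilla": 12, "Sam-Bodden": 8,
                "Royce": 10, "Ford": 6, "Prickett": 14}
# Norem and Royce both score 10, so "Norem" sorts first lexicographically:
print(zrange_order(racer_scores))
# → ['Ford', 'Sam-Bodden', 'Norem', 'Royce', 'Castilla', 'Prickett']
```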
`ZADD` is also variadic, so you are free to specify multiple score-value pairs, even if this is not used in the example above.

With sorted sets it is trivial to return a list of racers sorted by their scores because actually *they are already sorted*.

Implementation note: Sorted sets are implemented via a dual-ported data structure containing both a skip list and a hash table, so every time we add an element Redis performs an O(log(N)) operation. That's good, but when we ask for sorted elements Redis does not have to do any work at all, it's already sorted.

Note that the `ZRANGE` order is low to high, while the `ZREVRANGE` order is high to low:

{{< clients-example ss_tutorial zrange >}}
> ZRANGE racer_scores 0 -1
1) "Ford"
2) "Sam-Bodden"
3) "Norem"
4) "Royce"
5) "Castilla"
6) "Prickett"
> ZREVRANGE racer_scores 0 -1
1) "Prickett"
2) "Castilla"
3) "Royce"
4) "Norem"
5) "Sam-Bodden"
6) "Ford"
{{< /clients-example >}}

Note: 0 and -1 mean from element index 0 to the last element (-1 works here just as it does in the case of the `LRANGE` command).

It is possible to return scores as well, using the `WITHSCORES` argument:

{{< clients-example ss_tutorial zrange_withscores >}}
> ZRANGE racer_scores 0 -1 withscores
 1) "Ford"
 2) "6"
 3) "Sam-Bodden"
 4) "8"
 5) "Norem"
 6) "10"
 7) "Royce"
 8) "10"
 9) "Castilla"
10) "12"
11) "Prickett"
12) "14"
{{< /clients-example >}}

### Operating on ranges

Sorted sets are more powerful than this. They can operate on ranges. Let's get all the racers with 10
or fewer points. We use the `ZRANGEBYSCORE` command to do it:

{{< clients-example ss_tutorial zrangebyscore >}}
> ZRANGEBYSCORE racer_scores -inf 10
1) "Ford"
2) "Sam-Bodden"
3) "Norem"
4) "Royce"
{{< /clients-example >}}

We asked Redis to return all the elements with a score between negative infinity and 10 (both extremes are included).

To remove an element we'd simply call `ZREM` with the racer's name. It's also possible to remove ranges of elements. Let's remove racer Castilla along with all the racers with strictly fewer than 10 points:

{{< clients-example ss_tutorial zremrangebyscore >}}
> ZREM racer_scores "Castilla"
(integer) 1
> ZREMRANGEBYSCORE racer_scores -inf 9
(integer) 2
> ZRANGE racer_scores 0 -1
1) "Norem"
2) "Royce"
3) "Prickett"
{{< /clients-example >}}

`ZREMRANGEBYSCORE` is perhaps not the best command name, but it can be very useful, and returns the number of removed elements.

Another extremely useful operation defined for sorted set elements is the get-rank operation. It is possible to ask what is the position of an element in the set of ordered elements. The `ZREVRANK` command is also available in order to get the rank, considering the elements sorted in a descending way.
{{< clients-example ss_tutorial zrank >}}
> ZRANK racer_scores "Norem"
(integer) 0
> ZREVRANK racer_scores "Norem"
(integer) 3
{{< /clients-example >}}

### Lexicographical scores

In Redis 2.8, a new feature was introduced that allows getting ranges lexicographically, assuming elements in a sorted set are all inserted with the same identical score (elements are compared with the C `memcmp` function, so it is guaranteed that there is no collation, and every Redis instance will reply with the same output).

The main commands to operate with lexicographical ranges are `ZRANGEBYLEX`, `ZREVRANGEBYLEX`, `ZREMRANGEBYLEX` and `ZLEXCOUNT`.

For example, let's add again our list of racers, but this time using a score of zero for all the elements. We'll see that because of the sorted sets ordering rules, they are already sorted lexicographically. Using `ZRANGEBYLEX` we can ask for lexicographical ranges:

{{< clients-example ss_tutorial zadd_lex >}}
> ZADD racer_scores 0 "Norem" 0 "Sam-Bodden" 0 "Royce" 0 "Castilla" 0 "Prickett" 0 "Ford"
(integer) 3
> ZRANGE racer_scores 0 -1
1) "Castilla"
2) "Ford"
3) "Norem"
4) "Prickett"
5) "Royce"
6) "Sam-Bodden"
> ZRANGEBYLEX racer_scores [A [L
1) "Castilla"
2) "Ford"
{{< /clients-example >}}

Ranges can be inclusive or exclusive (depending on the first character), and the positive and negative infinite strings are specified with the `+` and `-` strings respectively. See the documentation for more information.

This feature is important because it allows us to use sorted sets as a generic index. For example, if you want to index elements by a 128-bit unsigned integer argument, all you need to do is to add elements into a sorted set with the same score (for example 0) but with a 16 byte prefix consisting of **the 128 bit number in big endian**.
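The property that makes this 16-byte prefix trick work can be checked in a few lines (an illustration, not from the Redis docs): fixed-width big-endian encodings of unsigned integers sort lexicographically in the same order as the integers themselves.

```python
# Check that big-endian byte encodings sort in numeric order (memcmp order).

def be128(n: int) -> bytes:
    """Encode a 128-bit unsigned integer as 16 big-endian bytes."""
    return n.to_bytes(16, "big")

numbers = [3, 2**100, 255, 256, 1, 2**64]
# Sorting the raw byte prefixes gives numeric order back:
decoded = [int.from_bytes(b, "big") for b in sorted(be128(n) for n in numbers)]
print(decoded == sorted(numbers))  # True
```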
Since numbers in big endian, when ordered lexicographically (in raw bytes order), are actually ordered numerically as well, you can ask for ranges in the 128 bit space, and get the element's value discarding the prefix.

If you want to see the feature in the context of a more serious demo, check the [Redis autocomplete demo](http://autocomplete.redis.io).

## Updating the score: leaderboards

Just a final note about sorted sets before switching to the next topic.
Sorted sets' scores can be updated at any time. Just calling `ZADD` against an element already included in the sorted set will update its score (and position) with O(log(N)) time complexity. As such, sorted sets are suitable when there are tons of updates.

Because of this characteristic a common use case is leaderboards. The typical application is a Facebook game where you combine the ability to take users sorted by their high score, plus the get-rank operation, in order to show the top-N users, and the user rank in the leaderboard (e.g., "you are the #4932 best score here").

## Examples

There are two ways we can use a sorted set to represent a leaderboard. If we know a racer's new score, we can update it directly via the `ZADD` command. However, if we want to add points to an existing score, we can use the `ZINCRBY` command.

{{< clients-example ss_tutorial leaderboard >}}
> ZADD racer_scores 100 "Wood"
(integer) 1
> ZADD racer_scores 100 "Henshaw"
(integer) 1
> ZADD racer_scores 150 "Henshaw"
(integer) 0
> ZINCRBY racer_scores 50 "Wood"
"150"
> ZINCRBY racer_scores 50 "Henshaw"
"200"
{{< /clients-example >}}

You'll see that `ZADD` returns 0 when the member already exists (the score is updated), while `ZINCRBY` returns the new score. The score for racer Henshaw went from 100, was changed to 150 with no regard for what score was there before, and then was incremented by 50 to 200.

## Basic commands

* `ZADD` adds a new member and associated score to a sorted set. If the member already exists, the score is updated.
* `ZRANGE` returns members of a sorted set, sorted within a given range.
* `ZRANK` returns the rank of the provided member, assuming the sorted set is in ascending order.
* `ZREVRANK` returns the rank of the provided member, assuming the sorted set is in descending order.

See the [complete list of sorted set commands](https://redis.io/commands/?group=sorted-set).

## Performance

Most sorted set operations are O(log(n)), where _n_ is the number of members.

Exercise some caution when running the `ZRANGE` command with large return values (e.g., in the tens of thousands or more). This command's time complexity is O(log(n) + m), where _m_ is the number of results returned.

## Alternatives

Redis sorted sets are sometimes used for indexing other Redis data structures. If you need to index and query your data, consider the [JSON](/docs/stack/json) data type and the [Search and query](/docs/stack/search) features.

## Learn more

* [Redis Sorted Sets Explained](https://www.youtube.com/watch?v=MUKlxdBQZ7g) is an entertaining introduction to sorted sets in Redis.
* [Redis University's RU101](https://university.redis.com/courses/ru101/) explores Redis sorted sets in detail.
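The leaderboard semantics above are easy to model locally. The following pure-Python sketch is an illustration only (Redis actually implements sorted sets with a skip list plus a hash table); it mimics the behavior of `ZADD`, `ZINCRBY`, and `ZREVRANK`:

```python
# Illustrative model of sorted-set leaderboard semantics, not Redis internals.
scores = {}  # member -> score

def zadd(member, score):
    """Set the member's score outright, like ZADD; returns 1 if the member is new."""
    is_new = member not in scores
    scores[member] = score
    return 1 if is_new else 0

def zincrby(member, delta):
    """Add delta to the member's score, like ZINCRBY; returns the new score."""
    scores[member] = scores.get(member, 0) + delta
    return scores[member]

def zrevrank(member):
    """0-based rank with the highest score first, like ZREVRANK."""
    ordered = sorted(scores, key=lambda m: scores[m], reverse=True)
    return ordered.index(member)

zadd("Wood", 100)       # -> 1 (new member)
zadd("Henshaw", 100)    # -> 1
zadd("Henshaw", 150)    # -> 0 (score overwritten, not added)
zincrby("Wood", 50)     # -> 150
zincrby("Henshaw", 50)  # -> 200
```

As in the CLI transcript, overwriting Henshaw's score with `zadd` ignores the old value, while `zincrby` adds to whatever is there.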
https://github.com/redis/redis-doc/blob/master//docs/data-types/sorted-sets.md
Bitmaps are not an actual data type, but a set of bit-oriented operations defined on the String type which is treated like a bit vector. Since strings are binary safe blobs and their maximum length is 512 MB, they are suitable to set up to 2^32 different bits.

You can perform bitwise operations on one or more strings. Some examples of bitmap use cases include:

* Efficient set representations for cases where the members of a set correspond to the integers 0-N.
* Object permissions, where each bit represents a particular permission, similar to the way that file systems store permissions.

## Basic commands

* `SETBIT` sets a bit at the provided offset to 0 or 1.
* `GETBIT` returns the value of a bit at a given offset.

See the [complete list of bitmap commands](https://redis.io/commands/?group=bitmap).

## Example

Suppose you have 1000 cyclists racing through the countryside, with sensors on their bikes labeled 0-999. You want to quickly determine whether a given sensor has pinged a tracking server within the hour to check in on a rider.

You can represent this scenario using a bitmap whose key references the current hour.

* Rider 123 pings the server on January 1, 2024 within the 00:00 hour. You can then confirm that rider 123 pinged the server. You can also check to see if rider 456 has pinged the server for that same hour.

{{< clients-example bitmap_tutorial ping >}}
> SETBIT pings:2024-01-01-00:00 123 1
(integer) 0
> GETBIT pings:2024-01-01-00:00 123
1
> GETBIT pings:2024-01-01-00:00 456
0
{{< /clients-example >}}

## Bit Operations

Bit operations are divided into two groups: constant-time single bit operations, like setting a bit to 1 or 0, or getting its value, and operations on groups of bits, for example counting the number of set bits in a given range of bits (e.g., population counting).

One of the biggest advantages of bitmaps is that they often provide extreme space savings when storing information.
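To make the bit numbering concrete, here is a rough pure-Python model of the `SETBIT`/`GETBIT`/`BITCOUNT` semantics used in the rider example. This is a sketch only; the most-significant-bit-first numbering and the auto-enlarging behavior match how Redis addresses bits within a string value.

```python
# Illustrative model of bitmap command semantics over a growable byte string.
buf = bytearray()

def setbit(offset, value):
    """Set one bit, like SETBIT; returns the previous bit value."""
    global buf
    byte, bit = divmod(offset, 8)
    if byte >= len(buf):                      # auto-enlarge, as SETBIT does
        buf.extend(b"\x00" * (byte - len(buf) + 1))
    old = (buf[byte] >> (7 - bit)) & 1        # bit 0 is the MSB of byte 0
    if value:
        buf[byte] |= 1 << (7 - bit)
    else:
        buf[byte] &= ~(1 << (7 - bit)) & 0xFF
    return old

def getbit(offset):
    """Read one bit, like GETBIT; out-of-range bits read as 0."""
    byte, bit = divmod(offset, 8)
    if byte >= len(buf):
        return 0
    return (buf[byte] >> (7 - bit)) & 1

def bitcount():
    """Population count over the whole string, like BITCOUNT."""
    return sum(bin(b).count("1") for b in buf)

setbit(123, 1)        # rider 123 pings; string grows to 16 bytes
```

After the call, `getbit(123)` is 1, `getbit(456)` is 0 (out of range), and `bitcount()` is 1, mirroring the CLI transcript.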
For example, in a system where different users are represented by incremental user IDs, it is possible to remember a single bit of information (for example, whether a user wants to receive a newsletter) for 4 billion users using just 512 MB of memory.

The `SETBIT` command takes as its first argument the bit number, and as its second argument the value to set the bit to, which is 1 or 0. The command automatically enlarges the string if the addressed bit is outside the current string length.

`GETBIT` just returns the value of the bit at the specified index. Out of range bits (addressing a bit that is outside the length of the string stored into the target key) are always considered to be zero.

There are three commands operating on groups of bits:

1. `BITOP` performs bit-wise operations between different strings. The provided operations are AND, OR, XOR and NOT.
2. `BITCOUNT` performs population counting, reporting the number of bits set to 1.
3. `BITPOS` finds the first bit having the specified value of 0 or 1.

Both `BITPOS` and `BITCOUNT` are able to operate with byte ranges of the string, instead of running for the whole length of the string. We can trivially see the number of bits that have been set in a bitmap:

{{< clients-example bitmap_tutorial bitcount >}}
> BITCOUNT pings:2024-01-01-00:00
(integer) 1
{{< /clients-example >}}

For example, imagine you want to know the longest streak of daily visits of your web site users. You start counting days starting from zero, that is the day you made your web site public, and set a bit with `SETBIT` every time the user
visits the web site. As a bit index you simply take the current unix time, subtract the initial offset, and divide by the number of seconds in a day (normally, 3600*24).

This way for each user you have a small string containing the visit information for each day. With `BITCOUNT` it is possible to easily get the number of days a given user visited the web site, while with a few `BITPOS` calls, or simply fetching and analyzing the bitmap client-side, it is possible to easily compute the longest streak.

Bitmaps are trivial to split into multiple keys, for example for the sake of sharding the data set, and because in general it is better to avoid working with huge keys. To split a bitmap across different keys instead of setting all the bits into a key, a trivial strategy is just to store M bits per key and obtain the key name with `bit-number/M` and the Nth bit to address inside the key with `bit-number MOD M`.

## Performance

`SETBIT` and `GETBIT` are O(1). `BITOP` is O(n), where _n_ is the length of the longest string in the comparison.

## Learn more

* [Redis Bitmaps Explained](https://www.youtube.com/watch?v=oj8LdJQjhJo) teaches you how to use bitmaps for map exploration in an online game.
* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis bitmaps in detail.
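The streak and sharding calculations described above are just integer arithmetic. A minimal sketch follows; the launch timestamp, the `M = 1024` bits-per-key choice, and the `visits:<n>` key naming are all hypothetical values chosen for illustration:

```python
# Sketch of the bit-index and sharding arithmetic described in the text.
SECONDS_PER_DAY = 3600 * 24
LAUNCH_TS = 1_700_000_000          # hypothetical site-launch unix time
M = 1024                           # bits stored per key (arbitrary choice)

def day_index(visit_ts):
    """Bit index for a visit: whole days elapsed since launch."""
    return (visit_ts - LAUNCH_TS) // SECONDS_PER_DAY

def shard(bit_number):
    """(key name, bit offset inside that key) for a sharded bitmap:
    key is bit-number / M, offset is bit-number MOD M."""
    return f"visits:{bit_number // M}", bit_number % M

# A visit ten days after launch lands on bit 10:
idx = day_index(LAUNCH_TS + 10 * SECONDS_PER_DAY)
# Bit 3000 of the logical bitmap lives in the third key:
key, offset = shard(3000)
```

Each visit would then be recorded with `SETBIT key offset 1`, and a streak can be found by scanning the per-user bitmap client-side.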
https://github.com/redis/redis-doc/blob/master//docs/data-types/bitmaps.md
---
title: "Redis Strings"
linkTitle: "Strings"
weight: 10
description: >
    Introduction to Redis strings
---

Redis strings store sequences of bytes, including text, serialized objects, and binary arrays. As such, strings are the simplest type of value you can associate with a Redis key. They're often used for caching, but they support additional functionality that lets you implement counters and perform bitwise operations, too.

Since Redis keys are strings, when we use the string type as a value too, we are mapping a string to another string. The string data type is useful for a number of use cases, like caching HTML fragments or pages.

{{< clients-example set_tutorial set_get >}}
> SET bike:1 Deimos
OK
> GET bike:1
"Deimos"
{{< /clients-example >}}

As you can see, using the `SET` and the `GET` commands is the way we set and retrieve a string value. Note that `SET` will replace any existing value already stored into the key, in the case that the key already exists, even if the key is associated with a non-string value. So `SET` performs an assignment.

Values can be strings (including binary data) of every kind, for instance you can store a jpeg image inside a value. A value can't be bigger than 512 MB.

The `SET` command has interesting options, that are provided as additional arguments. For example, I may ask `SET` to fail if the key already exists, or the opposite, that it only succeeds if the key already exists:

{{< clients-example set_tutorial setnx_xx >}}
> set bike:1 bike nx
(nil)
> set bike:1 bike xx
OK
{{< /clients-example >}}

There are a number of other commands for operating on strings. For example the `GETSET` command sets a key to a new value, returning the old value as the result. You can use this command, for example, if you have a system that increments a Redis key using `INCR` every time your web site receives a new visitor. You may want to collect this information once every hour, without losing a single increment.
You can `GETSET` the key, assigning it the new value of "0" and reading the old value back.

The ability to set or retrieve the value of multiple keys in a single command is also useful for reduced latency. For this reason there are the `MSET` and `MGET` commands:

{{< clients-example set_tutorial mset >}}
> mset bike:1 "Deimos" bike:2 "Ares" bike:3 "Vanth"
OK
> mget bike:1 bike:2 bike:3
1) "Deimos"
2) "Ares"
3) "Vanth"
{{< /clients-example >}}

When `MGET` is used, Redis returns an array of values.

### Strings as counters

Even if strings are the basic values of Redis, there are interesting operations you can perform with them. For instance, one is atomic increment:

{{< clients-example set_tutorial incr >}}
> set total_crashes 0
OK
> incr total_crashes
(integer) 1
> incrby total_crashes 10
(integer) 11
{{< /clients-example >}}

The `INCR` command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new value. There are other similar commands like `INCRBY`, `DECR` and `DECRBY`. Internally it's always the same command, acting in a slightly different way.

What does it mean that INCR is atomic? That even multiple clients issuing INCR against the same key will never enter into a race condition. For instance,
it will never happen that client 1 reads "10", client 2 reads "10" at the same time, both increment to 11, and set the new value to 11. The final value will always be 12 and the read-increment-set operation is performed while all the other clients are not executing a command at the same time.

## Limits

By default, a single Redis string can be a maximum of 512 MB.

## Basic commands

### Getting and setting Strings

* `SET` stores a string value.
* `SETNX` stores a string value only if the key doesn't already exist. Useful for implementing locks.
* `GET` retrieves a string value.
* `MGET` retrieves multiple string values in a single operation.

### Managing counters

* `INCRBY` atomically increments (and decrements when passing a negative number) counters stored at a given key.
* Another command exists for floating point counters: `INCRBYFLOAT`.

### Bitwise operations

To perform bitwise operations on a string, see the [bitmaps data type](/docs/data-types/bitmaps) docs.

See the [complete list of string commands](/commands/?group=string).

## Performance

Most string operations are O(1), which means they're highly efficient. However, be careful with the `SUBSTR`, `GETRANGE`, and `SETRANGE` commands, which can be O(n). These random-access string commands may cause performance issues when dealing with large strings.

## Alternatives

If you're storing structured data as a serialized string, you may also want to consider Redis [hashes](/docs/data-types/hashes) or [JSON](/docs/stack/json).

## Learn more

* [Redis Strings Explained](https://www.youtube.com/watch?v=7CUt4yWeRQE) is a short, comprehensive video explainer on Redis strings.
* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis strings in detail.
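The parse-increment-store cycle that `INCR` and `INCRBY` perform can be sketched in plain Python. This illustrates the semantics only, not Redis internals; in Redis the whole cycle runs atomically inside the single-threaded command loop, which is what rules out the race condition described above.

```python
# Illustrative model of INCR/INCRBY semantics: the stored value stays a
# string; the command parses it as an integer, adds, and stores it back.
store = {}

def incrby(key, delta=1):
    value = int(store.get(key, "0"))   # a non-integer string would error here
    value += delta
    store[key] = str(value)            # re-serialized as a string
    return value

store["total_crashes"] = "0"
incrby("total_crashes")        # -> 1
incrby("total_crashes", 10)    # -> 11
```

With `delta=1` this models `INCR`; negative deltas model `DECR`/`DECRBY`, matching the note that internally it's always the same operation.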
https://github.com/redis/redis-doc/blob/master//docs/data-types/strings.md
---
title: "Redis hashes"
linkTitle: "Hashes"
weight: 40
description: >
    Introduction to Redis hashes
---

Redis hashes are record types structured as collections of field-value pairs. You can use hashes to represent basic objects and to store groupings of counters, among other things.

{{< clients-example hash_tutorial set_get_all >}}
> HSET bike:1 model Deimos brand Ergonom type 'Enduro bikes' price 4972
(integer) 4
> HGET bike:1 model
"Deimos"
> HGET bike:1 price
"4972"
> HGETALL bike:1
1) "model"
2) "Deimos"
3) "brand"
4) "Ergonom"
5) "type"
6) "Enduro bikes"
7) "price"
8) "4972"
{{< /clients-example >}}

While hashes are handy to represent *objects*, the number of fields you can put inside a hash has no practical limits (other than available memory), so you can use hashes in many different ways inside your application.

The command `HSET` sets multiple fields of the hash, while `HGET` retrieves a single field. `HMGET` is similar to `HGET` but returns an array of values:

{{< clients-example hash_tutorial hmget >}}
> HMGET bike:1 model price no-such-field
1) "Deimos"
2) "4972"
3) (nil)
{{< /clients-example >}}

There are commands that are able to perform operations on individual fields as well, like `HINCRBY`:

{{< clients-example hash_tutorial hincrby >}}
> HINCRBY bike:1 price 100
(integer) 5072
> HINCRBY bike:1 price -100
(integer) 4972
{{< /clients-example >}}

You can find the [full list of hash commands in the documentation](https://redis.io/commands#hash).

It is worth noting that small hashes (i.e., a few elements with small values) are encoded in a special way in memory that makes them very memory efficient.

## Basic commands

* `HSET` sets the value of one or more fields on a hash.
* `HGET` returns the value at a given field.
* `HMGET` returns the values at one or more given fields.
* `HINCRBY` increments the value at a given field by the integer provided.

See the [complete list of hash commands](https://redis.io/commands/?group=hash).
## Examples

* Store counters for the number of times bike:1 has been ridden, has crashed, or has changed owners:

{{< clients-example hash_tutorial incrby_get_mget >}}
> HINCRBY bike:1:stats rides 1
(integer) 1
> HINCRBY bike:1:stats rides 1
(integer) 2
> HINCRBY bike:1:stats rides 1
(integer) 3
> HINCRBY bike:1:stats crashes 1
(integer) 1
> HINCRBY bike:1:stats owners 1
(integer) 1
> HGET bike:1:stats rides
"3"
> HMGET bike:1:stats owners crashes
1) "1"
2) "1"
{{< /clients-example >}}

## Performance

Most Redis hash commands are O(1). A few commands - such as `HKEYS`, `HVALS`, and `HGETALL` - are O(n), where _n_ is the number of field-value pairs.

## Limits

Every hash can store up to 4,294,967,295 (2^32 - 1) field-value pairs. In practice, your hashes are limited only by the overall memory on the VMs hosting your Redis deployment.

## Learn more

* [Redis Hashes Explained](https://www.youtube.com/watch?v=-KdITaRkQ-U) is a short, comprehensive video explainer covering Redis hashes.
* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis hashes in detail.
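Because a hash is a keyed collection of field-value pairs, a dict of dicts models the commands above well. The sketch below is illustrative only; note that Redis stores field values as strings, so `HINCRBY` has to parse and re-serialize:

```python
# Illustrative model of HSET/HGET/HINCRBY semantics.
hashes = {}  # key -> {field: string value}

def hset(key, **fields):
    """Set one or more fields, like HSET; returns the count of NEW fields."""
    h = hashes.setdefault(key, {})
    added = sum(1 for f in fields if f not in h)
    h.update({f: str(v) for f, v in fields.items()})
    return added

def hget(key, field):
    """Return a single field's value, like HGET (None if missing)."""
    return hashes.get(key, {}).get(field)

def hincrby(key, field, delta):
    """Parse the field as an integer, add delta, store it back as a string."""
    h = hashes.setdefault(key, {})
    value = int(h.get(field, "0")) + delta
    h[field] = str(value)
    return value

hset("bike:1", model="Deimos", brand="Ergonom", price=4972)  # -> 3 new fields
hincrby("bike:1", "price", 100)                              # -> 5072
```

After the increment, `hget("bike:1", "price")` returns the string `"5072"`, matching how redis-cli shows hash values.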
https://github.com/redis/redis-doc/blob/master//docs/data-types/hashes.md
---
title: "Redis sets"
linkTitle: "Sets"
weight: 30
description: >
    Introduction to Redis sets
---

A Redis set is an unordered collection of unique strings (members). You can use Redis sets to efficiently:

* Track unique items (e.g., track all unique IP addresses accessing a given blog post).
* Represent relations (e.g., the set of all users with a given role).
* Perform common set operations such as intersection, unions, and differences.

## Basic commands

* `SADD` adds a new member to a set.
* `SREM` removes the specified member from the set.
* `SISMEMBER` tests a string for set membership.
* `SINTER` returns the set of members that two or more sets have in common (i.e., the intersection).
* `SCARD` returns the size (a.k.a. cardinality) of a set.

See the [complete list of set commands](https://redis.io/commands/?group=set).

## Examples

* Store the sets of bikes racing in France and the USA. Note that if you add a member that already exists, it will be ignored.

{{< clients-example sets_tutorial sadd >}}
> SADD bikes:racing:france bike:1
(integer) 1
> SADD bikes:racing:france bike:1
(integer) 0
> SADD bikes:racing:france bike:2 bike:3
(integer) 2
> SADD bikes:racing:usa bike:1 bike:4
(integer) 2
{{< /clients-example >}}

* Check whether bike:1 or bike:2 are racing in the US.

{{< clients-example sets_tutorial sismember >}}
> SISMEMBER bikes:racing:usa bike:1
(integer) 1
> SISMEMBER bikes:racing:usa bike:2
(integer) 0
{{< /clients-example >}}

* Which bikes are competing in both races?

{{< clients-example sets_tutorial sinter >}}
> SINTER bikes:racing:france bikes:racing:usa
1) "bike:1"
{{< /clients-example >}}

* How many bikes are racing in France?

{{< clients-example sets_tutorial scard >}}
> SCARD bikes:racing:france
(integer) 3
{{< /clients-example >}}

## Tutorial

The `SADD` command adds new elements to a set.
It's also possible to do a number of other operations against sets like testing if a given element already exists, performing the intersection, union or difference between multiple sets, and so forth:

{{< clients-example sets_tutorial sadd_smembers >}}
> SADD bikes:racing:france bike:1 bike:2 bike:3
(integer) 3
> SMEMBERS bikes:racing:france
1) bike:3
2) bike:1
3) bike:2
{{< /clients-example >}}

Here I've added three elements to my set and told Redis to return all the elements. There is no order guarantee with a set. Redis is free to return the elements in any order at every call.

Redis has commands to test for set membership. These commands can be used on single as well as multiple items:

{{< clients-example sets_tutorial smismember >}}
> SISMEMBER bikes:racing:france bike:1
(integer) 1
> SMISMEMBER bikes:racing:france bike:2 bike:3 bike:4
1) (integer) 1
2) (integer) 1
3) (integer) 0
{{< /clients-example >}}

We can also find the difference between two sets. For instance, we may want to know which bikes are racing in France but not in the USA:

{{< clients-example sets_tutorial sdiff >}}
> SADD bikes:racing:usa bike:1 bike:4
(integer) 2
> SDIFF bikes:racing:france bikes:racing:usa
1) "bike:3"
2) "bike:2"
{{< /clients-example >}}

There are other non-trivial operations that are still easy to implement using the right Redis commands. For instance, we may want a list of all the bikes racing in France, the USA, and some other races. We can do this using the `SINTER` command, which performs the intersection between different sets. In addition to intersection you can also perform unions, difference, and more.
For example if we add a third race we can see some of these commands in action:

{{< clients-example sets_tutorial multisets >}}
> SADD bikes:racing:france bike:1 bike:2 bike:3
(integer) 3
> SADD bikes:racing:usa bike:1 bike:4
(integer) 2
> SADD bikes:racing:italy bike:1 bike:2 bike:3 bike:4
(integer) 4
> SINTER bikes:racing:france bikes:racing:usa bikes:racing:italy
1) "bike:1"
> SUNION bikes:racing:france bikes:racing:usa bikes:racing:italy
1) "bike:2"
2) "bike:1"
3) "bike:4"
4) "bike:3"
> SDIFF bikes:racing:france bikes:racing:usa bikes:racing:italy
(empty array)
> SDIFF bikes:racing:france bikes:racing:usa
1) "bike:3"
2) "bike:2"
> SDIFF bikes:racing:usa bikes:racing:france
1) "bike:4"
{{< /clients-example >}}

You'll note that the `SDIFF` command returns an empty array when the difference between all sets is empty. You'll also note that the order of sets passed to `SDIFF` matters, since the difference is not commutative.

When you want to remove items from a set, you can use the `SREM` command to remove one or more items from a set, or you can use the `SPOP` command to remove a random item from a set. You can also _return_ a random item from a set without removing it using the `SRANDMEMBER` command:

{{< clients-example sets_tutorial srem >}}
> SADD bikes:racing:france bike:1 bike:2 bike:3 bike:4 bike:5
(integer) 5
> SREM bikes:racing:france bike:1
(integer) 1
> SPOP bikes:racing:france
"bike:3"
> SMEMBERS bikes:racing:france
1) "bike:2"
2) "bike:4"
3) "bike:5"
> SRANDMEMBER bikes:racing:france
"bike:2"
{{< /clients-example >}}

## Limits

The max size of a Redis set is 2^32 - 1 (4,294,967,295) members.

## Performance

Most set operations, including adding, removing, and checking whether an item is a set member, are O(1). This means that they're highly efficient.

However, for large sets with hundreds of thousands of members or more, you should exercise caution when running the `SMEMBERS` command. This command is O(n) and returns the entire set in a single response.
As an alternative, consider `SSCAN`, which lets you retrieve all members of a set iteratively.

## Alternatives

Set membership checks on large datasets (or on streaming data) can use a lot of memory. If you're concerned about memory usage and don't need perfect precision, consider a [Bloom filter or Cuckoo filter](/docs/stack/bloom) as an alternative to a set.

Redis sets are frequently used as a kind of index. If you need to index and query your data, consider the [JSON](/docs/stack/json) data type and the [Search and query](/docs/stack/search) features.

## Learn more

* [Redis Sets Explained](https://www.youtube.com/watch?v=PKdCppSNTGQ) and [Redis Sets Elaborated](https://www.youtube.com/watch?v=aRw5ME_5kMY) are two short but thorough video explainers covering Redis sets.
* [Redis University's RU101](https://university.redis.com/courses/ru101/) explores Redis sets in detail.
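Python's built-in `set` type maps directly onto these commands, which makes the semantics easy to check locally. This is a sketch; unlike Redis sets, these live in one process and aren't shared between clients:

```python
# Local model of SINTER/SUNION/SDIFF over the three races from the tutorial.
france = {"bike:1", "bike:2", "bike:3"}
usa = {"bike:1", "bike:4"}
italy = {"bike:1", "bike:2", "bike:3", "bike:4"}

inter = france & usa & italy        # SINTER: members common to all sets
union = france | usa | italy        # SUNION: members of any set
diff_fu = france - usa              # SDIFF france usa
diff_uf = usa - france              # SDIFF usa france: order matters,
                                    # the difference is not commutative
```

As in the CLI transcript, `diff_fu` and `diff_uf` differ, which is why the order of keys passed to `SDIFF` matters.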
https://github.com/redis/redis-doc/blob/master//docs/data-types/sets.md
A Redis stream is a data structure that acts like an append-only log but also implements several operations to overcome some of the limits of a typical append-only log. These include random access in O(1) time and complex consumption strategies, such as consumer groups.

You can use streams to record and simultaneously syndicate events in real time. Examples of Redis stream use cases include:

* Event sourcing (e.g., tracking user actions, clicks, etc.)
* Sensor monitoring (e.g., readings from devices in the field)
* Notifications (e.g., storing a record of each user's notifications in a separate stream)

Redis generates a unique ID for each stream entry. You can use these IDs to retrieve their associated entries later or to read and process all subsequent entries in the stream. Note that because these IDs are related to time, the ones shown here may vary and will be different from the IDs you see in your own Redis instance.

Redis streams support several trimming strategies (to prevent streams from growing unbounded) and more than one consumption strategy (see `XREAD`, `XREADGROUP`, and `XRANGE`).

## Basic commands

* `XADD` adds a new entry to a stream.
* `XREAD` reads one or more entries, starting at a given position and moving forward in time.
* `XRANGE` returns a range of entries between two supplied entry IDs.
* `XLEN` returns the length of a stream.

See the [complete list of stream commands](https://redis.io/commands/?group=stream).
## Examples

* When our racers pass a checkpoint, we add a stream entry for each racer that includes the racer's name, speed, position, and location ID:

{{< clients-example stream_tutorial xadd >}}
> XADD race:france * rider Castilla speed 30.2 position 1 location_id 1
"1692632086370-0"
> XADD race:france * rider Norem speed 28.8 position 3 location_id 1
"1692632094485-0"
> XADD race:france * rider Prickett speed 29.7 position 2 location_id 1
"1692632102976-0"
{{< /clients-example >}}

* Read two stream entries starting at ID `1692632086370-0`:

{{< clients-example stream_tutorial xrange >}}
> XRANGE race:france 1692632086370-0 + COUNT 2
1) 1) "1692632086370-0"
   2) 1) "rider"
      2) "Castilla"
      3) "speed"
      4) "30.2"
      5) "position"
      6) "1"
      7) "location_id"
      8) "1"
2) 1) "1692632094485-0"
   2) 1) "rider"
      2) "Norem"
      3) "speed"
      4) "28.8"
      5) "position"
      6) "3"
      7) "location_id"
      8) "1"
{{< /clients-example >}}

* Read up to 100 new stream entries, starting at the end of the stream, and block for up to 300 ms if no entries are being written:

{{< clients-example stream_tutorial xread_block >}}
> XREAD COUNT 100 BLOCK 300 STREAMS race:france $
(nil)
{{< /clients-example >}}

## Performance

Adding an entry to a stream is O(1). Accessing any single entry is O(n), where _n_ is the length of the ID. Since stream IDs are typically short and of a fixed length, this effectively reduces to a constant time lookup. For details on why, note that streams are implemented as [radix trees](https://en.wikipedia.org/wiki/Radix_tree).

Simply put, Redis streams provide highly efficient inserts and reads. See each command's time complexity for the details.

## Streams basics

Streams are an append-only data structure. The fundamental write command, called `XADD`, appends a new entry to the specified stream.
Each stream entry consists of one or more field-value pairs, somewhat like a dictionary or a Redis hash:

{{< clients-example stream_tutorial xadd_2 >}}
> XADD race:france * rider Castilla speed 29.9 position 1 location_id 2
"1692632147973-0"
{{< /clients-example >}}

The above call to the `XADD` command adds an entry `rider: Castilla, speed: 29.9, position: 1, location_id: 2` to the stream at key `race:france`, using an auto-generated entry ID, which is the one returned by the command, specifically `1692632147973-0`. It gets as its first argument
the key name `race:france`, the second argument is the entry ID that identifies every entry inside a stream. However, in this case, we passed `*` because we want the server to generate a new ID for us. Every new ID will be monotonically increasing, so in more simple terms, every new entry added will have a higher ID compared to all the past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later.

The fact that each Stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used in order to identify a given entry. Returning back to our `XADD` example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry.

It is possible to get the number of items inside a Stream just using the `XLEN` command:

{{< clients-example stream_tutorial xlen >}}
> XLEN race:france
(integer) 4
{{< /clients-example >}}

### Entry IDs

The entry ID returned by the `XADD` command, and identifying univocally each entry inside a given stream, is composed of two parts:

```
<millisecondsTime>-<sequenceNumber>
```

The milliseconds time part is actually the local time in the local Redis node generating the stream ID, however if the current milliseconds time happens to be smaller than the previous entry time, then the previous entry time is used instead, so if a clock jumps backward the monotonically incrementing ID property still holds. The sequence number is used for entries created in the same millisecond.
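The ID rule just described (the milliseconds part never moves backwards, and the sequence number breaks ties within a millisecond) can be sketched as follows. This is an illustration of the rule only, not the actual Redis implementation:

```python
# Illustrative model of monotonic <ms>-<seq> stream ID generation.
last_ms, last_seq = 0, 0

def next_stream_id(now_ms):
    """Generate the next monotonically increasing stream ID."""
    global last_ms, last_seq
    if now_ms > last_ms:
        last_ms, last_seq = now_ms, 0  # new millisecond: reset the sequence
    else:
        last_seq += 1                  # same ms, or clock jumped backwards
    return f"{last_ms}-{last_seq}"

id1 = next_stream_id(1692632086370)   # -> "1692632086370-0"
id2 = next_stream_id(1692632086370)   # same ms -> "1692632086370-1"
id3 = next_stream_id(1692632086100)   # clock jumped back: ms part is kept
```

Even though the third call's clock reads earlier than the second's, the generated ID keeps the previous milliseconds value and bumps the sequence number, so IDs stay strictly increasing.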
Since the sequence number is 64 bit wide, in practical terms there is no limit to the number of entries that can be generated within the same millisecond.

The format of such IDs may look strange at first, and the gentle reader may wonder why the time is part of the ID. The reason is that Redis streams support range queries by ID. Because the ID is related to the time the entry is generated, this gives the ability to query for time ranges basically for free. We will see this soon while covering the `XRANGE` command.

If for some reason the user needs incremental IDs that are not related to time but are actually associated to another external system ID, as previously mentioned, the `XADD` command can take an explicit ID instead of the `*` wildcard ID that triggers auto-generation, like in the following examples:

{{< clients-example stream_tutorial xadd_id >}}
> XADD race:usa 0-1 racer Castilla
0-1
> XADD race:usa 0-2 racer Norem
0-2
{{< /clients-example >}}

Note that in this case, the minimum ID is 0-1 and that the command will not accept an ID equal or smaller than a previous one:

{{< clients-example stream_tutorial xadd_bad_id >}}
> XADD race:usa 0-1 racer Prickett
(error) ERR The ID specified in XADD is equal or smaller than the target stream top item
{{< /clients-example >}}

If you're running Redis 7 or later, you can also provide an explicit ID consisting of the milliseconds part only. In this case, the sequence portion of the ID will be automatically generated. To do this, use the syntax below:

{{< clients-example stream_tutorial xadd_7 >}}
https://github.com/redis/redis-doc/blob/master//docs/data-types/streams.md
If you're running Redis 7 or later, you can also provide an explicit ID consisting of the milliseconds part only. In this case, the sequence portion of the ID will be automatically generated. To do this, use the syntax below: {{< clients-example stream\_tutorial xadd\_7 >}} > XADD race:usa 0-\* racer Prickett 0-3 {{< /clients-example >}} ## Getting data from Streams Now we are finally able to append entries in our stream via `XADD`. However, while appending data to a stream is quite obvious, the way streams can be queried in order to extract data is not so obvious. If we continue with the analogy of the log file, one obvious way is to mimic what we normally do with the Unix command `tail -f`, that is, we may start to listen in order to get the new messages that are appended to the stream. Note that unlike the blocking list operations of Redis, where a given element will reach a single client which is blocking in a \*pop style\* operation like `BLPOP`, with streams we want multiple consumers to see the new messages appended to the stream (the same way many `tail -f` processes can see what is added to a log). Using the traditional terminology we want the streams to be able to \*fan out\* messages to multiple clients. However, this is just one potential access mode. We could also see a stream in quite a different way: not as a messaging system, but as a \*time series store\*. In this case, maybe it's also useful to get the new messages appended, but another natural query mode is to get messages by ranges of time, or alternatively to iterate the messages using a cursor to incrementally check all the history. This is definitely another useful access mode. 
Finally, if we see a stream from the point of view of consumers, we may want to access the stream in yet another way, that is, as a stream of messages that can be partitioned to multiple consumers that are processing such messages, so that groups of consumers can only see a subset of the messages arriving in a single stream. In this way, it is possible to scale the message processing across different consumers, without single consumers having to process all the messages: each consumer will just get different messages to process. This is basically what Kafka (TM) does with consumer groups. Reading messages via consumer groups is yet another interesting mode of reading from a Redis Stream. Redis Streams support all three of the query modes described above via different commands. The next sections will show them all, starting from the simplest and most direct to use: range queries. ### Querying by range: XRANGE and XREVRANGE To query the stream by range we are only required to specify two IDs, \*start\* and \*end\*. The range returned will include the elements having start or end as ID, so the range is inclusive. The two special IDs `-` and `+` respectively mean the smallest and the greatest ID possible. {{< clients-example stream\_tutorial xrange\_all >}} > XRANGE race:france - + 1) 1) "1692632086370-0" 2) 1) "rider" 2) "Castilla" 3) "speed" 4) "30.2" 5) "position" 6) "1" 7) "location\_id" 8) "1" 2) 1) "1692632094485-0" 2) 1) "rider" 2) "Norem" 3) "speed" 4) "28.8" 5) "position" 6) "3" 7) "location\_id" 8) "1" 3) 1) "1692632102976-0" 2) 1) "rider" 2) "Prickett" 3) "speed" 4) "29.7" 5) "position" 6) "2" 7) "location\_id" 8) "1" 4) 1) "1692632147973-0" 2) 1) "rider" 2) "Castilla" 3) "speed" 4) "29.9" 5) "position" 6) "1" 7) "location\_id" 8) "2" {{< /clients-example >}}
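The inclusive range semantics of `XRANGE`, including the special IDs `-` and `+`, can be sketched over an in-memory list of entries. This is an illustration only, not how Redis implements it, and the helper names `id_key` and `xrange_sketch` are invented for this example:

```ruby
# Split an ID like "1692632086370-0" into a comparable [ms, seq] pair.
def id_key(id)
  id.split("-").map(&:to_i)
end

# Sketch of XRANGE's inclusive filtering: "-" stands for the smallest
# possible ID and "+" for the greatest possible ID.
def xrange_sketch(entries, start_id, end_id)
  lo = start_id == "-" ? [0, 0] : id_key(start_id)
  hi = end_id == "+" ? [Float::INFINITY] : id_key(end_id)
  entries.select do |id, _fields|
    (id_key(id) <=> lo) >= 0 && (id_key(id) <=> hi) <= 0
  end
end
```

Note that both boundaries are included in the result, matching the behavior shown in the example above.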
Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, at the moment the entry was created (however note that streams are replicated with fully specified `XADD` commands, so the replicas will have identical IDs to the master). This means that I could query a range of time using `XRANGE`. In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way. For instance, if I want to query a two milliseconds period I could use: {{< clients-example stream\_tutorial xrange\_time >}} > XRANGE race:france 1692632086369 1692632086371 1) 1) "1692632086370-0" 2) 1) "rider" 2) "Castilla" 3) "speed" 4) "30.2" 5) "position" 6) "1" 7) "location\_id" 8) "1" {{< /clients-example >}} I have only a single entry in this range. However in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, `XRANGE` supports an optional \*\*COUNT\*\* option at the end. By specifying a count, I can just get the first \*N\* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example.
Let's assume that the stream `race:france` was populated with 4 items. To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2. {{< clients-example stream\_tutorial xrange\_step\_1 >}} > XRANGE race:france - + COUNT 2 1) 1) "1692632086370-0" 2) 1) "rider" 2) "Castilla" 3) "speed" 4) "30.2" 5) "position" 6) "1" 7) "location\_id" 8) "1" 2) 1) "1692632094485-0" 2) 1) "rider" 2) "Norem" 3) "speed" 4) "28.8" 5) "position" 6) "3" 7) "location\_id" 8) "1" {{< /clients-example >}} To continue the iteration with the next two items, I have to pick the last ID returned, that is `1692632094485-0`, and add the prefix `(` to it. The resulting exclusive range interval, that is `(1692632094485-0` in this case, can now be used as the new \*start\* argument for the next `XRANGE` call: {{< clients-example stream\_tutorial xrange\_step\_2 >}} > XRANGE race:france (1692632094485-0 + COUNT 2 1) 1) "1692632102976-0" 2) 1) "rider" 2) "Prickett" 3) "speed" 4) "29.7" 5) "position" 6) "2" 7) "location\_id" 8) "1" 2) 1) "1692632147973-0" 2) 1) "rider" 2) "Castilla" 3) "speed" 4) "29.9" 5) "position" 6) "1" 7) "location\_id" 8) "2" {{< /clients-example >}} Now that we've retrieved 4 items out of a stream that only had 4 entries in it, if we try to retrieve more items, we'll get an empty array: {{< clients-example stream\_tutorial xrange\_empty >}} > XRANGE race:france (1692632147973-0 + COUNT 2 (empty array) {{< /clients-example >}} Since `XRANGE` complexity is \*O(log(N))\*
to seek, and then \*O(M)\* to return M elements, with a small count the command has a logarithmic time complexity, which means that each step of the iteration is fast. So `XRANGE` is also the de facto \*streams iterator\* and does not require an \*\*XSCAN\*\* command. The command `XREVRANGE` is the equivalent of `XRANGE` but returning the elements in inverted order, so a practical use for `XREVRANGE` is to check what is the last item in a Stream: {{< clients-example stream\_tutorial xrevrange >}} > XREVRANGE race:france + - COUNT 1 1) 1) "1692632147973-0" 2) 1) "rider" 2) "Castilla" 3) "speed" 4) "29.9" 5) "position" 6) "1" 7) "location\_id" 8) "2" {{< /clients-example >}} Note that the `XREVRANGE` command takes the \*start\* and \*stop\* arguments in reverse order. ## Listening for new items with XREAD When we do not want to access items by a range in a stream, usually what we want instead is to \*subscribe\* to new items arriving to the stream. This concept may appear related to Redis Pub/Sub, where you subscribe to a channel, or to Redis blocking lists, where you wait for a key to get new elements to fetch, but there are fundamental differences in the way you consume a stream: 1. A stream can have multiple clients (consumers) waiting for data. Every new item, by default, will be delivered to \*every consumer\* that is waiting for data in a given stream. This behavior is different than blocking lists, where each consumer will get a different element. However, the ability to \*fan out\* to multiple consumers is similar to Pub/Sub. 2.
While in Pub/Sub messages are \*fire and forget\* and are never stored anyway, and while when using blocking lists, when a message is received by the client it is \*popped\* (effectively removed) from the list, streams work in a fundamentally different way. All the messages are appended in the stream indefinitely (unless the user explicitly asks to delete entries): different consumers will know what is a new message from its point of view by remembering the ID of the last message received. 3. Streams Consumer Groups provide a level of control that Pub/Sub or blocking lists cannot achieve, with different groups for the same stream, explicit acknowledgment of processed items, ability to inspect the pending items, claiming of unprocessed messages, and coherent history visibility for each single client, that is only able to see its private past history of messages. The command that provides the ability to listen for new messages arriving into a stream is called `XREAD`. It's a bit more complex than `XRANGE`, so we'll start showing simple forms, and later the whole command layout will be provided. {{< clients-example stream\_tutorial xread >}} > XREAD COUNT 2 STREAMS race:france 0 1) 1) "race:france" 2) 1) 1) "1692632086370-0" 2) 1) "rider" 2) "Castilla" 3) "speed" 4) "30.2" 5) "position" 6) "1" 7) "location\_id" 8) "1" 2) 1) "1692632094485-0" 2) 1) "rider" 2) "Norem" 3) "speed" 4) "28.8" 5) "position" 6) "3" 7) "location\_id" 8) "1" {{< /clients-example >}} The above is the non-blocking form of `XREAD`. Note that the \*\*COUNT\*\* option is not mandatory, in fact the only mandatory option of the command is the \*\*STREAMS\*\* option, that specifies a list of keys together with the corresponding maximum ID already seen for each
stream by the calling consumer, so that the command will provide the client only with messages with an ID greater than the one we specified. In the above command we wrote `STREAMS race:france 0` so we want all the messages in the Stream `race:france` having an ID greater than `0-0`. As you can see in the example above, the command returns the key name, because actually it is possible to call this command with more than one key to read from different streams at the same time. I could write, for instance: `STREAMS race:france race:italy 0 0`. Note how after the \*\*STREAMS\*\* option we need to provide the key names, and later the IDs. For this reason, the \*\*STREAMS\*\* option must always be the last option. Any other options must come before the \*\*STREAMS\*\* option. Apart from the fact that `XREAD` can access multiple streams at once, and that we are able to specify the last ID we own to just get newer messages, in this simple form the command is not doing something so different compared to `XRANGE`. However, the interesting part is that we can turn `XREAD` into a \*blocking command\* easily, by specifying the \*\*BLOCK\*\* argument: ``` > XREAD BLOCK 0 STREAMS race:france $ ``` Note that in the example above, other than removing \*\*COUNT\*\*, I specified the new \*\*BLOCK\*\* option with a timeout of 0 milliseconds (that means to never timeout). Moreover, instead of passing a normal ID for the stream `race:france` I passed the special ID `$`. This special ID means that `XREAD` should use as last ID the maximum ID already stored in the stream `race:france`, so that we will receive only \*new\* messages, starting from the time we started listening.
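The way `XREAD` interprets the requested ID, including the special `$`, can be sketched over an in-memory list of entries. This is an illustration of the semantics only, not how Redis implements `XREAD`, and the helper names are invented for this example:

```ruby
# Split an ID like "1692632086370-0" into a comparable [ms, seq] pair.
# An ID given as milliseconds only (e.g. "0") compares on that part alone.
def id_key(id)
  id.split("-").map(&:to_i)
end

# Sketch of XREAD's ID handling: "$" resolves to the greatest ID
# currently in the stream, and only entries with an ID strictly greater
# than the requested one are returned.
def xread_sketch(entries, last_seen)
  last_seen = entries.empty? ? "0-0" : entries.last[0] if last_seen == "$"
  entries.select { |id, _fields| (id_key(id) <=> id_key(last_seen)) == 1 }
end
```

Because `$` resolves to the current maximum ID, calling with `$` on a stream that receives no further entries returns nothing, which is why the blocking form is normally used with it.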
This is similar to the `tail -f` Unix command in some way. Note that when the \*\*BLOCK\*\* option is used, we do not have to use the special ID `$`. We can use any valid ID. If the command is able to serve our request immediately without blocking, it will do so, otherwise it will block. Normally if we want to consume the stream starting from new entries, we start with the ID `$`, and after that we continue using the ID of the last message received to make the next call, and so forth. The blocking form of `XREAD` is also able to listen to multiple Streams, just by specifying multiple key names. If the request can be served synchronously because there is at least one stream with elements greater than the corresponding ID we specified, it returns with the results. Otherwise, the command will block and will return the items of the first stream which gets new data (according to the specified ID). Similarly to blocking list operations, blocking stream reads are \*fair\* from the point of view of clients waiting for data, since the semantics is FIFO style. The first client that blocked for a given stream will be the first to be unblocked when new items are available. `XREAD` has no other options than \*\*COUNT\*\* and \*\*BLOCK\*\*, so it's a pretty basic command with a specific purpose to attach consumers to one or multiple streams. More powerful features to consume streams are available using the consumer groups API, however reading via consumer groups is implemented by a different command
called `XREADGROUP`, covered in the next section of this guide. ## Consumer groups When the task at hand is to consume the same stream from different clients, then `XREAD` already offers a way to \*fan-out\* to N clients, potentially also using replicas in order to provide more read scalability. However in certain problems what we want to do is not to provide the same stream of messages to many clients, but to provide a \*different subset\* of messages from the same stream to many clients. An obvious case where this is useful is that of messages which are slow to process: the ability to have N different workers that will receive different parts of the stream allows us to scale message processing, by routing different messages to different workers that are ready to do more work. In practical terms, if we imagine having three consumers C1, C2, C3, and a stream that contains the messages 1, 2, 3, 4, 5, 6, 7 then what we want is to serve the messages according to the following diagram:

```
1 -> C1
2 -> C2
3 -> C3
4 -> C1
5 -> C2
6 -> C3
7 -> C1
```

In order to achieve this, Redis uses a concept called \*consumer groups\*. It is very important to understand that Redis consumer groups have nothing to do, from an implementation standpoint, with Kafka (TM) consumer groups. Yet they are similar in functionality, so I decided to keep Kafka's (TM) terminology, as it originally popularized this idea. A consumer group is like a \*pseudo consumer\* that gets data from a stream, and actually serves multiple consumers, providing certain guarantees: 1.
Each message is served to a different consumer so that it is not possible that the same message will be delivered to multiple consumers. 2. Consumers are identified, within a consumer group, by a name, which is a case-sensitive string that the clients implementing consumers must choose. This means that even after a disconnect, the stream consumer group retains all the state, since the client will claim again to be the same consumer. However, this also means that it is up to the client to provide a unique identifier. 3. Each consumer group has the concept of the \*first ID never consumed\* so that, when a consumer asks for new messages, it can provide just messages that were not previously delivered. 4. Consuming a message, however, requires an explicit acknowledgment using a specific command. Redis interprets the acknowledgment as: this message was correctly processed so it can be evicted from the consumer group. 5. A consumer group tracks all the messages that are currently pending, that is, messages that were delivered to some consumer of the consumer group, but are yet to be acknowledged as processed. Thanks to this feature, when accessing the message history of a stream, each consumer \*will only see messages that were delivered to it\*. In a way, a consumer group can be imagined as some \*amount of state\* about a stream:

```
+----------------------------------------+
| consumer_group_name: mygroup           |
| consumer_group_stream: somekey         |
| last_delivered_id: 1292309234234-92    |
|                                        |
| consumers:                             |
|    "consumer-1" with pending messages  |
|       1292309234234-4                  |
|       1292309234232-8                  |
|    "consumer-42" with pending messages |
|       ... (and so forth)               |
+----------------------------------------+
```

If you
see this from this point of view, it is very simple to understand what a consumer group can do, how it is able to just provide consumers with their history of pending messages, and how consumers asking for new messages will just be served with message IDs greater than `last\_delivered\_id`. At the same time, if you look at the consumer group as an auxiliary data structure for Redis streams, it is obvious that a single stream can have multiple consumer groups, that have a different set of consumers. Actually, it is even possible for the same stream to have clients reading without consumer groups via `XREAD`, and clients reading via `XREADGROUP` in different consumer groups. Now it's time to zoom in to see the fundamental consumer group commands. They are the following: \* `XGROUP` is used in order to create, destroy and manage consumer groups. \* `XREADGROUP` is used to read from a stream via a consumer group. \* `XACK` is the command that allows a consumer to mark a pending message as correctly processed. ## Creating a consumer group Assuming I have a key `race:france` of type stream already existing, in order to create a consumer group I just need to do the following: {{< clients-example stream\_tutorial xgroup\_create >}} > XGROUP CREATE race:france france\_riders $ OK {{< /clients-example >}} As you can see in the command above when creating the consumer group we have to specify an ID, which in the example is just `$`.
This is needed because the consumer group, among its other state, must know what message to serve next when the first consumer connects, that is, what the \*last message ID\* was when the group was just created. If we provide `$` as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify `0` instead, the consumer group will consume \*all\* the messages in the stream history to start with. Of course, you can specify any other valid ID. What you know is that the consumer group will start delivering messages that are greater than the ID you specify. Because `$` means the current greatest ID in the stream, specifying `$` will have the effect of consuming only new messages. `XGROUP CREATE` also supports creating the stream automatically, if it doesn't exist, using the optional `MKSTREAM` subcommand as the last argument: {{< clients-example stream\_tutorial xgroup\_create\_mkstream >}} > XGROUP CREATE race:italy italy\_riders $ MKSTREAM OK {{< /clients-example >}} Now that the consumer group is created we can immediately try to read messages via the consumer group using the `XREADGROUP` command. We'll read from consumers, which we will call Alice and Bob, to see how the system will return different messages to Alice and Bob. `XREADGROUP` is very similar to `XREAD` and provides the same \*\*BLOCK\*\* option; otherwise it is a synchronous command. However there is a \*mandatory\* option that must always be specified, which is \*\*GROUP\*\*, and it has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option \*\*COUNT\*\* is also supported and is identical to the one in `XREAD`. We'll add riders to the race:italy stream and try reading something using the consumer group: Note: \*here
rider is the field name, and the name is the associated value. Remember that stream items are small dictionaries.\* {{< clients-example stream\_tutorial xgroup\_read >}} > XADD race:italy \* rider Castilla "1692632639151-0" > XADD race:italy \* rider Royce "1692632647899-0" > XADD race:italy \* rider Sam-Bodden "1692632662819-0" > XADD race:italy \* rider Prickett "1692632670501-0" > XADD race:italy \* rider Norem "1692632678249-0" > XREADGROUP GROUP italy\_riders Alice COUNT 1 STREAMS race:italy > 1) 1) "race:italy" 2) 1) 1) "1692632639151-0" 2) 1) "rider" 2) "Castilla" {{< /clients-example >}} `XREADGROUP` replies are just like `XREAD` replies. Note however the `GROUP <group-name> <consumer-name>` provided above. It states that I want to read from the stream using the consumer group `italy\_riders` and I'm the consumer `Alice`. Every time a consumer performs an operation with a consumer group, it must specify its name, uniquely identifying this consumer inside the group. There is another very important detail in the command line above: after the mandatory \*\*STREAMS\*\* option, the ID requested for the key `race:italy` is the special ID `>`. This special ID is only valid in the context of consumer groups, and it means: \*\*messages never delivered to other consumers so far\*\*. This is almost always what you want. However, it is also possible to specify a real ID, such as `0` or any other valid ID; in this case, `XREADGROUP` will just provide us with the \*\*history of pending messages\*\*, and we will never see new messages in the group.
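The interplay of `>` reads, history reads, and acknowledgments can be sketched with a small in-memory model. This illustrates the semantics only, not how Redis implements consumer groups, and the class and method names are invented for this example:

```ruby
# In-memory sketch of consumer-group delivery semantics: a read with
# ">" advances the group's last-delivered position and records the
# entry as pending for that consumer; a history read returns only that
# consumer's pending entries; an ack removes an entry from them.
class GroupSketch
  def initialize(entries)
    @entries = entries            # array of [id, fields], sorted by ID
    @last_delivered = -1          # index of last entry delivered so far
    @pending = Hash.new { |h, k| h[k] = [] }
  end

  # Equivalent of reading with the special ID ">".
  def read_new(consumer)
    return nil if @last_delivered + 1 >= @entries.length
    @last_delivered += 1
    entry = @entries[@last_delivered]
    @pending[consumer] << entry[0]
    entry
  end

  # Equivalent of reading with an explicit ID such as 0.
  def history(consumer)
    @pending[consumer]
  end

  # Equivalent of XACK: returns the number of entries acknowledged.
  def ack(consumer, id)
    @pending[consumer].delete(id) ? 1 : 0
  end
end
```

Note how each consumer only ever sees its own pending entries in a history read, mirroring the Alice and Bob examples in the text.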
So basically `XREADGROUP` has the following behavior based on the ID we specify: \* If the ID is the special ID `>` then the command will return only new messages never delivered to other consumers so far, and as a side effect, will update the consumer group's \*last ID\*. \* If the ID is any other valid numerical ID, then the command will let us access our \*history of pending messages\*. That is, the set of messages that were delivered to this specified consumer (identified by the provided name), and never acknowledged so far with `XACK`. We can test this behavior immediately by specifying an ID of 0, without any \*\*COUNT\*\* option: we'll just see the only pending message, that is, the one about Castilla: {{< clients-example stream\_tutorial xgroup\_read\_id >}} > XREADGROUP GROUP italy\_riders Alice STREAMS race:italy 0 1) 1) "race:italy" 2) 1) 1) "1692632639151-0" 2) 1) "rider" 2) "Castilla" {{< /clients-example >}} However, if we acknowledge the message as processed, it will no longer be part of the pending messages history, so the system will no longer report anything: {{< clients-example stream\_tutorial xack >}} > XACK race:italy italy\_riders 1692632639151-0 (integer) 1 > XREADGROUP GROUP italy\_riders Alice STREAMS race:italy 0 1) 1) "race:italy" 2) (empty array) {{< /clients-example >}} Don't worry if you don't yet know how `XACK` works; the idea is just that processed messages are no longer part of the history that we can access. Now it's Bob's turn to read something: {{< clients-example stream\_tutorial xgroup\_read\_bob >}} > XREADGROUP GROUP italy\_riders Bob COUNT 2 STREAMS race:italy > 1) 1) "race:italy" 2) 1) 1) "1692632647899-0" 2) 1) "rider" 2) "Royce" 2) 1) "1692632662819-0" 2) 1) "rider" 2) "Sam-Bodden" {{< /clients-example >}} Bob asked for a maximum of two messages and is reading via the same group `italy\_riders`. So what
happens is that Redis reports just \*new\* messages. As you can see the "Castilla" message is not delivered, since it was already delivered to Alice, so Bob gets Royce and Sam-Bodden and so forth. This way Alice, Bob, and any other consumer in the group, are able to read different messages from the same stream, to read their history of yet to process messages, or to mark messages as processed. This allows creating different topologies and semantics for consuming messages from a stream. There are a few things to keep in mind: \* Consumers are auto-created the first time they are mentioned, no need for explicit creation. \* Even with `XREADGROUP` you can read from multiple keys at the same time, however for this to work, you need to create a consumer group with the same name in every stream. This is not a common need, but it is worth mentioning that the feature is technically available. \* `XREADGROUP` is a \*write command\* because even if it reads from the stream, the consumer group is modified as a side effect of reading, so it can only be called on master instances. An example of a consumer implementation, using consumer groups, written in the Ruby language could be the following. The Ruby code is aimed to be readable by virtually any experienced programmer, even if they do not know Ruby:

```ruby
require 'redis'

if ARGV.length == 0
    puts "Please specify a consumer name"
    exit 1
end

ConsumerName = ARGV[0]
GroupName = "mygroup"
r = Redis.new

def process_message(id,msg)
    puts "[#{ConsumerName}] #{id} = #{msg.inspect}"
end

$lastid = '0-0'

puts "Consumer #{ConsumerName} starting..."
check_backlog = true
while true
    # Pick the ID based on the iteration: the first time we want to
    # read our pending messages, in case we crashed and are recovering.
    # Once we consumed our history, we can start getting new messages.
    if check_backlog
        myid = $lastid
    else
        myid = '>'
    end

    items = r.xreadgroup('GROUP',GroupName,ConsumerName,'BLOCK','2000','COUNT','10','STREAMS',:my_stream_key,myid)

    if items == nil
        puts "Timeout!"
        next
    end

    # If we receive an empty reply, it means we were consuming our history
    # and that the history is now empty. Let's start to consume new messages.
    check_backlog = false if items[0][1].length == 0

    items[0][1].each{|i|
        id,fields = i

        # Process the message
        process_message(id,fields)

        # Acknowledge the message as processed
        r.xack(:my_stream_key,GroupName,id)

        $lastid = id
    }
end
```

As you can see the idea here is to start by consuming the history, that is, our list of pending messages. This is useful because the consumer may have crashed before, so in the event of a restart we want to re-read messages that were delivered to us without getting acknowledged. Note that we might process a message multiple times or one time (at least in the case of consumer failures, but there are also the limits of Redis persistence and replication involved, see the specific section about this topic). Once the history was consumed, and we get an empty list of messages, we can switch to using the `>` special ID in order to consume new messages. ## Recovering from permanent failures The example above allows us to write consumers that participate in the same consumer group, each taking a subset of messages to process, and when recovering from failures re-reading the pending messages that were delivered just to
https://github.com/redis/redis-doc/blob/master//docs/data-types/streams.md
them. However in the real world consumers may permanently fail and never recover. What happens to the pending messages of the consumer that never recovers after stopping for any reason?

Redis consumer groups offer a feature that is used in these situations in order to *claim* the pending messages of a given consumer so that such messages will change ownership and will be re-assigned to a different consumer. The feature is very explicit. A consumer has to inspect the list of pending messages, and will have to claim specific messages using a special command, otherwise the server will leave the messages pending forever and assigned to the old consumer. In this way different applications can choose whether to use such a feature or not, and exactly how to use it.

The first step of this process is just a command that provides observability of pending entries in the consumer group and is called `XPENDING`. This is a read-only command which is always safe to call and will not change ownership of any message. In its simplest form, the command is called with two arguments: the name of the stream and the name of the consumer group.

{{< clients-example stream_tutorial xpending >}}
> XPENDING race:italy italy_riders
1) (integer) 2
2) "1692632647899-0"
3) "1692632662819-0"
4) 1) 1) "Bob"
      2) "2"
{{< /clients-example >}}

When called in this way, the command outputs the total number of pending messages in the consumer group (two in this case), the lower and higher message ID among the pending messages, and finally a list of consumers and the number of pending messages they have.
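The summary reply is a flat array, so client code usually reshapes it before use. A minimal pure-Ruby sketch (no server required; `parse_xpending_summary` is a hypothetical helper, not a client-library API, and the `reply` literal mirrors the output shown above):

```ruby
# Hypothetical helper: reshape a raw XPENDING summary reply into a hash.
def parse_xpending_summary(reply)
  total, min_id, max_id, per_consumer = reply
  {
    total: total,
    min_id: min_id,
    max_id: max_id,
    # The last reply field is a list of [consumer, count] pairs,
    # with the count reported as a string.
    consumers: (per_consumer || []).to_h { |name, count| [name, Integer(count)] }
  }
end

reply = [2, "1692632647899-0", "1692632662819-0", [["Bob", "2"]]]
summary = parse_xpending_summary(reply)
p summary[:total]             # 2
p summary[:consumers]["Bob"]  # 2
```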
We have only Bob with two pending messages because the single message that Alice requested was acknowledged using `XACK`.

We can ask for more information by giving more arguments to `XPENDING`, because the full command signature is the following:

```
XPENDING <key> <groupname> [[IDLE <min-idle-time>] <start-id> <end-id> <count> [<consumer-name>]]
```

By providing a start and end ID (that can be just `-` and `+` as in `XRANGE`) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer name, is used if we want to limit the output to just messages pending for a given consumer, but won't use this feature in the following example.

{{< clients-example stream_tutorial xpending_plus_minus >}}
> XPENDING race:italy italy_riders - + 10
1) 1) "1692632647899-0"
   2) "Bob"
   3) (integer) 74642
   4) (integer) 1
2) 1) "1692632662819-0"
   2) "Bob"
   3) (integer) 74642
   4) (integer) 1
{{< /clients-example >}}

Now we have the details for each message: the ID, the consumer name, the *idle time* in milliseconds, which is how many milliseconds have passed since the last time the message was delivered to some consumer, and finally the number of times that a given message was delivered. We have two messages from Bob, and they are idle for 60000+ milliseconds, about a minute.

Note that nobody prevents us from checking what the first message content was by just using `XRANGE`.

{{< clients-example stream_tutorial xrange_pending >}}
> XRANGE race:italy 1692632647899-0 1692632647899-0
1) 1) "1692632647899-0"
   2) 1) "rider"
      2) "Royce"
{{< /clients-example >}}

We just have to repeat the same ID twice in the arguments. Now that we have some ideas, Alice may decide that after 1
minute of not processing messages, Bob will probably not recover quickly, and it's time to *claim* such messages and resume the processing in place of Bob. To do so, we use the `XCLAIM` command.

This command is very complex and full of options in its full form, since it is used for replication of consumer group changes, but we'll use just the arguments that we need normally. In this case it is as simple as:

```
XCLAIM <key> <group> <consumer> <min-idle-time> <ID-1> <ID-2> ... <ID-N>
```

Basically we say, for this specific key and group, I want that the message IDs specified will change ownership, and will be assigned to the specified consumer name `<consumer>`. However, we also provide a minimum idle time, so that the operation will only work if the idle time of the mentioned messages is greater than the specified idle time. This is useful because maybe two clients are retrying to claim a message at the same time:

```
Client 1: XCLAIM race:italy italy_riders Alice 60000 1692632647899-0
Client 2: XCLAIM race:italy italy_riders Lora 60000 1692632647899-0
```

However, as a side effect, claiming a message will reset its idle time and will increment its number of deliveries counter, so the second client will fail claiming it. In this way we avoid trivial re-processing of messages (even if in the general case you cannot obtain exactly-once processing).
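The min-idle-time race described above can be sketched in plain Ruby, without a server; `PendingEntry` and `try_claim` are hypothetical stand-ins for the server-side bookkeeping, not part of any client library:

```ruby
# A claim succeeds only if the entry has been idle longer than min_idle_ms.
# A successful claim resets the idle clock and bumps the delivery counter,
# so a concurrent second claim with the same threshold fails.
PendingEntry = Struct.new(:owner, :last_delivery_ms, :deliveries)

def try_claim(entry, new_owner, min_idle_ms, now_ms)
  idle = now_ms - entry.last_delivery_ms
  return false if idle < min_idle_ms    # not idle long enough: claim refused
  entry.owner = new_owner               # ownership changes...
  entry.last_delivery_ms = now_ms       # ...the idle time is reset...
  entry.deliveries += 1                 # ...and the delivery counter grows
  true
end

entry = PendingEntry.new("Bob", 0, 1)
now = 74_642                                 # ~74s later, as in the XPENDING output
puts try_claim(entry, "Alice", 60_000, now)  # true: Alice wins the race
puts try_claim(entry, "Lora", 60_000, now)   # false: the idle time was just reset
puts entry.owner                             # Alice
```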
This is the result of the command execution:

{{< clients-example stream_tutorial xclaim >}}
> XCLAIM race:italy italy_riders Alice 60000 1692632647899-0
1) 1) "1692632647899-0"
   2) 1) "rider"
      2) "Royce"
{{< /clients-example >}}

The message was successfully claimed by Alice, who can now process the message and acknowledge it, and move things forward even if the original consumer is not recovering.

It is clear from the example above that as a side effect of successfully claiming a given message, the `XCLAIM` command also returns it. However this is not mandatory. The **JUSTID** option can be used in order to return just the IDs of the messages successfully claimed. This is useful if you want to reduce the bandwidth used between the client and the server (and also the performance of the command) and you are not interested in the message because your consumer is implemented in a way that it will rescan the history of pending messages from time to time.

Claiming may also be implemented by a separate process: one that just checks the list of pending messages, and assigns idle messages to consumers that appear to be active. Active consumers can be obtained using one of the observability features of Redis streams. This is the topic of the next section.

## Automatic claiming

The `XAUTOCLAIM` command, added in Redis 6.2, implements the claiming process that we've described above. `XPENDING` and `XCLAIM` provide the basic building blocks for different types of recovery mechanisms. This command optimizes the generic process by having Redis manage it and offers a simple solution for most recovery needs. `XAUTOCLAIM` identifies idle pending messages and transfers ownership of them to a consumer.
The command's signature looks like this:

```
XAUTOCLAIM <key> <group> <consumer> <min-idle-time> <start> [COUNT count] [JUSTID]
```

So, in the example above, I could have used automatic claiming to claim a single message like this:

{{< clients-example stream_tutorial xautoclaim >}}
> XAUTOCLAIM race:italy italy_riders Alice 60000 0-0 COUNT 1
1) "0-0"
2) 1) 1) "1692632662819-0"
      2) 1)
"rider" 2) "Sam-Bodden"
{{< /clients-example >}}

Like `XCLAIM`, the command replies with an array of the claimed messages, but it also returns a stream ID that allows iterating the pending entries. The stream ID is a cursor, and I can use it in my next call to continue claiming idle pending messages:

{{< clients-example stream_tutorial xautoclaim_cursor >}}
> XAUTOCLAIM race:italy italy_riders Lora 60000 (1692632662819-0 COUNT 1
1) "1692632662819-0"
2) 1) 1) "1692632647899-0"
      2) 1) "rider"
         2) "Royce"
{{< /clients-example >}}

When `XAUTOCLAIM` returns the "0-0" stream ID as a cursor, that means that it reached the end of the consumer group pending entries list. That doesn't mean that there are no new idle pending messages, so the process continues by calling `XAUTOCLAIM` from the beginning of the stream.

## Claiming and the delivery counter

The counter that you observe in the `XPENDING` output is the number of deliveries of each message. The counter is incremented in two ways: when a message is successfully claimed via `XCLAIM` or when an `XREADGROUP` call is used in order to access the history of pending messages.

When there are failures, it is normal that messages will be delivered multiple times, but eventually they usually get processed and acknowledged. However there might be a problem processing some specific message, because it is corrupted or crafted in a way that triggers a bug in the processing code. In such a case what happens is that consumers will continuously fail to process this particular message.
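A common way to act on a runaway delivery counter is the dead-letter pattern. A minimal pure-Ruby sketch, with plain arrays standing in for Redis streams and `route` as a hypothetical helper (no server involved):

```ruby
# Once an entry's delivery counter passes a threshold you chose, park it in
# a separate "dead letter" stream instead of retrying forever.
MAX_DELIVERIES = 5

def route(entry, work_stream, dead_letter_stream)
  if entry[:deliveries] > MAX_DELIVERIES
    dead_letter_stream << entry   # park it for a human to inspect
  else
    work_stream << entry          # still worth retrying
  end
end

work, dead = [], []
route({id: "1692632647899-0", deliveries: 2}, work, dead)
route({id: "1692632662819-0", deliveries: 9}, work, dead)
puts work.length  # 1
puts dead.length  # 1
```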
Because we have the counter of the delivery attempts, we can use that counter to detect messages that for some reason are not processable. So once the delivery counter reaches a given large number that you chose, it is probably wiser to put such messages in another stream and send a notification to the system administrator. This is basically the way that Redis Streams implements the *dead letter* concept.

## Streams observability

Messaging systems that lack observability are very hard to work with. Not knowing who is consuming messages, what messages are pending, or the set of consumer groups active in a given stream makes everything opaque. For this reason, Redis Streams and consumer groups have different ways to observe what is happening. We already covered `XPENDING`, which allows us to inspect the list of messages that are under processing at a given moment, together with their idle time and number of deliveries.

However we may want to do more than that, and the `XINFO` command is an observability interface that can be used with sub-commands in order to get information about streams or consumer groups.

This command uses subcommands in order to show different information about the status of the stream and its consumer groups. For instance **XINFO STREAM <key>** reports information about the stream itself.

{{< clients-example stream_tutorial xinfo >}}
> XINFO STREAM race:italy
 1) "length"
 2) (integer) 5
 3) "radix-tree-keys"
 4) (integer) 1
 5) "radix-tree-nodes"
 6) (integer) 2
 7) "last-generated-id"
 8) "1692632678249-0"
 9) "groups"
10) (integer) 1
11) "first-entry"
12) 1) "1692632639151-0"
    2) 1) "rider"
       2) "Castilla"
13) "last-entry"
14) 1) "1692632678249-0"
    2) 1) "rider"
       2) "Norem"
{{< /clients-example >}}

The output shows information about how the stream is encoded internally, and also shows the first and last message in the stream. Another piece of information available is the
number of consumer groups associated with this stream. We can dig further asking for more information about the consumer groups.

{{< clients-example stream_tutorial xinfo_groups >}}
> XINFO GROUPS race:italy
1) 1) "name"
   2) "italy_riders"
   3) "consumers"
   4) (integer) 3
   5) "pending"
   6) (integer) 2
   7) "last-delivered-id"
   8) "1692632662819-0"
{{< /clients-example >}}

As you can see in this and in the previous output, the `XINFO` command outputs a sequence of field-value items. Because it is an observability command this allows the human user to immediately understand what information is reported, and allows the command to report more information in the future by adding more fields without breaking compatibility with older clients. Other commands that must be more bandwidth efficient, like `XPENDING`, just report the information without the field names.

The output of the example above, where the **GROUPS** subcommand is used, should be clear observing the field names.

We can check in more detail the state of a specific consumer group by checking the consumers that are registered in the group.

{{< clients-example stream_tutorial xinfo_consumers >}}
> XINFO CONSUMERS race:italy italy_riders
1) 1) "name"
   2) "Alice"
   3) "pending"
   4) (integer) 1
   5) "idle"
   6) (integer) 177546
2) 1) "name"
   2) "Bob"
   3) "pending"
   4) (integer) 0
   5) "idle"
   6) (integer) 424686
3) 1) "name"
   2) "Lora"
   3) "pending"
   4) (integer) 1
   5) "idle"
   6) (integer) 72241
{{< /clients-example >}}

In case you do not remember the syntax of the command, just ask the command itself for help:

```
> XINFO HELP
1) XINFO <subcommand> [<arg> [value] [opt] ...]. Subcommands are:
2) CONSUMERS <key> <groupname>
3)     Show consumers of <groupname>.
4) GROUPS <key>
5)     Show the stream consumer groups.
6) STREAM <key> [FULL [COUNT <count>]]
7)     Show information about the stream.
8) HELP
9)     Prints this help.
```

## Differences with Kafka (TM) partitions

Consumer groups in Redis streams may resemble in some way Kafka (TM) partitioning-based consumer groups, however note that Redis streams are, in practical terms, very different. The partitions are only *logical* and the messages are just put into a single Redis key, so the way the different clients are served is based on who is ready to process new messages, and not from which partition clients are reading.

For instance, if the consumer C3 at some point fails permanently, Redis will continue to serve C1 and C2 all the new messages arriving, as if now there are only two *logical* partitions.

Similarly, if a given consumer is much faster at processing messages than the other consumers, this consumer will receive proportionally more messages in the same unit of time. This is possible since Redis tracks all the unacknowledged messages explicitly, and remembers who received which message and the ID of the first message never delivered to any consumer.

However, this also means that in Redis if you really want to partition messages in the same stream into multiple Redis instances, you have to use multiple keys and some sharding system such as Redis Cluster or some other application-specific sharding system. A single Redis stream is not automatically partitioned to multiple instances.

We could say that schematically the following is true:

* If you use 1 stream -> 1 consumer, you are processing messages in order.
* If you use N streams with N consumers, so that only a given consumer hits a subset
of the N streams, you can scale the above model of 1 stream -> 1 consumer.
* If you use 1 stream -> N consumers, you are load balancing to N consumers, however in that case, messages about the same logical item may be consumed out of order, because a given consumer may process message 3 faster than another consumer is processing message 4.

So basically Kafka partitions are more similar to using N different Redis keys, while Redis consumer groups are a server-side load balancing system of messages from a given stream to N different consumers.

## Capped Streams

Many applications do not want to collect data into a stream forever. Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory and not as fast but suited to store the history for, potentially, decades to come. Redis streams have some support for this. One is the **MAXLEN** option of the `XADD` command. This option is very simple to use:

{{< clients-example stream_tutorial maxlen >}}
> XADD race:italy MAXLEN 2 * rider Jones
"1692633189161-0"
> XADD race:italy MAXLEN 2 * rider Wood
"1692633198206-0"
> XADD race:italy MAXLEN 2 * rider Henshaw
"1692633208557-0"
> XLEN race:italy
(integer) 2
> XRANGE race:italy - +
1) 1) "1692633198206-0"
   2) 1) "rider"
      2) "Wood"
2) 1) "1692633208557-0"
   2) 1) "rider"
      2) "Henshaw"
{{< /clients-example >}}

Using **MAXLEN** the old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size.
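The eviction behavior can be sketched in plain Ruby, with an array standing in for the stream (`capped_add` is a hypothetical helper, not a client API, and no server is involved):

```ruby
# MAXLEN-style capping: appending to a capped log evicts the oldest
# entries once the limit is reached.
def capped_add(stream, entry, maxlen)
  stream << entry
  stream.shift while stream.length > maxlen   # evict from the oldest end
  stream
end

race = []
%w[Jones Wood Henshaw].each { |rider| capped_add(race, rider, 2) }
puts race.inspect  # ["Wood", "Henshaw"] — Jones was evicted, as in the XLEN example
```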
There is currently no option to tell the stream to just retain items that are not older than a given period, because such a command, in order to run consistently, would potentially block for a long time in order to evict items. Imagine for example what happens if there is an insertion spike, then a long pause, and another insertion, all with the same maximum time. The stream would block to evict the data that became too old during the pause. So it is up to the user to do some planning and understand what is the maximum stream length desired. Moreover, while the length of the stream is proportional to the memory used, trimming by time is less simple to control and anticipate: it depends on the insertion rate which often changes over time (and when it does not change, then to just trim by size is trivial).

However trimming with **MAXLEN** can be expensive: streams are represented by macro nodes into a radix tree, in order to be very memory efficient. Altering the single macro node, consisting of a few tens of elements, is not optimal. So it's possible to use the command in the following special form:

```
XADD race:italy MAXLEN ~ 1000 * ... entry fields here ...
```

The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is
usually what you want. You'll note here that the client libraries have various implementations of this. For example, the Python client defaults to approximate trimming and has to be explicitly set to an exact length.

There is also the `XTRIM` command, which performs something very similar to what the **MAXLEN** option does above, except that it can be run by itself:

{{< clients-example stream_tutorial xtrim >}}
> XTRIM race:italy MAXLEN 10
(integer) 0
{{< /clients-example >}}

Or, as for the `XADD` option:

{{< clients-example stream_tutorial xtrim2 >}}
> XTRIM mystream MAXLEN ~ 10
(integer) 0
{{< /clients-example >}}

However, `XTRIM` is designed to accept different trimming strategies. Another trimming strategy is **MINID**, that evicts entries with IDs lower than the one specified.

As `XTRIM` is an explicit command, the user is expected to know about the possible shortcomings of different trimming strategies.

Another useful eviction strategy that may be added to `XTRIM` in the future, is to remove by a range of IDs to ease use of `XRANGE` and `XTRIM` to move data from Redis to other storage systems if needed.

## Special IDs in the streams API

You may have noticed that there are several special IDs that can be used in the Redis API. Here is a short recap, so that they can make more sense in the future.

The first two special IDs are `-` and `+`, and are used in range queries with the `XRANGE` command. Those two IDs respectively mean the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a lot cleaner to write `-` and `+` instead of those numbers.
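To see why `-` and `+` are convenient shorthands, here is a pure-Ruby sketch of how stream IDs order: an ID is `<ms>-<seq>`, compared first by the millisecond part and then by the sequence part (`parse_id` is a hypothetical helper, not a client API):

```ruby
# The extremes that `-` and `+` stand for in range queries.
MIN_ID = "0-1"
MAX_ID = "18446744073709551615-18446744073709551615"

def parse_id(id)
  id = MIN_ID if id == "-"
  id = MAX_ID if id == "+"
  id.split("-").map { |part| Integer(part) }   # => [ms, seq]
end

# Sorting by the parsed pair reproduces stream ordering.
ids = ["1692632662819-0", "-", "1692632647899-0", "+"]
sorted = ids.sort_by { |id| parse_id(id) }
puts sorted.inspect  # ["-", "1692632647899-0", "1692632662819-0", "+"]
```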
Then there are APIs where we want to say, the ID of the item with the greatest ID inside the stream. This is what `$` means. So for instance if I want only new entries with `XREADGROUP` I use this ID to signify I already have all the existing entries, but not the new ones that will be inserted in the future. Similarly when I create or set the ID of a consumer group, I can set the last delivered item to `$` in order to just deliver new entries to the consumers in the group.

As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greatest ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol with multiple meanings.

Another special ID is `>`, that has a special meaning only in the context of consumer groups and only when the `XREADGROUP` command is used. This special ID means that we want only entries that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group.

Finally the special ID `*`, that can be used only with the `XADD` command, means to auto select an ID for us for the new entry.

So we have `-`, `+`, `$`, `>` and `*`, and all have a different meaning, and most of the time, can be used in different contexts.

## Persistence, replication and message safety

A Stream, like
any other Redis data structure, is asynchronously replicated to replicas and persisted into AOF and RDB files. However what may not be so obvious is that also the consumer groups full state is propagated to AOF, RDB and replicas, so if a message is pending in the master, also the replica will have the same information. Similarly, after a restart, the AOF will restore the consumer groups' state.

However note that Redis streams and consumer groups are persisted and replicated using the Redis default replication, so:

* AOF must be used with a strong fsync policy if persistence of messages is important in your application.
* By default the asynchronous replication will not guarantee that `XADD` commands or consumer group state changes are replicated: after a failover something can be missing depending on the ability of replicas to receive the data from the master.
* The `WAIT` command may be used in order to force the propagation of the changes to a set of replicas. However note that while this makes it very unlikely that data is lost, the Redis failover process as operated by Sentinel or Redis Cluster performs only a *best effort* check to failover to the replica which is the most updated, and under certain specific failure conditions may promote a replica that lacks some data.

So when designing an application using Redis streams and consumer groups, make sure to understand the semantical properties your application should have during failures, and configure things accordingly, evaluating whether it is safe enough for your use case.

## Removing single items from a stream

Streams also have a special command for removing items from the middle of a stream, just by ID.
Normally for an append only data structure this may look like an odd feature, but it is actually useful for applications involving, for instance, privacy regulations. The command is called `XDEL` and receives the name of the stream followed by the IDs to delete:

{{< clients-example stream_tutorial xdel >}}
> XRANGE race:italy - + COUNT 2
1) 1) "1692633198206-0"
   2) 1) "rider"
      2) "Wood"
2) 1) "1692633208557-0"
   2) 1) "rider"
      2) "Henshaw"
> XDEL race:italy 1692633208557-0
(integer) 1
> XRANGE race:italy - + COUNT 2
1) 1) "1692633198206-0"
   2) 1) "rider"
      2) "Wood"
{{< /clients-example >}}

However in the current implementation, memory is not really reclaimed until a macro node is completely empty, so you should not abuse this feature.

## Zero length streams

A difference between streams and other Redis data structures is that when the other data structures no longer have any elements, as a side effect of calling commands that remove elements, the key itself will be removed. So for instance, a sorted set will be completely removed when a call to `ZREM` removes the last element in the sorted set. Streams, on the other hand, are allowed to stay at zero elements, both as a result of using a **MAXLEN** option with a count of zero (`XADD` and `XTRIM` commands), or because `XDEL` was called.

The reason why such an asymmetry exists is because Streams may have associated consumer groups, and we do not want to lose the state that the consumer groups defined just because there are no longer any items in the stream. Currently the stream is not deleted even when
it has no associated consumer groups.

## Total latency of consuming a message

Non blocking stream commands like `XRANGE` and `XREAD` or `XREADGROUP` without the BLOCK option are served synchronously like any other Redis command, so to discuss latency of such commands is meaningless: it is more interesting to check the time complexity of the commands in the Redis documentation. It should be enough to say that stream commands are at least as fast as sorted set commands when extracting ranges, and that `XADD` is very fast and can easily insert from half a million to one million items per second in an average machine if pipelining is used.

However latency becomes an interesting parameter if we want to understand the delay of processing a message, in the context of blocking consumers in a consumer group, from the moment the message is produced via `XADD`, to the moment the message is obtained by the consumer because `XREADGROUP` returned with the message.

## How serving blocked consumers works

Before providing the results of performed tests, it is interesting to understand what model Redis uses in order to route stream messages (and in general actually how any blocking operation waiting for data is managed).

* The blocked client is referenced in a hash table that maps keys for which there is at least one blocking consumer, to a list of consumers that are waiting for such key. This way, given a key that received data, we can resolve all the clients that are waiting for such data.
* When a write happens, in this case when the `XADD` command is called, it calls the `signalKeyAsReady()` function.
This function will put the key into a list of keys that need to be processed, because such keys may have new data for blocked consumers. Note that such *ready keys* will be processed later, so in the course of the same event loop cycle, it is possible that the key will receive other writes.
* Finally, before returning into the event loop, the *ready keys* are finally processed. For each key the list of clients waiting for data is scanned, and if applicable, such clients will receive the new data that arrived. In the case of streams the data is the messages in the applicable range requested by the consumer.

As you can see, basically, before returning to the event loop both the client calling `XADD` and the clients blocked to consume messages will have their reply in the output buffers, so the caller of `XADD` should receive the reply from Redis at about the same time the consumers will receive the new messages.

This model is *push-based*, since adding data to the consumers' buffers is performed directly by the action of calling `XADD`, so the latency tends to be quite predictable.

## Latency tests results

In order to check these latency characteristics a test was performed using multiple instances of Ruby programs pushing messages having as an additional field the computer millisecond time, and Ruby programs reading the messages from the consumer group and processing them. The message processing step consisted of comparing the current computer time with the message timestamp, in order to understand the total latency.

Results obtained:

```
Processed between 0 and 1 ms -> 74.11%
Processed between 1 and
Results obtained:

```
Processed between 0 and 1 ms -> 74.11%
Processed between 1 and 2 ms -> 25.80%
Processed between 2 and 3 ms -> 0.06%
Processed between 3 and 4 ms -> 0.01%
Processed between 4 and 5 ms -> 0.02%
```

So 99.9% of requests have a latency <= 2 milliseconds, with the outliers that remain still very close to the average.

Adding a few million unacknowledged messages to the stream does not change the gist of the benchmark, with most queries still processed with very short latency.

A few remarks:

* Here we processed up to 10k messages per iteration, which means that the `COUNT` parameter of `XREADGROUP` was set to 10000. This adds a lot of latency, but is needed in order to allow the slow consumers to be able to keep up with the message flow. So you can expect a real world latency that is a lot smaller.
* The system used for this benchmark is very slow compared to today's standards.

## Learn more

* The [Redis Streams Tutorial](/docs/data-types/streams-tutorial) explains Redis streams with many examples.
* [Redis Streams Explained](https://www.youtube.com/watch?v=Z8qcpXyMAiA) is an entertaining introduction to streams in Redis.
* [Redis University's RU202](https://university.redis.com/courses/ru202/) is a free, online course dedicated to Redis Streams.
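As a closing illustration, the measurement technique used in the benchmark above — embedding the producer's millisecond clock in each message and subtracting it on the consumer side — can be sketched in a few lines of Python. This is a hedged, in-process stand-in (the names `xadd` and `consume_all` are invented here to mirror the command shapes), not a Redis client:

```python
import time

# A plain list stands in for a Redis stream: (id, fields) entries.
stream = []

def xadd(fields):
    # Entry IDs are "<ms>-<seq>", loosely mirroring stream entry IDs.
    entry_id = f"{int(time.time() * 1000)}-{len(stream)}"
    stream.append((entry_id, fields))
    return entry_id

def consume_all():
    """Read every pending entry and compute per-message latency in ms,
    the same way the benchmark compares the current time against the
    embedded `ms` field."""
    latencies = []
    for _, fields in stream:
        latencies.append(int(time.time() * 1000) - fields["ms"])
    return latencies

# The producer embeds the current millisecond time, as in the benchmark.
for _ in range(3):
    xadd({"ms": int(time.time() * 1000)})

lat = consume_all()
print(lat)
```

In a real benchmark the producer and consumer are separate processes, so this measurement also captures network and scheduling delay, not just server-side latency.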
https://github.com/redis/redis-doc/blob/master//docs/data-types/streams.md
master
redis
[ -0.0016277293907478452, -0.09415843337774277, -0.04698331654071808, 0.08808381855487823, 0.00778320524841547, -0.12954573333263397, 0.0037717244122177362, -0.01648498699069023, 0.02271948754787445, -0.001498704426921904, 0.0038815070874989033, 0.021183768287301064, 0.02514570765197277, -0....
0.011037
---
title: "Redis lists"
linkTitle: "Lists"
weight: 20
description: >
    Introduction to Redis lists
---

Redis lists are linked lists of string values. Redis lists are frequently used to:

* Implement stacks and queues.
* Build queue management for background worker systems.

## Basic commands

* `LPUSH` adds a new element to the head of a list; `RPUSH` adds to the tail.
* `LPOP` removes and returns an element from the head of a list; `RPOP` does the same but from the tail of a list.
* `LLEN` returns the length of a list.
* `LMOVE` atomically moves elements from one list to another.
* `LTRIM` reduces a list to the specified range of elements.

### Blocking commands

Lists support several blocking commands. For example:

* `BLPOP` removes and returns an element from the head of a list. If the list is empty, the command blocks until an element becomes available or until the specified timeout is reached.
* `BLMOVE` atomically moves elements from a source list to a target list. If the source list is empty, the command blocks until a new element becomes available.

See the [complete series of list commands](https://redis.io/commands/?group=list).
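The semantics of these commands can be sketched with an in-memory model. This is not a Redis client — just an illustrative Python `collections.deque` with invented variable names, mapping each command to its deque equivalent:

```python
from collections import deque

# An in-memory stand-in for a Redis list key.
bikes = deque()

# LPUSH adds to the head; RPUSH adds to the tail.
bikes.appendleft("bike:2")   # like LPUSH
bikes.append("bike:3")       # like RPUSH
bikes.appendleft("bike:1")   # like LPUSH

# LLEN returns the length.
assert len(bikes) == 3

# LPOP removes from the head; RPOP removes from the tail.
assert bikes.popleft() == "bike:1"   # like LPOP
assert bikes.pop() == "bike:3"       # like RPOP

# LMOVE LEFT LEFT: pop from the source's head, push to the target's head.
finished = deque()
finished.appendleft(bikes.popleft())
print(list(finished), list(bikes))
```

The key difference in practice is that the Redis versions operate atomically on the server, so multiple clients can share the same list safely.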
## Examples

* Treat a list like a queue (first in, first out):
{{< clients-example list_tutorial queue >}}
> LPUSH bikes:repairs bike:1
(integer) 1
> LPUSH bikes:repairs bike:2
(integer) 2
> RPOP bikes:repairs
"bike:1"
> RPOP bikes:repairs
"bike:2"
{{< /clients-example >}}

* Treat a list like a stack (first in, last out):
{{< clients-example list_tutorial stack >}}
> LPUSH bikes:repairs bike:1
(integer) 1
> LPUSH bikes:repairs bike:2
(integer) 2
> LPOP bikes:repairs
"bike:2"
> LPOP bikes:repairs
"bike:1"
{{< /clients-example >}}

* Check the length of a list:
{{< clients-example list_tutorial llen >}}
> LLEN bikes:repairs
(integer) 0
{{< /clients-example >}}

* Atomically pop an element from one list and push to another:
{{< clients-example list_tutorial lmove_lrange >}}
> LPUSH bikes:repairs bike:1
(integer) 1
> LPUSH bikes:repairs bike:2
(integer) 2
> LMOVE bikes:repairs bikes:finished LEFT LEFT
"bike:2"
> LRANGE bikes:repairs 0 -1
1) "bike:1"
> LRANGE bikes:finished 0 -1
1) "bike:2"
{{< /clients-example >}}

* To limit the length of a list you can call `LTRIM`:
{{< clients-example list_tutorial ltrim.1 >}}
> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5
(integer) 5
> LTRIM bikes:repairs 0 2
OK
> LRANGE bikes:repairs 0 -1
1) "bike:1"
2) "bike:2"
3) "bike:3"
{{< /clients-example >}}

### What are Lists?

To explain the List data type it's better to start with a little bit of theory, as the term *List* is often used in an improper way by information technology folks. For instance "Python Lists" are not what the name may suggest (Linked Lists), but rather Arrays (the same data type is called Array in Ruby, actually).

From a very general point of view, a List is just a sequence of ordered elements: 10,20,1,2,3 is a list. But the properties of a List implemented using an Array are very different from the properties of a List implemented using a *Linked List*. Redis lists are implemented via Linked Lists.
This means that even if you have millions of elements inside a list, the operation of adding a new element at the head or at the tail of the list is performed *in constant time*.
The speed of adding a new element with the `LPUSH` command to the head of a list with ten elements is the same as adding an element to the head of a list with 10 million elements.

What's the downside? Accessing an element *by index* is very fast in lists implemented with an Array (constant-time indexed access) and not so fast in lists implemented by linked lists (where the operation requires an amount of work proportional to the index of the accessed element).

Redis Lists are implemented with linked lists because for a database system it is crucial to be able to add elements to a very long list in a very fast way. Another strong advantage, as you'll see in a moment, is that Redis Lists can be capped at a constant length in constant time.

When fast access to the middle of a large collection of elements is important, there is a different data structure that can be used, called sorted sets. Sorted sets are covered in the [Sorted sets](/docs/data-types/sorted-sets) tutorial page.

### First steps with Redis Lists

The `LPUSH` command adds a new element into a list, on the left (at the head), while the `RPUSH` command adds a new element into a list, on the right (at the tail). Finally, the `LRANGE` command extracts ranges of elements from lists:

{{< clients-example list_tutorial lpush_rpush >}}
> RPUSH bikes:repairs bike:1
(integer) 1
> RPUSH bikes:repairs bike:2
(integer) 2
> LPUSH bikes:repairs bike:important_bike
(integer) 3
> LRANGE bikes:repairs 0 -1
1) "bike:important_bike"
2) "bike:1"
3) "bike:2"
{{< /clients-example >}}

Note that `LRANGE` takes two indexes, the first and the last element of the range to return. Both indexes can be negative, telling Redis to start counting from the end: so -1 is the last element, -2 is the penultimate element of the list, and so forth.

As you can see, `RPUSH` appended the elements on the right of the list, while the final `LPUSH` appended the element on the left.
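The way `LRANGE` resolves its two indexes can be sketched in Python. This is an illustrative model (the function name `lrange` is invented here), not how the server implements it; the notable differences from Python slicing are that negative indexes count from the end and the stop index is *inclusive*:

```python
def lrange(lst, start, stop):
    """Rough model of LRANGE index handling: -1 is the last element,
    -2 the penultimate, and the stop index is inclusive."""
    n = len(lst)
    if start < 0:
        start = max(n + start, 0)  # clamp like Redis does
    if stop < 0:
        stop = n + stop
    return lst[start:stop + 1]

repairs = ["bike:important_bike", "bike:1", "bike:2"]
print(lrange(repairs, 0, -1))   # the whole list
print(lrange(repairs, -2, -1))  # the last two elements
```

So `LRANGE key 0 -1` is the idiomatic way to read an entire list, whatever its length.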
Both commands are *variadic commands*, meaning that you are free to push multiple elements into a list in a single call:

{{< clients-example list_tutorial variadic >}}
> RPUSH bikes:repairs bike:1 bike:2 bike:3
(integer) 3
> LPUSH bikes:repairs bike:important_bike bike:very_important_bike
(integer) 5
> LRANGE bikes:repairs 0 -1
1) "bike:very_important_bike"
2) "bike:important_bike"
3) "bike:1"
4) "bike:2"
5) "bike:3"
{{< /clients-example >}}

An important operation defined on Redis lists is the ability to *pop elements*. Popping elements is the operation of both retrieving the element from the list and eliminating it from the list, at the same time. You can pop elements from the left and the right, similarly to how you can push elements to both sides of the list.

We'll add three elements and pop three elements, so at the end of this sequence of commands the list is empty and there are no more elements to pop:

{{< clients-example list_tutorial lpop_rpop >}}
> RPUSH bikes:repairs bike:1 bike:2 bike:3
(integer) 3
> RPOP bikes:repairs
"bike:3"
> LPOP bikes:repairs
"bike:1"
> RPOP bikes:repairs
"bike:2"
> RPOP bikes:repairs
(nil)
{{< /clients-example >}}

Redis returned a NULL value to signal that there are no elements in the list.

### Common use cases for lists

Lists are useful for a number of tasks; two very representative use cases are the following:

* Remember the latest updates posted by users into a social network.
* Communication between processes, using a consumer-producer pattern where the producer pushes items into a list, and a consumer (usually a *worker*) consumes those items and executes actions. Redis has special list commands to make this use case both more reliable and efficient.

For example, both the popular Ruby libraries [resque](https://github.com/resque/resque) and [sidekiq](https://github.com/mperham/sidekiq) use
Redis lists under the hood in order to implement background jobs. The popular Twitter social network [takes the latest tweets](http://www.infoq.com/presentations/Real-Time-Delivery-Twitter) posted by users into Redis lists.

To describe a common use case step by step, imagine your home page shows the latest photos published in a photo sharing social network and you want to speed up access:

* Every time a user posts a new photo, we add its ID into a list with `LPUSH`.
* When users visit the home page, we use `LRANGE 0 9` in order to get the latest 10 posted items.

### Capped lists

In many use cases we just want to use lists to store the *latest items*, whatever they are: social network updates, logs, or anything else.

Redis allows us to use lists as a capped collection, only remembering the latest N items and discarding all the oldest items, using the `LTRIM` command.

The `LTRIM` command is similar to `LRANGE`, but **instead of displaying the specified range of elements** it sets this range as the new list value. All the elements outside the given range are removed.

For example, if you're adding bikes to the end of a list of repairs, but only want to worry about the 3 that have been on the list the longest:

{{< clients-example list_tutorial ltrim >}}
> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5
(integer) 5
> LTRIM bikes:repairs 0 2
OK
> LRANGE bikes:repairs 0 -1
1) "bike:1"
2) "bike:2"
3) "bike:3"
{{< /clients-example >}}

The above `LTRIM` command tells Redis to keep just the list elements from index 0 to 2; everything else will be discarded.
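One way to picture the push-plus-trim capped-list pattern is with a bounded deque. This is a hedged in-process sketch, not a Redis client: Python's `deque(maxlen=N)` discards from the opposite end automatically, which mirrors the effect of `LPUSH key item` followed by `LTRIM key 0 N-1`:

```python
from collections import deque

# Keep only the 3 most recent items, newest first.
latest = deque(maxlen=3)

for item in ["a", "b", "c", "d", "e"]:
    latest.appendleft(item)   # like LPUSH: newest goes to the head

# Only the 3 newest remain, newest first — as LRANGE key 0 -1 would show.
print(list(latest))  # ['e', 'd', 'c']
```

In Redis the cap is enforced explicitly by the `LTRIM` call after each push, but the observable result is the same: old items fall off the far end.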
This allows for a very simple but useful pattern: doing a List push operation plus a List trim operation together, to add a new element and discard elements exceeding a limit. Using `LTRIM` with negative indexes can then be used to keep only the 3 most recently added:

{{< clients-example list_tutorial ltrim_end_of_list >}}
> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5
(integer) 5
> LTRIM bikes:repairs -3 -1
OK
> LRANGE bikes:repairs 0 -1
1) "bike:3"
2) "bike:4"
3) "bike:5"
{{< /clients-example >}}

The above combination adds new elements and keeps only the 3 newest elements in the list. With `LRANGE` you can access the top items without any need to remember very old data.

Note: while `LRANGE` is technically an O(N) command, accessing small ranges towards the head or the tail of the list is a constant time operation.

## Blocking operations on lists

Lists have a special feature that makes them suitable to implement queues, and in general as a building block for inter-process communication systems: blocking operations.

Imagine you want to push items into a list with one process, and use a different process in order to actually do some kind of work with those items. This is the usual producer / consumer setup, and can be implemented in the following simple way:

* To push items into the list, producers call `LPUSH`.
* To extract / process items from the list, consumers call `RPOP`.
However, it is possible that sometimes the list is empty and there is nothing to process, so `RPOP` just returns NULL. In this case a consumer is forced to wait some time and retry with `RPOP`. This is called *polling*, and is not a good idea in this context because it has several drawbacks:

1. It forces Redis and clients to process useless commands (all the requests when the list is empty will get no actual work done, they'll just return NULL).
2. It adds a delay to the processing of items, since after a worker receives a NULL, it waits some time. To make the delay smaller, we could wait less between calls to `RPOP`, with the effect of amplifying problem number 1, i.e. more useless calls to Redis.

So Redis implements commands called `BRPOP` and `BLPOP` which are versions of `RPOP` and `LPOP` able to block if the list is empty: they'll return to the caller only when a new element is added to the list, or when a user-specified timeout is reached.

This is an example of a `BRPOP` call we could use in the worker:

{{< clients-example list_tutorial brpop >}}
> RPUSH bikes:repairs bike:1 bike:2
(integer) 2
> BRPOP bikes:repairs 1
1) "bikes:repairs"
2) "bike:2"
> BRPOP bikes:repairs 1
1) "bikes:repairs"
2) "bike:1"
> BRPOP bikes:repairs 1
(nil)
(2.01s)
{{< /clients-example >}}

It means: "wait for elements in the list `bikes:repairs`, but return if after 1 second no element is available".

Note that you can use 0 as a timeout to wait for elements forever, and you can also specify multiple lists and not just one, in order to wait on multiple lists at the same time, and get notified when the first list receives an element.

A few things to note about `BRPOP`:

1. Clients are served in an ordered way: the first client that blocked waiting for a list is served first when an element is pushed by some other client, and so forth.
2. The return value is different compared to `RPOP`: it is a two-element array, since it also includes the name of the key, because `BRPOP` and `BLPOP` are able to block waiting for elements from multiple lists.
3. If the timeout is reached, NULL is returned.

There are more things you should know about lists and blocking ops. We suggest that you read more on the following:

* It is possible to build safer queues or rotating queues using `LMOVE`.
* There is also a blocking variant of the command, called `BLMOVE`.

## Automatic creation and removal of keys

So far in our examples we never had to create empty lists before pushing elements, or remove empty lists when they no longer have elements inside. It is Redis' responsibility to delete keys when lists are left empty, or to create an empty list if the key does not exist and we are trying to add elements to it, for example, with `LPUSH`.

This is not specific to lists, it applies to all the Redis data types composed of multiple elements -- Streams, Sets, Sorted Sets and Hashes.

Basically we can summarize the behavior with three rules:
1. When we add an element to an aggregate data type, if the target key does not exist, an empty aggregate data type is created before adding the element.
2. When we remove elements from an aggregate data type, if the value remains empty, the key is automatically destroyed. The Stream data type is the only exception to this rule.
3. Calling a read-only command such as `LLEN` (which returns the length of the list), or a write command removing elements, with an empty key always produces the same result as if the key held an empty aggregate type of the type the command expects to find.

Examples of rule 1:

{{< clients-example list_tutorial rule_1 >}}
> DEL new_bikes
(integer) 0
> LPUSH new_bikes bike:1 bike:2 bike:3
(integer) 3
{{< /clients-example >}}

However we can't perform operations against the wrong type if the key exists:

{{< clients-example list_tutorial rule_1.1 >}}
> SET new_bikes bike:1
OK
> TYPE new_bikes
string
> LPUSH new_bikes bike:2 bike:3
(error) WRONGTYPE Operation against a key holding the wrong kind of value
{{< /clients-example >}}

Example of rule 2:

{{< clients-example list_tutorial rule_2 >}}
> RPUSH bikes:repairs bike:1 bike:2 bike:3
(integer) 3
> EXISTS bikes:repairs
(integer) 1
> LPOP bikes:repairs
"bike:3"
> LPOP bikes:repairs
"bike:2"
> LPOP bikes:repairs
"bike:1"
> EXISTS bikes:repairs
(integer) 0
{{< /clients-example >}}

The key no longer exists after all the elements are popped.

Example of rule 3:

{{< clients-example list_tutorial rule_3 >}}
> DEL bikes:repairs
(integer) 0
> LLEN bikes:repairs
(integer) 0
> LPOP bikes:repairs
(nil)
{{< /clients-example >}}

## Limits

The max length of a Redis list is 2^32 - 1 (4,294,967,295) elements.

## Performance

List operations that access its head or tail are O(1), which means they're highly efficient. However, commands that manipulate elements within a list are usually O(n). Examples of these include `LINDEX`, `LINSERT`, and `LSET`. Exercise caution when running these commands, mainly when operating on large lists.
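The three creation/removal rules can be sketched as a toy keyspace model in Python. This is a hedged illustration (the `db` dict and the function names are invented here, and real Redis internals are nothing like this), but it reproduces the observable behavior:

```python
# A dict stands in for the keyspace; lists for aggregate values.
db = {}

def lpush(key, *values):
    db.setdefault(key, [])            # rule 1: create the key on demand
    for v in values:
        db[key].insert(0, v)
    return len(db[key])

def lpop(key):
    if key not in db:                 # rule 3: a missing key acts empty
        return None
    value = db[key].pop(0)
    if not db[key]:                   # rule 2: destroy the key when emptied
        del db[key]
    return value

def llen(key):
    return len(db.get(key, []))       # rule 3 again

assert lpush("bikes:repairs", "bike:1", "bike:2") == 2
assert lpop("bikes:repairs") == "bike:2"
assert lpop("bikes:repairs") == "bike:1"
assert "bikes:repairs" not in db      # the key vanished once empty
assert llen("bikes:repairs") == 0 and lpop("bikes:repairs") is None
```

The model omits rule 1's type check (`WRONGTYPE`), which the real server enforces whenever a key already holds a value of a different type.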
## Alternatives

Consider [Redis streams](/docs/data-types/streams) as an alternative to lists when you need to store and process an indeterminate series of events.

## Learn more

* [Redis Lists Explained](https://www.youtube.com/watch?v=PB5SeOkkxQc) is a short, comprehensive video explainer on Redis lists.
* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis lists in detail.
Redis bitfields let you set, increment, and get integer values of arbitrary bit length. For example, you can operate on anything from unsigned 1-bit integers to signed 63-bit integers.

These values are stored using binary-encoded Redis strings. Bitfields support atomic read, write and increment operations, making them a good choice for managing counters and similar numerical values.

## Basic commands

* `BITFIELD` atomically sets, increments and reads one or more values.
* `BITFIELD_RO` is a read-only variant of `BITFIELD`.

## Example

Suppose you want to maintain two metrics for various bicycles: the current price and the number of owners over time. You can represent these counters with a 32-bit-wide bitfield for each bike:

* Bike 1 initially costs 1,000 (counter at offset 0) and has never had an owner. After being sold, it's now considered used and the price instantly drops to reflect its new condition, and it now has an owner (offset 1). After quite some time, the bike becomes a classic. The original owner sells it for a profit, so the price goes up and the number of owners does as well. Finally, you can look at the bike's current price and number of owners.

{{< clients-example bitfield_tutorial bf >}}
> BITFIELD bike:1:stats SET u32 #0 1000
1) (integer) 0
> BITFIELD bike:1:stats INCRBY u32 #0 -50 INCRBY u32 #1 1
1) (integer) 950
2) (integer) 1
> BITFIELD bike:1:stats INCRBY u32 #0 500 INCRBY u32 #1 1
1) (integer) 1450
2) (integer) 2
> BITFIELD bike:1:stats GET u32 #0 GET u32 #1
1) (integer) 1450
2) (integer) 2
{{< /clients-example >}}

## Performance

`BITFIELD` is O(n), where _n_ is the number of counters accessed.
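The idea behind bitfields — several fixed-width counters packed into one value — can be sketched with plain integer bit operations. This is an illustrative model (the function names are invented, and the real command works on a binary string with more addressing options and overflow policies), not the server implementation:

```python
# Two unsigned 32-bit fields packed into a single integer.
# The field index mirrors BITFIELD's `#0` / `#1` offset syntax
# (offsets in multiples of the field width).
WIDTH = 32
MASK = (1 << WIDTH) - 1

def get_field(packed, idx):
    return (packed >> (idx * WIDTH)) & MASK

def set_field(packed, idx, value):
    shift = idx * WIDTH
    return (packed & ~(MASK << shift)) | ((value & MASK) << shift)

def incr_field(packed, idx, delta):
    # Wrap modulo 2^32, matching one of BITFIELD's overflow behaviors (WRAP).
    return set_field(packed, idx, (get_field(packed, idx) + delta) & MASK)

stats = 0
stats = set_field(stats, 0, 1000)   # price
stats = incr_field(stats, 0, -50)   # sold used: the price drops
stats = incr_field(stats, 1, 1)     # first owner
print(get_field(stats, 0), get_field(stats, 1))  # 950 1
```

Packing counters this way is what makes bitfields so compact: the two u32 counters above occupy 8 bytes of string payload instead of two separate keys.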
Redis is a data structure server. At its core, Redis provides a collection of native data types that help you solve a wide variety of problems, from [caching](/docs/manual/client-side-caching/) to [queuing](/docs/data-types/lists/) to [event processing](/docs/data-types/streams/). Below is a short description of each data type, with links to broader overviews and command references. If you'd like to try a comprehensive tutorial for each data structure, see their overview pages below.

## Core

### Strings

[Redis strings](/docs/data-types/strings) are the most basic Redis data type, representing a sequence of bytes. For more information, see:

* [Overview of Redis strings](/docs/data-types/strings/)
* [Redis string command reference](/commands/?group=string)

### Lists

[Redis lists](/docs/data-types/lists) are lists of strings sorted by insertion order. For more information, see:

* [Overview of Redis lists](/docs/data-types/lists/)
* [Redis list command reference](/commands/?group=list)

### Sets

[Redis sets](/docs/data-types/sets) are unordered collections of unique strings that act like the sets from your favorite programming language (for example, [Java HashSets](https://docs.oracle.com/javase/7/docs/api/java/util/HashSet.html), [Python sets](https://docs.python.org/3.10/library/stdtypes.html#set-types-set-frozenset), and so on). With a Redis set, you can add, remove, and test for existence in O(1) time (in other words, regardless of the number of set elements). For more information, see:

* [Overview of Redis sets](/docs/data-types/sets/)
* [Redis set command reference](/commands/?group=set)

### Hashes

[Redis hashes](/docs/data-types/hashes) are record types modeled as collections of field-value pairs.
As such, Redis hashes resemble [Python dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries), [Java HashMaps](https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html), and [Ruby hashes](https://ruby-doc.org/core-3.1.2/Hash.html). For more information, see:

* [Overview of Redis hashes](/docs/data-types/hashes/)
* [Redis hashes command reference](/commands/?group=hash)

### Sorted sets

[Redis sorted sets](/docs/data-types/sorted-sets) are collections of unique strings that maintain order by each string's associated score. For more information, see:

* [Overview of Redis sorted sets](/docs/data-types/sorted-sets)
* [Redis sorted set command reference](/commands/?group=sorted-set)

### Streams

A [Redis stream](/docs/data-types/streams) is a data structure that acts like an append-only log. Streams help record events in the order they occur and then syndicate them for processing. For more information, see:

* [Overview of Redis Streams](/docs/data-types/streams)
* [Redis Streams command reference](/commands/?group=stream)

### Geospatial indexes

[Redis geospatial indexes](/docs/data-types/geospatial) are useful for finding locations within a given geographic radius or bounding box. For more information, see:

* [Overview of Redis geospatial indexes](/docs/data-types/geospatial/)
* [Redis geospatial indexes command reference](/commands/?group=geo)

### Bitmaps

[Redis bitmaps](/docs/data-types/bitmaps/) let you perform bitwise operations on strings. For more information, see:

* [Overview of Redis bitmaps](/docs/data-types/bitmaps/)
* [Redis bitmap command reference](/commands/?group=bitmap)

### Bitfields

[Redis bitfields](/docs/data-types/bitfields/) efficiently encode multiple counters in a string value. Bitfields provide atomic get, set, and increment operations and support different overflow policies. For more information, see:

* [Overview of Redis bitfields](/docs/data-types/bitfields/)
* The `BITFIELD` command.
### HyperLogLog

The [Redis HyperLogLog](/docs/data-types/hyperloglogs) data structures provide probabilistic estimates of the cardinality (i.e., number of elements) of large sets. For more information, see:

* [Overview of Redis HyperLogLog](/docs/data-types/hyperloglogs)
* [Redis HyperLogLog command reference](/commands/?group=hyperloglog)

## Extensions

To extend the features provided by the included data types, use one of these options:

1. Write your own custom [server-side functions in Lua](/docs/manual/programmability/).
1. Write your own Redis module using the [modules API](/docs/reference/modules/) or check out the [community-supported modules](/docs/modules/).
1. Use [JSON](/docs/stack/json/), [querying](/docs/stack/search/), [time series](/docs/stack/timeseries/), and other capabilities provided by [Redis Stack](/docs/stack/).
HyperLogLog is a probabilistic data structure that estimates the cardinality of a set. As a probabilistic data structure, HyperLogLog trades perfect accuracy for efficient space utilization.

The Redis HyperLogLog implementation uses up to 12 KB and provides a standard error of 0.81%.

Counting unique items usually requires an amount of memory proportional to the number of items you want to count, because you need to remember the elements you have already seen in the past in order to avoid counting them multiple times. However, a set of algorithms exists that trades memory for precision: they return an estimated measure with a standard error, which, in the case of the Redis implementation for HyperLogLog, is less than 1%. The magic of this algorithm is that you no longer need to use an amount of memory proportional to the number of items counted, and instead can use a constant amount of memory: 12k bytes in the worst case, or a lot less if your HyperLogLog (we'll just call them HLLs from now on) has seen very few elements.

HLLs in Redis, while technically a different data structure, are encoded as a Redis string, so you can call `GET` to serialize an HLL, and `SET` to deserialize it back to the server.

Conceptually the HLL API is like using Sets to do the same task. You would `SADD` every observed element into a set, and would use `SCARD` to check the number of elements inside the set, which are unique since `SADD` will not re-add an existing element.

While you don't really *add items* into an HLL, because the data structure only contains a state that does not include actual elements, the API is the same:

* Every time you see a new element, you add it to the count with `PFADD`.
* When you want to retrieve the current approximation of the unique elements added with `PFADD` so far, you use the `PFCOUNT` command.
* If you need to merge two different HLLs, the `PFMERGE` command is available.
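The "same API, different internals" point can be sketched by modeling the command shapes with exact Python sets. To be clear, this is only a semantics model: a real HyperLogLog stores a fixed-size probabilistic state and returns estimates, while this sketch stores the elements themselves and is exact. The function names simply mirror the commands (`pfadd` returns 1 when the count changed, as `PFADD` does):

```python
# A dict of sets stands in for the keyspace; exact counting only.
hlls = {}

def pfadd(key, *elements):
    s = hlls.setdefault(key, set())
    before = len(s)
    s.update(elements)
    return 1 if len(s) != before else 0

def pfcount(key):
    return len(hlls.get(key, set()))

def pfmerge(dest, *sources):
    merged = hlls.setdefault(dest, set())
    for src in sources:
        merged |= hlls.get(src, set())
    return "OK"

pfadd("bikes", "Hyperion", "Deimos", "Phoebe", "Quaoar")
pfadd("commuter_bikes", "Salacia", "Mimas", "Quaoar")
pfmerge("all_bikes", "bikes", "commuter_bikes")
print(pfcount("all_bikes"))  # 6 -- Quaoar is counted once across both
```

The trade-off the real data structure makes is exactly this model's weakness: the sets above grow with the number of unique elements, whereas an HLL stays within 12 KB no matter how many elements it has seen.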
Since HLLs provide approximate counts of unique elements, the result of the merge will give you an approximation of the number of unique elements across both source HLLs.

{{< clients-example hll_tutorial pfadd >}}
> PFADD bikes Hyperion Deimos Phoebe Quaoar
(integer) 1
> PFCOUNT bikes
(integer) 4
> PFADD commuter_bikes Salacia Mimas Quaoar
(integer) 1
> PFMERGE all_bikes bikes commuter_bikes
OK
> PFCOUNT all_bikes
(integer) 6
{{< /clients-example >}}

Some examples of use cases for this data structure are counting unique queries performed by users in a search form every day, the number of unique visitors to a web page, and other similar cases. Redis is also able to perform the union of HLLs; please check the [full documentation](/commands#hyperloglog) for more information.

## Use cases

**Anonymous unique visits of a web page (SaaS, analytics tools)**

This application answers these questions:

- How many unique visits has this page had on this day?
- How many unique users have played this song?
- How many unique users have viewed this video?

{{% alert title="Note" color="warning" %}}
Storing the IP address or any other kind of personal identifier is against the law in some countries, which makes it impossible to get unique visitor statistics on your website.
{{% /alert %}}

One HyperLogLog is created per page (video/song) per period, and every IP/identifier is added to it on every visit.

## Basic commands

* `PFADD` adds an item to a HyperLogLog.
* `PFCOUNT` returns an estimate of the number of items in the set.
* `PFMERGE` combines two or more HyperLogLogs into one.

See the [complete list of HyperLogLog commands](https://redis.io/commands/?group=hyperloglog).
## Performance

Writing (`PFADD`) to and reading from (`PFCOUNT`) the HyperLogLog is done in constant time and space. Merging HLLs is O(n), where _n_ is the number of sketches.

## Limits

The HyperLogLog can estimate the cardinality of sets with up to 18,446,744,073,709,551,616 (2^64) members.

## Learn more

* [Redis new data structure: the HyperLogLog](http://antirez.com/news/75) has a lot of details about the data structure and its implementation in Redis.
* [Redis HyperLogLog Explained](https://www.youtube.com/watch?v=MunL8nnwscQ) shows you how to use Redis HyperLogLog data structures to build a traffic heat map.
At the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a *leader follower* (master-replica) replication that is simple to use and configure. It allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it *regardless* of what happens to the master.

This system works using three main mechanisms:

1. When a master and a replica instance are well-connected, the master keeps the replica updated by sending it a stream of commands that replicate the effects on the dataset happening on the master side due to client writes, keys expired or evicted, and any other action changing the master dataset.
2. When the link between the master and the replica breaks, because of network issues or because a timeout is sensed in the master or the replica, the replica reconnects and attempts to proceed with a partial resynchronization: it will try to obtain just the part of the stream of commands it missed during the disconnection.
3. When a partial resynchronization is not possible, the replica will ask for a full resynchronization. This involves a more complex process in which the master needs to create a snapshot of all its data, send it to the replica, and then continue sending the stream of commands as the dataset changes.

Redis uses asynchronous replication by default, which, being low latency and high performance, is the natural replication mode for the vast majority of Redis use cases. However, Redis replicas periodically acknowledge the amount of data they have received. So the master does not wait every time for a command to be processed by the replicas, but it knows, if needed, which replica has already processed which command. This allows optional synchronous replication.
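Mechanism 1 above (the command stream) can be sketched in a few lines of Python. The classes and method names here are hypothetical, purely for illustration: the master applies each write locally, then forwards the same command to every attached replica, which is why replicas converge to an exact copy.

```python
class ToyMaster:
    """Toy master: applies writes locally and forwards them to replicas."""

    def __init__(self):
        self.data = {}
        self.replicas = []

    def set(self, key, value):
        self.data[key] = value
        for r in self.replicas:          # propagate the write downstream
            r.apply(("SET", key, value))

    def expire_now(self, key):
        # An expiry or eviction on the master is replicated as an explicit DEL.
        self.data.pop(key, None)
        for r in self.replicas:
            r.apply(("DEL", key))


class ToyReplica:
    """Toy replica: blindly applies the command stream it receives."""

    def __init__(self):
        self.data = {}

    def apply(self, cmd):
        op, key, *rest = cmd
        if op == "SET":
            self.data[key] = rest[0]
        elif op == "DEL":
            self.data.pop(key, None)
```

After any sequence of writes and expiries on the master, the replica's dataset is identical to the master's, which is the invariant real replication maintains.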
Synchronous replication of certain data can be requested by the clients using the `WAIT` command. However, `WAIT` is only able to ensure that there are the specified number of acknowledged copies in the other Redis instances; it does not turn a set of Redis instances into a CP system with strong consistency: acknowledged writes can still be lost during a failover, depending on the exact configuration of the Redis persistence. However, with `WAIT` the probability of losing a write after a failure event is greatly reduced to certain hard-to-trigger failure modes.

You can check the Redis Sentinel or Redis Cluster documentation for more information about high availability and failover. The rest of this document mainly describes the basic characteristics of Redis basic replication.

### Important facts about Redis replication

* Redis uses asynchronous replication, with asynchronous replica-to-master acknowledgement of the amount of data processed.
* A master can have multiple replicas.
* Replicas are able to accept connections from other replicas. Aside from connecting a number of replicas to the same master, replicas can also be connected to other replicas in a cascading-like structure. Since Redis 4.0, all the sub-replicas will receive exactly the same replication stream from the master.
* Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more replicas perform the initial synchronization or a partial resynchronization.
* Replication is also largely non-blocking on the replica side. While the replica is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf.
https://github.com/redis/redis-doc/blob/master//docs/management/replication.md
Otherwise, you can configure Redis replicas to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The replica will block incoming connections during this brief window (which can be as long as many seconds for very large datasets). Since Redis 4.0 you can configure Redis so that the deletion of the old dataset happens in a different thread; however, loading the new initial dataset will still happen in the main thread and block the replica.
* Replication can be used both for scalability, to have multiple replicas for read-only queries (for example, slow O(N) operations can be offloaded to replicas), and simply for improving data safety and high availability.
* You can use replication to avoid the cost of having the master write the full dataset to disk: a typical technique involves configuring your master `redis.conf` to avoid persisting to disk at all, then connecting a replica configured to save from time to time, or with AOF enabled. However, this setup must be handled with care, since a restarting master will start with an empty dataset: if the replica tries to sync with it, the replica will be emptied as well.

## Safety of replication when master has persistence turned off

In setups where Redis replication is used, it is strongly advised to have persistence turned on in the master and in the replicas. When this is not possible, for example because of latency concerns due to very slow disks, instances should be configured to **avoid restarting automatically** after a reboot.
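A toy sketch of why this advice matters: a master without persistence that auto-restarts comes back empty, and its replicas then resynchronize against the empty dataset, wiping their own copies. The `full_resync` helper and the node variables here are hypothetical, standing in for the real sync process.

```python
def full_resync(master_data):
    # A full resync replaces the replica's dataset with the master's.
    return dict(master_data)

a = {"bikes": 3, "riders": 7}   # node A: master, persistence turned off
b = full_resync(a)              # node B replicates from A
c = full_resync(a)              # node C replicates from A

a = {}                          # A crashes and auto-restarts with no persisted data
b = full_resync(a)              # B resyncs against the empty master -> wiped
c = full_resync(a)              # C resyncs against the empty master -> wiped
```

After the restart, all three copies of the data are gone, which is exactly the failure mode the warning above is about.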
To better understand why masters with persistence turned off and configured to auto restart are dangerous, check the following failure mode, where data is wiped from the master and all its replicas:

1. We have a setup with node A acting as master, with persistence turned off, and nodes B and C replicating from node A.
2. Node A crashes, but it has some auto-restart system that restarts the process. Since persistence is turned off, the node restarts with an empty dataset.
3. Nodes B and C will replicate from node A, which is empty, so they'll effectively destroy their copy of the data.

When Redis Sentinel is used for high availability, turning off persistence on the master together with auto restart of the process is also dangerous. For example, the master can restart fast enough for Sentinel to not detect a failure, so that the failure mode described above happens.

Every time data safety is important, and replication is used with a master configured without persistence, auto restart of instances should be disabled.

## How Redis replication works

Every Redis master has a replication ID: it is a large pseudo random string that marks a given story of the dataset. Each master also takes an offset that increments for every byte of replication stream that is produced to be sent to replicas, in order to update the state of the replicas with the new changes modifying the dataset. The replication offset is incremented even if no replica is actually connected, so basically every given pair of:

    Replication ID, offset

identifies an exact version of the dataset of a master.
When replicas connect to masters, they use the `PSYNC` command to send their old master replication ID and the offsets they processed so far. This way the master can send just the incremental part needed. However, if there is not enough *backlog* in the master buffers, or if the replica is referring to a history (replication ID) which is no longer known, then a full resynchronization happens: in this case the replica will get a full copy of the dataset, from scratch.

This is how a full synchronization works in more detail:

The master starts a background saving process to produce an RDB file. At the same time it starts to buffer all new write commands received from the clients. When the background saving is complete, the master transfers the database file to the replica, which saves it on disk, and then loads it into memory. The master will then send all buffered commands to the replica. This is done as a stream of commands and is in the same format as the Redis protocol itself.

You can try it yourself via telnet. Connect to the Redis port while the server is doing some work and issue the `SYNC` command. You'll see a bulk transfer and then every command received by the master will be re-issued in the telnet session. Actually `SYNC` is an old protocol no longer used by newer Redis instances, but is still there for backward compatibility: it does not allow partial resynchronizations, so now `PSYNC` is used instead.

As already said, replicas are able to automatically reconnect when the master-replica link goes down for some reason. If the master receives multiple concurrent replica synchronization requests, it performs a single background save to serve all of them.
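The partial-versus-full decision described above can be sketched as a small model: the master keeps a bounded backlog of recent commands, and a `PSYNC` request is served incrementally only if the replication ID matches and the requested offset is still covered by the backlog. All names here are illustrative simplifications, not Redis internals (the real backlog is a byte buffer, not a command list).

```python
from collections import deque


class ToyBacklog:
    """Toy replication backlog: keeps only the most recent commands."""

    def __init__(self, replid, capacity=4):
        self.replid = replid
        self.start_offset = 0        # offset of the oldest buffered command
        self.commands = deque()
        self.capacity = capacity

    def append(self, cmd):
        self.commands.append(cmd)
        if len(self.commands) > self.capacity:
            self.commands.popleft()  # oldest history falls out of the backlog
            self.start_offset += 1

    def psync(self, replid, offset):
        end = self.start_offset + len(self.commands)
        if replid == self.replid and self.start_offset <= offset <= end:
            # like +CONTINUE: send only the commands the replica missed
            return ("partial", list(self.commands)[offset - self.start_offset:])
        # like +FULLRESYNC: unknown history or offset too old, send a snapshot
        return ("full", None)
```

A replica that reconnects quickly gets only the commands it missed; one that asks for an offset older than the backlog, or presents an unknown replication ID, triggers a full resynchronization.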
## Replication ID explained

In the previous section we said that if two instances have the same replication ID and replication offset, they have exactly the same data. However, it is useful to understand what exactly the replication ID is, and why instances actually have two replication IDs: the main ID and the secondary ID.

A replication ID basically marks a given *history* of the data set. Every time an instance restarts from scratch as a master, or a replica is promoted to master, a new replication ID is generated for this instance. The replicas connected to a master will inherit its replication ID after the handshake. So two instances with the same ID are related by the fact that they hold the same data, but potentially at a different time. It is the offset that works as a logical time to understand, for a given history (replication ID), who holds the most updated data set.

For instance, if two instances A and B have the same replication ID, but one with offset 1000 and one with offset 1023, it means that the first lacks certain commands applied to the data set. It also means that A, by applying just a few commands, may reach exactly the same state as B.

The reason why Redis instances have two replication IDs is because of replicas that are promoted to masters. After a failover, the promoted replica still needs to remember its past replication ID, because that replication ID was the one of the former master.
In this way, when other replicas sync with the new master, they will try to perform a partial resynchronization using the old master replication ID. This will work as expected, because when the replica is promoted to master it sets its secondary ID to its main ID, remembering the offset at which this ID switch happened. Later it will select a new random replication ID, because a new history begins. When handling the new replicas connecting, the master will match their IDs and offsets both with the current ID and the secondary ID (up to a given offset, for safety). In short this means that after a failover, replicas connecting to the newly promoted master don't have to perform a full sync.

In case you wonder why a replica promoted to master needs to change its replication ID after a failover: it is possible that the old master is still working as a master because of some network partition; retaining the same replication ID would violate the fact that the same ID and same offset of any two random instances mean they have the same data set.

## Diskless replication

Normally a full resynchronization requires creating an RDB file on disk, then reloading the same RDB from disk to feed the replicas with the data. With slow disks this can be a very stressing operation for the master. Redis version 2.8.18 is the first version to have support for diskless replication. In this setup the child process directly sends the RDB over the wire to replicas, without using the disk as intermediate storage.

## Configuration

Configuring basic Redis replication is trivial: just add the following line to the replica configuration file:

    replicaof 192.168.1.1 6379

Of course you need to replace 192.168.1.1 6379 with your master IP address (or hostname) and port.
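The failover ID switch described earlier in this section (main ID, secondary ID, and the offset at which the switch happened) can be sketched as follows. The class and method names are hypothetical, purely illustrative of the bookkeeping involved, not Redis internals.

```python
import secrets


class ToyInstance:
    """Toy Redis instance tracking a main and a secondary replication ID."""

    def __init__(self, replid):
        self.replid = replid
        self.secondary_replid = None
        self.secondary_valid_upto = 0
        self.offset = 0

    def promote_to_master(self):
        # Remember the old history and where it ended, then start a new one.
        self.secondary_replid = self.replid
        self.secondary_valid_upto = self.offset
        self.replid = secrets.token_hex(20)  # new 40-char pseudo random ID

    def can_partial_resync(self, replid, offset):
        # Match against the current history...
        if replid == self.replid and offset <= self.offset:
            return True
        # ...or against the old history, but only up to the switch offset.
        return (replid == self.secondary_replid
                and offset <= self.secondary_valid_upto)
```

A former sibling replica presenting the old master's ID with an offset before the switch point can still resync partially; asking for an offset past the switch point forces a full sync, since that part of the old history never existed on the promoted node.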
Alternatively, you can call the `REPLICAOF` command and the master host will start a sync with the replica.

There are also a few parameters for tuning the replication backlog taken in memory by the master to perform the partial resynchronization. See the example `redis.conf` shipped with the Redis distribution for more information.

Diskless replication can be enabled using the `repl-diskless-sync` configuration parameter. The delay before starting the transfer, to wait for more replicas to arrive after the first one, is controlled by the `repl-diskless-sync-delay` parameter. Please refer to the example `redis.conf` file in the Redis distribution for more details.

## Read-only replica

Since Redis 2.6, replicas support a read-only mode that is enabled by default. This behavior is controlled by the `replica-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`.

Read-only replicas will reject all write commands, so that it is not possible to write to a replica by mistake. This does not mean that the feature is intended to expose a replica instance to the internet or, more generally, to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. The [Security](/topics/security) page describes how to secure a Redis instance.

You may wonder why it is possible to revert the read-only setting and have replica instances that can be targeted by write operations. The answer is that writable replicas exist only for historical reasons. Using writable replicas can result in inconsistency between the master and the replica, so it is not recommended to use them.
To understand in which situations this can be a problem, we need to understand how replication works. Changes on the master are replicated by propagating regular Redis commands to the replica. When a key expires on the master, this is propagated as a DEL command. A key which exists on the master but has been deleted, has expired, or has a different type on the replica will react differently than intended to commands like DEL, INCR or RPOP propagated from the master. The propagated command may fail on the replica or produce a different outcome.

To minimize the risks (if you insist on using writable replicas) we suggest you follow these recommendations:

* Don't write to keys in a writable replica that are also used on the master. (This can be hard to guarantee if you don't have control over all the clients that write to the master.)
* Don't configure an instance as a writable replica as an intermediary step when upgrading a set of instances in a running system. In general, don't configure an instance as a writable replica if it can ever be promoted to a master, if you want to guarantee data consistency.

Historically, there were some use cases that were considered legitimate for writable replicas. As of version 7.0, these use cases are now all obsolete and the same can be achieved by other means. For example:

* Computing slow Set or Sorted set operations and storing the result in temporary local keys using commands like `SUNIONSTORE` and `ZINTERSTORE`. Instead, use commands that return the result without storing it, such as `SUNION` and `ZINTER`.
* Using the `SORT` command (which is not considered a read-only command because of the optional STORE option, and therefore cannot be used on a read-only replica). Instead, use `SORT_RO`, which is a read-only command.
* Using `EVAL` and `EVALSHA`, which are also not considered read-only commands, because the Lua script may call write commands. Instead, use `EVAL_RO` and `EVALSHA_RO`, where the Lua script can only call read-only commands.

While writes to a replica will be discarded if the replica and the master resync or if the replica is restarted, there is no guarantee that they will sync automatically.

Before version 4.0, writable replicas were incapable of expiring keys with a time to live set. This means that if you use `EXPIRE` or other commands that set a maximum TTL for a key, the key will leak, and while you may no longer see it while accessing it with read commands, you will see it in the count of keys and it will still use memory. Redis 4.0 RC3 and greater versions are able to evict keys with TTL as masters do, with the exception of keys written in DB numbers greater than 63 (but by default Redis instances only have 16 databases). Note though that even in versions greater than 4.0, using `EXPIRE` on a key that could ever exist on the master can cause inconsistency between the replica and the master.

Also note that since Redis 4.0 replica writes are only local, and are not propagated to sub-replicas attached to the instance. Sub-replicas will instead always receive a replication stream identical to the one sent by the top-level master to the intermediate replicas.
So for example in the following setup:

    A ---> B ---> C

Even if `B` is writable, `C` will not see `B`'s writes and will instead have a dataset identical to that of the master instance `A`.

## Setting a replica to authenticate to a master

If your master has a password via `requirepass`, it's trivial to configure the replica to use that password in all sync operations.

To do it on a running instance, use `redis-cli` and type:

    config set masterauth <password>

To set it permanently, add this to your config file:

    masterauth <password>

## Allow writes only with N attached replicas

Starting with Redis 2.8, you can configure a Redis master to accept write queries only if at least N replicas are currently connected to the master. However, because Redis uses asynchronous replication, it is not possible to ensure the replica actually received a given write, so there is always a window for data loss.

This is how the feature works:

* Redis replicas ping the master every second, acknowledging the amount of replication stream processed.
* Redis masters will remember the last time they received a ping from every replica.
* The user can configure a minimum number of replicas that have a lag not greater than a maximum number of seconds.

If there are at least N replicas, with a lag less than M seconds, then the write will be accepted.

You may think of it as a best effort data safety mechanism, where consistency is not ensured for a given write, but at least the time window for data loss is restricted to a given number of seconds. In general bound data loss is better than unbound one. If the conditions are not met, the master will instead reply with an error and the write will not be accepted.
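The accept/reject check described above reduces to a simple predicate. This is a hedged sketch of that logic (the function name and arguments are illustrative; Redis derives the lag from the per-replica ping timestamps it tracks internally):

```python
def write_allowed(last_ack_times, now, min_replicas, max_lag_seconds):
    """Accept a write only if at least min_replicas acknowledged recently.

    last_ack_times: last ping/ack timestamp seen from each connected replica.
    """
    fresh = sum(1 for t in last_ack_times if now - t <= max_lag_seconds)
    return fresh >= min_replicas
```

If the predicate is false the master replies with an error instead of applying the write, which is exactly the best-effort bound on the data-loss window described above.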
There are two configuration parameters for this feature:

* min-replicas-to-write `<number of replicas>`
* min-replicas-max-lag `<number of seconds>`

For more information, please check the example `redis.conf` file shipped with the Redis source distribution.

## How Redis replication deals with expires on keys

Redis expires allow keys to have a limited time to live (TTL). Such a feature depends on the ability of an instance to count the time; however, Redis replicas correctly replicate keys with expires, even when such keys are altered using Lua scripts.

To implement such a feature Redis cannot rely on the ability of the master and replica to have synced clocks, since this is a problem that cannot be solved and would result in race conditions and diverging data sets, so Redis uses three main techniques to make the replication of expired keys work:

1. Replicas don't expire keys; instead they wait for masters to expire the keys. When a master expires a key (or evicts it because of LRU), it synthesizes a `DEL` command which is transmitted to all the replicas.
2. However, because of master-driven expire, sometimes replicas may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time. To deal with that, the replica uses its logical clock to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive).
In this way replicas avoid reporting logically expired keys that still exist. In practical terms, an HTML fragment cache that uses replicas to scale will avoid returning items that are already older than the desired time to live.
3. During Lua script executions no key expiries are performed. As a Lua script runs, conceptually the time in the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys expiring in the middle of a script, and is needed to send the same script to the replica in a way that is guaranteed to have the same effects on the data set.

Once a replica is promoted to a master it will start to expire keys independently, and will not require any help from its old master.

## Configuring replication in Docker and NAT

When Docker, or other types of containers using port forwarding or Network Address Translation, is used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the master `INFO` or `ROLE` command output is scanned to discover replicas' addresses.

The problem is that the `ROLE` command, and the replication section of the `INFO` output, when issued on a master instance, will show replicas as having the IP address they use to connect to the master, which, in environments using NAT, may be different from the logical address of the replica instance (the one clients should use to connect to replicas).

Similarly, the replicas will be listed with the listening port configured in `redis.conf`, which may be different from the forwarded port in case the port is remapped.

To fix both issues, it is possible, since Redis 3.2.2, to force a replica to announce an arbitrary pair of IP and port to the master.
The two configuration directives to use are:

    replica-announce-ip 5.5.5.5
    replica-announce-port 1234

They are documented in the example `redis.conf` of recent Redis distributions.

## The INFO and ROLE command

There are two Redis commands that provide a lot of information on the current replication parameters of master and replica instances. One is `INFO`. If the command is called with the `replication` argument, as `INFO replication`, only information relevant to replication is displayed. Another, more computer-friendly command is `ROLE`, which provides the replication status of masters and replicas together with their replication offsets, list of connected replicas, and so forth.

## Partial sync after restarts and failovers

Since Redis 4.0, when an instance is promoted to master after a failover, it will still be able to perform a partial resynchronization with the replicas of the old master. To do so, the replica remembers the old replication ID and offset of its former master, so it can provide part of the backlog to the connecting replicas even if they ask for the old replication ID.

However, the new replication ID of the promoted replica will be different, since it constitutes a different history of the data set. For example, the old master can return available and can continue accepting writes for some time, so using the same replication ID in the promoted replica would violate the rule that a replication ID and offset pair identifies only a single data set.
Moreover, replicas, when powered off gently and restarted, are able to store in the RDB file the information needed to resync with their master. This is useful in case of upgrades. When this is needed, it is better to use the `SHUTDOWN` command in order to perform a save & quit operation on the replica.

It is not possible to partially sync a replica that restarted via the AOF file. However, the instance may be switched to RDB persistence before shutting it down, then restarted, and finally AOF can be enabled again.

## `Maxmemory` on replicas

By default, a replica will ignore `maxmemory` (unless it is promoted to master after a failover or manually). It means that the eviction of keys will be handled by the master, which sends DEL commands to the replica as keys are evicted on the master side.

This behavior ensures that masters and replicas stay consistent, which is usually what you want. However, if your replica is writable, or you want the replica to have a different memory setting, and you are sure all the writes performed to the replica are idempotent, then you may change this default (but be sure to understand what you are doing).

Note that since the replica by default does not evict, it may end up using more memory than what is set via `maxmemory` (since there are certain buffers that may be larger on the replica, or data structures may sometimes take more memory, and so forth). Make sure you monitor your replicas, and make sure they have enough memory to never hit a real out-of-memory condition before the master hits the configured `maxmemory` setting.

To change this behavior, you can allow a replica to not ignore `maxmemory`. The configuration directive to use is:

    replica-ignore-maxmemory no
## Redis setup tips

### Linux

* Deploy Redis using the Linux operating system. Redis is also tested on OS X, and from time to time on FreeBSD and OpenBSD systems. However, Linux is where most of the stress testing is performed, and where most production deployments are run.
* Set the Linux kernel overcommit memory setting to 1. Add `vm.overcommit_memory = 1` to `/etc/sysctl.conf`. Then, reboot or run the command `sysctl vm.overcommit_memory=1` to activate the setting. See [FAQ: Background saving fails with a fork() error on Linux?](https://redis.io/docs/get-started/faq/#background-saving-fails-with-a-fork-error-on-linux) for details.
* To ensure that the Linux kernel feature Transparent Huge Pages does not impact Redis memory usage and latency, run the command `echo never > /sys/kernel/mm/transparent_hugepage/enabled` to disable it. See [Latency Diagnosis - Latency induced by transparent huge pages](https://redis.io/docs/management/optimization/latency/#latency-induced-by-transparent-huge-pages) for additional context.

### Memory

* Ensure that swap is enabled and that your swap file size is equal to the amount of memory on your system. If Linux does not have swap set up, and your Redis instance accidentally consumes too much memory, Redis can crash when it is out of memory, or the Linux kernel OOM killer can kill the Redis process. When swapping is enabled, you can detect latency spikes and act on them.
* Set an explicit `maxmemory` option limit in your instance to make sure that it will report errors instead of failing when the system memory limit is close to being reached. Note that `maxmemory` should be set by calculating the overhead for Redis, other than data, and the fragmentation overhead. So if you think you have 10 GB of free memory, set it to 8 or 9.
* If you are using Redis in a write-heavy application, while saving an RDB file on disk or rewriting the AOF log, Redis can use up to 2 times the memory normally used.
The additional memory used is proportional to the number of memory pages modified by writes during the saving process, so it is often proportional to the number of keys (or aggregate type items) touched during this time. Make sure to size your memory accordingly.
* See the `LATENCY DOCTOR` and `MEMORY DOCTOR` commands to assist in troubleshooting.

### Imaging

* When running under daemontools, use `daemonize no`.

### Replication

* Set up a non-trivial replication backlog in proportion to the amount of memory Redis is using. The backlog allows replicas to sync with the primary (master) instance much more easily.
* If you use replication, Redis performs RDB saves even if persistence is disabled (this does not apply to diskless replication). If you don't have disk usage on the master, enable diskless replication.
* If you are using replication, ensure that either your master has persistence enabled, or that it does not automatically restart on crashes. Replicas will try to maintain an exact copy of the master, so if a master restarts with an empty data set, replicas will be wiped as well.

### Security

* By default, Redis does not require any authentication and listens to all the network interfaces. This is a big security issue if you leave Redis exposed on the internet or other places where attackers can reach it. See for example [this attack](http://antirez.com/news/96) to see how dangerous it can be. Please check our [security page](/topics/security) and the [quick start](/topics/quickstart) for information about how to secure Redis.
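The kernel settings recommended in the Linux section above can be verified with a short shell check before starting Redis. This is an illustrative sketch; the helper function names are our own, not Redis tooling, and they only inspect the values passed to them:

```shell
#!/bin/sh
# Hypothetical helpers that report whether the recommended kernel
# settings are in place.
check_overcommit() {
    # expects the contents of /proc/sys/vm/overcommit_memory
    if [ "$1" = "1" ]; then
        echo "overcommit: ok"
    else
        echo "overcommit: add 'vm.overcommit_memory = 1' to /etc/sysctl.conf"
    fi
}

check_thp() {
    # expects the contents of /sys/kernel/mm/transparent_hugepage/enabled,
    # where the active value is bracketed, e.g. "always madvise [never]"
    case "$1" in
        *"[never]"*) echo "thp: ok" ;;
        *)           echo "thp: echo never > /sys/kernel/mm/transparent_hugepage/enabled" ;;
    esac
}

# On a live Linux box:
if [ -r /proc/sys/vm/overcommit_memory ]; then
    check_overcommit "$(cat /proc/sys/vm/overcommit_memory)"
fi
if [ -r /sys/kernel/mm/transparent_hugepage/enabled ]; then
    check_thp "$(cat /sys/kernel/mm/transparent_hugepage/enabled)"
fi
```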
https://github.com/redis/redis-doc/blob/master//docs/management/admin.md
## Running Redis on EC2

* Use HVM based instances, not PV based instances.
* Do not use old instance families. For example, use m3.medium with HVM instead of m1.medium with PV.
* The use of Redis persistence with EC2 EBS volumes needs to be handled with care, because sometimes EBS volumes have high latency characteristics.
* You may want to try the new diskless replication if you have issues when replicas are synchronizing with the master.

## Upgrading or restarting a Redis instance without downtime

Redis is designed to be a long-running process on your server. You can modify many configuration options without a restart using the `CONFIG SET` command. You can also switch from AOF to RDB snapshots persistence, or the other way around, without restarting Redis. Check the output of the `CONFIG GET *` command for more information.

From time to time, a restart is required, for example, to upgrade the Redis process to a newer version, or when you need to modify a configuration parameter that is currently not supported by the `CONFIG` command. Follow these steps to avoid downtime.

* Set up your new Redis instance as a replica of your current Redis instance. In order to do so, you need a different server, or a server that has enough RAM to keep two instances of Redis running at the same time.
* If you use a single server, ensure that the replica is started on a different port than the master instance, otherwise the replica cannot start.
* Wait for the initial replication synchronization to complete. Check the replica's log file.
* Using `INFO`, ensure the master and replica have the same number of keys. Use `redis-cli` to check that the replica is working as expected and is replying to your commands.
* Allow writes to the replica using `CONFIG SET slave-read-only no`.
* Configure all your clients to use the new instance (the replica).
Note that you may want to use the `CLIENT PAUSE` command to ensure that no client can write to the old master during the switch.
* Once you confirm that the master is no longer receiving any queries (you can check this using the `MONITOR` command), elect the replica to master using the `REPLICAOF NO ONE` command, and then shut down your master.

If you are using [Redis Sentinel](/topics/sentinel) or [Redis Cluster](/topics/cluster-tutorial), the simplest way to upgrade to newer versions is to upgrade one replica after the other. Then you can perform a manual failover to promote one of the upgraded replicas to master, and finally promote the last replica.

---

**NOTE** Redis Cluster 4.0 is not compatible with Redis Cluster 3.2 at cluster bus protocol level, so a mass restart is needed in this case. However, Redis 5 cluster bus is backward compatible with Redis 4.

---
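The switch-over steps above can be collected into a script. The following is a dry-run sketch: it prints the `redis-cli` invocations instead of executing them, and the addresses and ports are placeholders to adjust for your deployment (swap `echo` for real execution):

```shell
#!/bin/sh
# Placeholder connection options for the old master and the new instance.
OLD="-h 127.0.0.1 -p 6379"
NEW="-h 127.0.0.1 -p 6380"

run() { echo "redis-cli $*"; }   # dry run: swap 'echo' for real execution

run $NEW REPLICAOF 127.0.0.1 6379       # make the new instance a replica
run $NEW CONFIG SET slave-read-only no  # allow writes on the replica
run $OLD CLIENT PAUSE 10000             # stop writes to the old master during the switch
run $NEW REPLICAOF NO ONE               # promote the replica to master
run $OLD SHUTDOWN                       # finally retire the old master
```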
This page tries to help you with what to do if you have issues with Redis. Part of the Redis project is helping people that are experiencing problems, because we don't like to leave people alone with their issues.

* If you have **latency problems** with Redis, where Redis in some way appears to be idle for some time, read our [Redis latency troubleshooting guide](/topics/latency).
* Redis stable releases are usually very reliable. However, in the rare event you are **experiencing crashes**, the developers can help a lot more if you provide debugging information. Please read our [Debugging Redis guide](/topics/debugging).
* We have a long history of users experiencing crashes with Redis that actually turned out to be servers with **broken RAM**. Please test your RAM using `redis-server --test-memory` in case Redis is not stable on your system. Redis' built-in memory test is fast and reasonably reliable, but if you can, you should reboot your server and use [memtest86](http://memtest86.com).

For every other problem, please drop a message to the [Redis Google Group](http://groups.google.com/group/redis-db). We will be glad to help. You can also find assistance on the [Redis Discord server](https://discord.gg/redis).

### List of known critical bugs in Redis 3.0.x, 2.8.x and 2.6.x

To find a list of critical bugs, please refer to the changelogs:

* [Redis 3.0 Changelog](https://raw.githubusercontent.com/redis/redis/3.0/00-RELEASENOTES)
* [Redis 2.8 Changelog](https://raw.githubusercontent.com/redis/redis/2.8/00-RELEASENOTES)
* [Redis 2.6 Changelog](https://raw.githubusercontent.com/redis/redis/2.6/00-RELEASENOTES)

Check the *upgrade urgency* level in each patch release to more easily spot releases that included important fixes.

### List of known Linux related bugs affecting Redis

* Ubuntu 10.04 and 10.10 contain [bugs](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/666211) that can cause performance issues.
The default kernels shipped with these distributions are not recommended. The bugs were reported as affecting EC2 instances, but some users also reported impact on other servers.
* Certain versions of the Xen hypervisor are reported to have poor fork() performance. See [the latency page](/topics/latency) for more information.
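For reference, `redis-server --test-memory` takes the amount of RAM to test in megabytes; testing 2 GB would look like the following (illustrative invocation, run on the server under suspicion):

```
redis-server --test-memory 2048
```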
https://github.com/redis/redis-doc/blob/master//docs/management/troubleshooting.md
Redis Sentinel provides high availability for Redis when not using [Redis Cluster](/docs/manual/scaling).

Redis Sentinel also provides other collateral tasks such as monitoring and notifications, and acts as a configuration provider for clients.

This is the full list of Sentinel capabilities at a macroscopic level (i.e. the *big picture*):

* **Monitoring**. Sentinel constantly checks if your master and replica instances are working as expected.
* **Notification**. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances.
* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a replica is promoted to master, the other additional replicas are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting.
* **Configuration provider**. Sentinel acts as a source of authority for client service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.

## Sentinel as a distributed system

Redis Sentinel is a distributed system: Sentinel itself is designed to run in a configuration where there are multiple Sentinel processes cooperating together. The advantages of having multiple Sentinel processes cooperating are the following:

1. Failure detection is performed when multiple Sentinels agree about the fact that a given master is no longer available. This lowers the probability of false positives.
2. Sentinel works even if not all the Sentinel processes are working, making the system robust against failures. There is no fun in having a failover system which is itself a single point of failure, after all.
The sum of Sentinels, Redis instances (masters and replicas) and clients connecting to Sentinel and Redis also forms a larger distributed system with specific properties. In this document, concepts will be introduced gradually, starting from the basic information needed in order to understand the basic properties of Sentinel, and moving to more complex (optional) information needed in order to understand how exactly Sentinel works.

## Sentinel quick start

### Obtaining Sentinel

The current version of Sentinel is called **Sentinel 2**. It is a rewrite of the initial Sentinel implementation using stronger and simpler-to-predict algorithms (that are explained in this documentation).

A stable release of Redis Sentinel has shipped since Redis 2.8. New development is performed in the *unstable* branch, and new features are sometimes back-ported into the latest stable branch as soon as they are considered to be stable.

Redis Sentinel version 1, shipped with Redis 2.6, is deprecated and should not be used.

### Running Sentinel

If you are using the `redis-sentinel` executable (or if you have a symbolic link with that name to the `redis-server` executable), you can run Sentinel with the following command line:

```
redis-sentinel /path/to/sentinel.conf
```

Otherwise you can use the `redis-server` executable directly, starting it in Sentinel mode:

```
redis-server /path/to/sentinel.conf --sentinel
```

Both ways work the same. However, **it is mandatory** to use a configuration file when running Sentinel, as this file will be used by the system to save the current state, which will be reloaded in case of restarts. Sentinel will simply refuse to start if no configuration file is given or if the configuration file path is not writable.
https://github.com/redis/redis-doc/blob/master//docs/management/sentinel.md
Sentinels by default run **listening for connections to TCP port 26379**, so for Sentinels to work, port 26379 of your servers **must be open** to receive connections from the IP addresses of the other Sentinel instances. Otherwise Sentinels can't talk and can't agree about what to do, so failover will never be performed.

### Fundamental things to know about Sentinel before deploying

1. You need at least three Sentinel instances for a robust deployment.
2. The three Sentinel instances should be placed into computers or virtual machines that are believed to fail in an independent way. So for example different physical servers or virtual machines executed on different availability zones.
3. The Sentinel + Redis distributed system does not guarantee that acknowledged writes are retained during failures, since Redis uses asynchronous replication. However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments, while there are other less secure ways to deploy it.
4. You need Sentinel support in your clients. Popular client libraries have Sentinel support, but not all.
5. There is no HA setup which is safe if you don't test it from time to time in development environments, or even better, if you can, in production environments, to check that it works. You may have a misconfiguration that will become apparent only when it's too late (at 3am when your master stops working).
6. **Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master. Check the [section about _Sentinel and Docker_](#sentinel-docker-nat-and-possible-issues) later in this document for more information.
### Configuring Sentinel

The Redis source distribution contains a file called `sentinel.conf` that is a self-documented example configuration file you can use to configure Sentinel. However, a typical minimal configuration file looks like the following:

```
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1

sentinel monitor resque 192.168.1.3 6380 4
sentinel down-after-milliseconds resque 10000
sentinel failover-timeout resque 180000
sentinel parallel-syncs resque 5
```

You only need to specify the masters to monitor, giving each separate master (that may have any number of replicas) a different name. There is no need to specify replicas, which are auto-discovered. Sentinel will update the configuration automatically with additional information about replicas (in order to retain the information in case of restart). The configuration is also rewritten every time a replica is promoted to master during a failover and every time a new Sentinel is discovered.

The example configuration above basically monitors two sets of Redis instances, each composed of a master and an undefined number of replicas. One set of instances is called `mymaster`, and the other `resque`.

The meaning of the arguments of `sentinel monitor` statements is the following:

```
sentinel monitor <master-name> <ip> <port> <quorum>
```

For the sake of clarity, let's check line by line what the configuration options mean:

The first line is used to tell Redis to monitor a master called *mymaster*, that is at address 127.0.0.1 and port 6379, with a quorum of 2. Everything is pretty obvious but the **quorum** argument:

* The **quorum** is the number of Sentinels that need to agree about the fact that the master is not reachable, in order to really mark the master as failing, and eventually start a failover procedure if possible.
* However, **the quorum is only used to detect the failure**. In order to actually perform a failover, one of the Sentinels needs to be elected leader for the failover and be authorized to proceed. This only happens with the vote of the **majority of the Sentinel processes**.

So for example if you have 5 Sentinel processes, and the quorum for a given master is set to the value of 2, this is what happens:

* If two Sentinels agree at the same time about the master being unreachable, one of the two will try to start a failover.
* If there are at least a total of three Sentinels reachable, the failover will be authorized and will actually start.

In practical terms this means that during failures **Sentinel never starts a failover if the majority of Sentinel processes are unable to talk** (aka no failover in the minority partition).

### Other Sentinel options

The other options are almost always in the form:

```
sentinel <option_name> <master_name> <option_value>
```

And are used for the following purposes:

* `down-after-milliseconds` is the time in milliseconds an instance should not be reachable (either does not reply to our PINGs or it is replying with an error) for a Sentinel to start thinking it is down.
* `parallel-syncs` sets the number of replicas that can be reconfigured to use the new master after a failover at the same time. The lower the number, the more time it will take for the failover process to complete; however, if the replicas are configured to serve old data, you may not want all the replicas to re-synchronize with the master at the same time. While the replication process is mostly non-blocking for a replica, there is a moment when it stops to load the bulk data from the master. You may want to make sure only one replica at a time is not reachable by setting this option to the value of 1.
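The interplay between quorum and majority can be sketched with a little arithmetic. `can_failover` is an illustrative helper of our own, not a Sentinel command:

```shell
#!/bin/sh
# can_failover TOTAL REACHABLE QUORUM
# Failure detection needs QUORUM agreeing Sentinels; the failover itself
# must then be authorized by a majority of ALL Sentinel processes.
can_failover() {
    total=$1; reachable=$2; quorum=$3
    majority=$(( total / 2 + 1 ))
    if [ "$reachable" -ge "$quorum" ] && [ "$reachable" -ge "$majority" ]; then
        echo yes
    else
        echo no
    fi
}

can_failover 5 3 2   # -> yes (quorum met, and 3 >= majority of 3)
can_failover 5 2 2   # -> no  (quorum met, but no majority: minority partition)
```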
Additional options are described in the rest of this document and documented in the example `sentinel.conf` file shipped with the Redis distribution.

Configuration parameters can be modified at runtime:

* Master-specific configuration parameters are modified using `SENTINEL SET`.
* Global configuration parameters are modified using `SENTINEL CONFIG SET`.

See the [_Reconfiguring Sentinel at runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.

### Example Sentinel deployments

Now that you know the basic information about Sentinel, you may wonder where you should place your Sentinel processes, how many Sentinel processes you need, and so forth. This section shows a few example deployments.

We use ASCII art to show you configuration examples in a *graphical* format; this is what the different symbols mean:

```
+--------------------+
| This is a computer |
| or VM that fails   |
| independently. We  |
| call it a "box"    |
+--------------------+
```

We write inside the boxes what they are running:

```
+-------------------+
| Redis master M1   |
| Redis Sentinel S1 |
+-------------------+
```

Different boxes are connected by lines, to show that they are able to talk:

```
+-------------+               +-------------+
| Sentinel S1 |---------------| Sentinel S2 |
+-------------+               +-------------+
```

Network partitions are shown as interrupted lines using slashes:

```
+-------------+                +-------------+
| Sentinel S1 |------ // ------| Sentinel S2 |
+-------------+                +-------------+
```
Also note that:

* Masters are called M1, M2, M3, ..., Mn.
* Replicas are called R1, R2, R3, ..., Rn (R stands for *replica*).
* Sentinels are called S1, S2, S3, ..., Sn.
* Clients are called C1, C2, C3, ..., Cn.
* When an instance changes role because of Sentinel actions, we put it inside square brackets, so [M1] means an instance that is now a master because of Sentinel intervention.

Note that we will never show **setups where just two Sentinels are used**, since Sentinels always need **to talk with the majority** in order to start a failover.

#### Example 1: just two Sentinels, DON'T DO THIS

```
+----+         +----+
| M1 |---------| R1 |
| S1 |         | S2 |
+----+         +----+

Configuration: quorum = 1
```

* In this setup, if the master M1 fails, R1 will be promoted since the two Sentinels can reach agreement about the failure (obviously with quorum set to 1) and can also authorize a failover because the majority is two. So apparently it could superficially work, however check the next points to see why this setup is broken.
* If the box where M1 is running stops working, S1 also stops working. The Sentinel running in the other box, S2, will not be able to authorize a failover, so the system will become unavailable.

Note that a majority is needed in order to order different failovers, and later propagate the latest configuration to all the Sentinels. Also note that the ability to failover in a single side of the above setup, without any agreement, would be very dangerous:

```
+----+           +------+
| M1 |----//-----| [M1] |
| S1 |           | S2   |
+----+           +------+
```

In the above configuration we created two masters (assuming S2 could failover without authorization) in a perfectly symmetrical way. Clients may write indefinitely to both sides, and there is no way to understand when the partition heals what configuration is the right one, in order to prevent a *permanent split brain condition*.

So please **deploy at least three Sentinels in three different boxes**, always.
#### Example 2: basic setup with three boxes

This is a very simple setup, with the advantage that it is simple to tune for additional safety. It is based on three boxes, each box running both a Redis process and a Sentinel process.

```
       +----+
       | M1 |
       | S1 |
       +----+
          |
+----+    |    +----+
| R2 |----+----| R3 |
| S2 |         | S3 |
+----+         +----+

Configuration: quorum = 2
```

If the master M1 fails, S2 and S3 will agree about the failure and will be able to authorize a failover, making clients able to continue.

In every Sentinel setup, as Redis uses asynchronous replication, there is always the risk of losing some writes because a given acknowledged write may not be able to reach the replica which is promoted to master. However in the above setup there is a higher risk due to clients being partitioned away with an old master, like in the following picture:

```
         +----+
         | M1 |
         | S1 | <- C1 (writes will be lost)
         +----+
            |
            /
            /
+------+    |    +----+
| [M2] |----+----| R3 |
| S2   |         | S3 |
+------+         +----+
```

In this case a network partition isolated the old master M1,
so the replica R2 is promoted to master. However clients, like C1, that are in the same partition as the old master, may continue to write data to the old master. This data will be lost forever, since when the partition heals the master will be reconfigured as a replica of the new master, discarding its data set.

This problem can be mitigated using the following Redis replication feature, which allows a master to stop accepting writes if it detects that it is no longer able to transfer its writes to the specified number of replicas:

```
min-replicas-to-write 1
min-replicas-max-lag 10
```

With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information), a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous, *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.

Using this configuration, the old Redis master M1 in the above example will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration, and will continue with the new master.

However there is no free lunch. With this refinement, if the two replicas are down, the master will stop accepting writes. It's a trade-off.

#### Example 3: Sentinel in the client boxes

Sometimes we have only two Redis boxes available, one for the master and one for the replica.
The configuration in Example 2 is not viable in that case, so we can resort to the following, where Sentinels are placed where clients are:

```
            +----+         +----+
            | M1 |----+----| R1 |
            |    |    |    |    |
            +----+    |    +----+
                      |
         +------------+------------+
         |            |            |
         |            |            |
      +----+        +----+      +----+
      | C1 |        | C2 |      | C3 |
      | S1 |        | S2 |      | S3 |
      +----+        +----+      +----+

      Configuration: quorum = 2
```

In this setup, the point of view of the Sentinels is the same as that of the clients: if a master is reachable by the majority of the clients, it is fine. C1, C2, C3 here are generic clients; it does not mean that C1 identifies a single client connected to Redis. It is more likely something like an application server, a Rails app, or something like that.

If the box where M1 and S1 are running fails, the failover will happen without issues. However, it is easy to see that different network partitions will result in different behaviors. For example, Sentinel will not be able to failover if the network between the clients and the Redis servers is disconnected, since the Redis master and replica will both be unavailable.

Note that if C3 gets partitioned with M1 (hardly possible with the network described above, but more likely possible with different layouts, or because of failures at the software layer), we have a similar issue as described in Example 2, with the difference that here we have no way to break the symmetry, since there is just a replica and master, so the master can't stop accepting queries when it is disconnected from its replica, otherwise the master would never be available during replica failures.
So this is a valid setup, but the setup in Example 2 has advantages such as the HA system of Redis running in the same boxes as Redis itself, which may be simpler to manage, and the ability to put a bound on the amount of time a master in the minority partition can receive writes.

#### Example 4: Sentinel client side with less than three clients

The setup described in Example 3 cannot be used if there are less than three boxes on the client side (for example three web servers). In this case we need to resort to a mixed setup like the following:

```
            +----+         +----+
            | M1 |----+----| R1 |
            | S1 |    |    | S2 |
            +----+    |    +----+
                      |
               +------+-----+
               |            |
               |            |
            +----+        +----+
            | C1 |        | C2 |
            | S3 |        | S4 |
            +----+        +----+

      Configuration: quorum = 3
```

This is similar to the setup in Example 3, but here we run four Sentinels in the four boxes we have available. If the master M1 becomes unavailable, the other three Sentinels will perform the failover.

In theory this setup works by removing the box where C2 and S4 are running, and setting the quorum to 2. However, it is unlikely that we want HA on the Redis side without having high availability in our application layer.

### Sentinel, Docker, NAT, and possible issues

Docker uses a technique called port mapping: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using. This is useful in order to run multiple containers using the same ports, at the same time, on the same server.

Docker is not the only software system where this happens; there are other Network Address Translation setups where ports may be remapped, and sometimes not only ports but also IP addresses.

Remapping ports and addresses creates issues with Sentinel in two ways:

1. Sentinel auto-discovery of other Sentinels no longer works, since it is based on *hello* messages where each Sentinel announces at which port and IP address it is listening for connections.
However, Sentinels have no way to understand that an address or port is remapped, so they announce information that is not correct for other Sentinels to connect.
2. Replicas are listed in the `INFO` output of a Redis master in a similar way: the address is detected by the master checking the remote peer of the TCP connection, while the port is advertised by the replica itself during the handshake; however, the port may be wrong for the same reason as exposed in point 1.

Since Sentinels auto-detect replicas using the master's `INFO` output information, the detected replicas will not be reachable, and Sentinel will never be able to failover the master, since there are no good replicas from the point of view of the system. So there is currently no way to monitor with Sentinel a set of master and replica instances deployed with Docker, **unless you instruct Docker to map the port 1:1**.

For the first problem, in case you want to run a set of Sentinel instances using Docker with forwarded ports (or any other NAT setup where ports are remapped), you can use the following two Sentinel configuration directives in order to force Sentinel to announce a specific set of IP and port:

```
sentinel announce-ip <ip>
sentinel announce-port <port>
```
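For example, a Sentinel whose container port 26379 is published on the host as 10.0.0.5:26380 could announce the host-side address like this (the address and port values are illustrative):

```
sentinel announce-ip 10.0.0.5
sentinel announce-port 26380
```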
Note that Docker has the ability to run in *host networking mode* (check the `--net=host` option for more information). This should create no issues, since ports are not remapped in this setup.

### IP Addresses and DNS names

Older versions of Sentinel did not support host names and required IP addresses to be specified everywhere. Starting with version 6.2, Sentinel has *optional* support for host names.

**This capability is disabled by default. If you're going to enable DNS/hostnames support, please note:**

1. The name resolution configuration on your Redis and Sentinel nodes must be reliable and be able to resolve addresses quickly. Unexpected delays in address resolution may have a negative impact on Sentinel.
2. You should use hostnames everywhere and avoid mixing hostnames and IP addresses. To do that, use `replica-announce-ip <hostname>` and `sentinel announce-ip <hostname>` for all Redis and Sentinel instances, respectively.

Enabling the `resolve-hostnames` global configuration allows Sentinel to accept host names:

* As part of a `sentinel monitor` command
* As a replica address, if the replica uses a host name value for `replica-announce-ip`

Sentinel will accept host names as valid inputs and resolve them, but will still refer to IP addresses when announcing an instance, updating configuration files, etc.

Enabling the `announce-hostnames` global configuration makes Sentinel use host names instead. This affects replies to clients, values written in configuration files, the `REPLICAOF` command issued to replicas, etc. This behavior may not be compatible with all Sentinel clients, which may explicitly expect an IP address.

Using host names may be useful when clients use TLS to connect to instances and require a name rather than an IP address in order to perform certificate ASN matching.
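Putting it together, a hostname-based Sentinel setup might combine the directives above like this (the hostname is illustrative):

```
sentinel resolve-hostnames yes
sentinel announce-hostnames yes
sentinel announce-ip sentinel-1.example.internal
```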
## A quick tutorial

In the next sections of this document, all the details about the [_Sentinel API_](#sentinel-api), configuration and semantics will be covered incrementally. However for people that want to play with the system ASAP, this section is a tutorial that shows how to configure and interact with 3 Sentinel instances.

Here we assume that the instances are executed at port 5000, 5001, 5002. We also assume that you have a running Redis master at port 6379 with a replica running at port 6380. We will use the IPv4 loopback address 127.0.0.1 everywhere during the tutorial, assuming you are running the simulation on your personal computer.

The three Sentinel configuration files should look like the following:

```
port 5000
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
```

The other two configuration files will be identical, but using 5001 and 5002 as port numbers.

A few things to note about the above configuration:

* The master set is called `mymaster`. It identifies the master and its replicas. Since each *master set* has a different name, Sentinel can monitor different sets of masters and replicas at the same time.
* The quorum was set to the value of 2 (the last argument of the `sentinel monitor` configuration directive).
* The `down-after-milliseconds` value is 5000 milliseconds, that is 5 seconds, so the master will be detected as failing as soon as we don't receive any reply to our pings within this amount of time.

Once you start the three Sentinels, you'll see a few messages they log, like:

```
+monitor master mymaster 127.0.0.1 6379 quorum 2
```

This is a Sentinel event, and you can receive this kind of event via Pub/Sub if you `SUBSCRIBE` to the event name as specified later in
https://github.com/redis/redis-doc/blob/master//docs/management/sentinel.md
the [_Pub/Sub Messages_ section](#pubsub-messages). Sentinel generates and logs different events during failure detection and failover.

### Asking Sentinel about the state of a master

The most obvious thing to do with Sentinel to get started is to check if the master it is monitoring is doing well:

```
$ redis-cli -p 5000
127.0.0.1:5000> sentinel master mymaster
 1) "name"
 2) "mymaster"
 3) "ip"
 4) "127.0.0.1"
 5) "port"
 6) "6379"
 7) "runid"
 8) "953ae6a589449c13ddefaee3538d356d287f509b"
 9) "flags"
10) "master"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "735"
19) "last-ping-reply"
20) "735"
21) "down-after-milliseconds"
22) "5000"
23) "info-refresh"
24) "126"
25) "role-reported"
26) "master"
27) "role-reported-time"
28) "532439"
29) "config-epoch"
30) "1"
31) "num-slaves"
32) "1"
33) "num-other-sentinels"
34) "2"
35) "quorum"
36) "2"
37) "failover-timeout"
38) "60000"
39) "parallel-syncs"
40) "1"
```

As you can see, it prints a number of details about the master. A few are of particular interest to us:

1. `num-other-sentinels` is 2, so we know this Sentinel has already detected two more Sentinels for this master. If you check the logs you'll see the `+sentinel` events that were generated.
2. `flags` is just `master`. If the master was down we could expect to see the `s_down` or `o_down` flag here as well.
3. `num-slaves` is correctly set to 1, so Sentinel also detected that there is a replica attached to our master.
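The reply shown above is a flat array of alternating field names and values. A minimal sketch of turning such a reply into a dictionary on the client side (the helper name is illustrative and not part of any client library):

```python
def pairs_to_dict(flat_reply):
    """Pair up consecutive [field, value, field, value, ...] items."""
    it = iter(flat_reply)
    return dict(zip(it, it))

# A few fields from the reply shown above:
reply = ["name", "mymaster", "ip", "127.0.0.1", "port", "6379",
         "flags", "master", "num-slaves", "1", "num-other-sentinels", "2",
         "quorum", "2"]
info = pairs_to_dict(reply)
print(info["ip"], info["port"], info["quorum"])  # 127.0.0.1 6379 2
```

Most Redis client libraries already perform an equivalent conversion for you.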
In order to explore more about this instance, you may want to try the following two commands:

```
SENTINEL replicas mymaster
SENTINEL sentinels mymaster
```

The first will provide similar information about the replicas connected to the master, and the second about the other Sentinels.

### Obtaining the address of the current master

As we already specified, Sentinel also acts as a configuration provider for clients that want to connect to a set of master and replicas. Because of possible failovers or reconfigurations, clients have no idea about who is the currently active master for a given set of instances, so Sentinel exports an API to ask this question:

```
127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6379"
```

### Testing the failover

At this point our toy Sentinel deployment is ready to be tested. We can just kill our master and check if the configuration changes. To do so we can run:

```
redis-cli -p 6379 DEBUG sleep 30
```

This command will make our master unreachable, sleeping for 30 seconds. It basically simulates a master hanging for some reason.

If you check the Sentinel logs, you should be able to see a lot of action:

1. Each Sentinel detects the master is down with an `+sdown` event.
2. This event is later escalated to `+odown`, which means that multiple Sentinels agree about the fact the master is not reachable.
3. Sentinels vote for a Sentinel that will start the first failover attempt.
4. The failover happens.

If you ask again what is the current master address for `mymaster`, eventually we should get a different reply this time:

```
127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6380"
```

So far so good... At this point you may jump to create your Sentinel deployment, or read more to understand all the Sentinel commands and internals.
## Sentinel API

Sentinel provides an API in order to inspect its state, check the health of monitored masters and replicas, subscribe in order to receive specific notifications, and change the Sentinel configuration at run time.

By default Sentinel runs using TCP port 26379 (note that 6379 is the normal Redis port). Sentinels accept commands using the Redis protocol, so you can use `redis-cli` or any other unmodified Redis client in order to talk with Sentinel.

It is possible to directly query a Sentinel to check what is the state of the monitored Redis instances from its point of view, to see what other Sentinels it knows, and so forth. Alternatively, using Pub/Sub, it is possible to receive *push style* notifications from Sentinels every time some event happens, like a failover, or an instance entering an error condition, and so forth.

### Sentinel commands

The `SENTINEL` command is the main API for Sentinel. The following is the list of its subcommands (the minimal version is noted where applicable):

* **SENTINEL CONFIG GET `<name>`** (`>= 6.2`) Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** (`>= 6.2`) Set the value of a global Sentinel configuration parameter.
* **SENTINEL CKQUORUM `<master-name>`** Check if the current Sentinel configuration is able to reach the quorum needed to fail over a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok.
* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state.
Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restarts). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing.
* **SENTINEL FAILOVER `<master-name>`** Force a failover as if the master was not reachable, and without asking for agreement from the other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).
* **SENTINEL GET-MASTER-ADDR-BY-NAME `<master-name>`** Return the IP address and port number of the master with that name. If a failover is in progress or was terminated successfully for this master, it returns the address and port of the promoted replica.
* **SENTINEL INFO-CACHE** (`>= 3.2`) Return cached `INFO` output from masters and replicas.
* **SENTINEL IS-MASTER-DOWN-BY-ADDR** Check if the master specified by ip:port is down from the current Sentinel's point of view. This command is mostly for internal use.
* **SENTINEL MASTER `<master-name>`** Show the state and info of the specified master.
* **SENTINEL MASTERS** Show a list of monitored masters and their state.
* **SENTINEL MONITOR** Start Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL MYID** (`>= 6.2`) Return the ID of the Sentinel instance.
* **SENTINEL PENDING-SCRIPTS** This command returns information about pending scripts.
* **SENTINEL REMOVE** Stop Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL REPLICAS `<master-name>`** (`>= 5.0`) Show a list of replicas for this master, and their state.
* **SENTINEL SENTINELS `<master-name>`** Show a list of Sentinel instances for this master, and their state.
* **SENTINEL SET** Set Sentinel's monitoring configuration. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL SIMULATE-FAILURE (crash-after-election|crash-after-promotion|help)** (`>= 3.2`) This command simulates different Sentinel crash scenarios.
* **SENTINEL RESET `<pattern>`** This command will reset all the masters with a matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and sentinel already discovered and associated with the master.

For connection management and administration purposes, Sentinel supports the following subset of Redis' commands:

* **ACL** (`>= 6.2`) This command manages the Sentinel Access Control List. For more information refer to the [ACL](/topics/acl) documentation page and the [_Sentinel Access Control List authentication_ section](#sentinel-access-control-list-authentication).
* **AUTH** (`>= 5.0.1`) Authenticate a client connection. For more information refer to the `AUTH` command and the [_Configuring Sentinel instances with authentication_ section](#configuring-sentinel-instances-with-authentication).
* **CLIENT** This command manages client connections. For more information refer to its subcommands' pages.
* **COMMAND** (`>= 6.2`) This command returns information about commands.
For more information refer to the `COMMAND` command and its various subcommands.
* **HELLO** (`>= 6.0`) Switch the connection's protocol. For more information refer to the `HELLO` command.
* **INFO** Return information and statistics about the Sentinel server. For more information see the `INFO` command.
* **PING** This command simply returns PONG.
* **ROLE** This command returns the string "sentinel" and a list of monitored masters. For more information refer to the `ROLE` command.
* **SHUTDOWN** Shut down the Sentinel instance.

Lastly, Sentinel also supports the `SUBSCRIBE`, `UNSUBSCRIBE`, `PSUBSCRIBE` and `PUNSUBSCRIBE` commands. Refer to the [_Pub/Sub Messages_ section](#pubsub-messages) for more details.

### Reconfiguring Sentinel at Runtime

Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple Sentinels you should apply the changes to all of your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not automatically propagate the changes to the other Sentinels in the network.

The following is a list of `SENTINEL` subcommands used in order to update the configuration of a Sentinel instance.

* **SENTINEL MONITOR `<name>` `<ip>` `<port>` `<quorum>`** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical to the `sentinel monitor` configuration directive in the `sentinel.conf` configuration file, with the difference that you can't use a hostname as the `<ip>`; you need to provide an IPv4 or IPv6 address.
* **SENTINEL REMOVE `<name>`** is used in order to remove the specified master: the master will no longer be monitored, and will be totally removed from the internal state of the Sentinel, so it will no longer be listed by `SENTINEL masters` and so forth.
* **SENTINEL SET `<name>` [`<option>` `<value>` ...]** The SET command is very similar to the `CONFIG SET`
command of Redis, and is used in order to change configuration parameters of a specific master. Multiple option / value pairs can be specified (or none at all). All the configuration parameters that can be configured via `sentinel.conf` are also configurable using the SET command.

The following is an example of a `SENTINEL SET` command used to modify the `down-after-milliseconds` configuration of a master called `objects-cache`:

```
SENTINEL SET objects-cache-master down-after-milliseconds 1000
```

As already stated, `SENTINEL SET` can be used to set all the configuration parameters that are settable in the startup configuration file. Moreover it is possible to change just the master quorum configuration without removing and re-adding the master with `SENTINEL REMOVE` followed by `SENTINEL MONITOR`, but simply by using:

```
SENTINEL SET objects-cache-master quorum 5
```

Note that there is no equivalent GET command, since `SENTINEL MASTER` provides all the configuration parameters in a simple to parse format (as a field/value pairs array).

Starting with Redis version 6.2, Sentinel also allows getting and setting global configuration parameters, which prior to that were only supported through the configuration file.

* **SENTINEL CONFIG GET `<name>`** Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** Set the value of a global Sentinel configuration parameter.

Global parameters that can be manipulated include:

* `resolve-hostnames`, `announce-hostnames`. See [_IP addresses and DNS names_](#ip-addresses-and-dns-names).
* `announce-ip`, `announce-port`. See [_Sentinel, Docker, NAT, and possible issues_](#sentinel-docker-nat-and-possible-issues).
* `sentinel-user`, `sentinel-pass`. See [_Configuring Sentinel instances with authentication_](#configuring-sentinel-instances-with-authentication).
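The `<name>` argument accepted by `SENTINEL CONFIG GET` is a glob-style wildcard. As an illustration of the matching semantics only (this is not Sentinel's implementation), Python's `fnmatch` behaves the same way for simple patterns:

```python
from fnmatch import fnmatch

# The global parameters listed above:
params = ["resolve-hostnames", "announce-hostnames", "announce-ip",
          "announce-port", "sentinel-user", "sentinel-pass"]

# 'announce-*' selects every announce-related global parameter
matched = [p for p in params if fnmatch(p, "announce-*")]
print(matched)  # ['announce-hostnames', 'announce-ip', 'announce-port']
```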
### Adding or removing Sentinels

Adding a new Sentinel to your deployment is a simple process because of the auto-discovery mechanism implemented by Sentinel. All you need to do is to start the new Sentinel configured to monitor the currently active master. Within 10 seconds the Sentinel will acquire the list of other Sentinels and the set of replicas attached to the master.

If you need to add multiple Sentinels at once, it is suggested to add them one after the other, waiting for all the other Sentinels to already know about the first one before adding the next. This is useful in order to still guarantee that majority can be achieved only on one side of a partition, in case failures happen while the new Sentinels are being added. This can be easily achieved by adding every new Sentinel with a 30 second delay, in the absence of network partitions.

At the end of the process it is possible to use the command `SENTINEL MASTER mastername` in order to check if all the Sentinels agree about the total number of Sentinels monitoring the master.

Removing a Sentinel is a bit more complex: **Sentinels never forget already seen Sentinels**, even if they are not reachable for a long time, since we don't want to dynamically change the majority needed to authorize a failover and the creation of a new configuration number. So in order to remove a Sentinel the following steps should be performed in the absence of network partitions:

1. Stop the Sentinel process of the Sentinel you want to remove.
2. Send a `SENTINEL RESET *` command to all the other Sentinel instances (instead of `*` you can use the exact master name if you want to reset just a single master). Do this one instance after the other, waiting at least 30 seconds between instances.
3. Check that all the Sentinels agree about the number of Sentinels currently active, by inspecting the output of `SENTINEL MASTER mastername` on every Sentinel.
### Removing the old master or unreachable replicas

Sentinels never forget about the replicas of a given master, even when they are unreachable for a long time. This is useful, because Sentinels should be able to correctly reconfigure a returning replica after a network partition or a failure event.

Moreover, after a failover, the failed over master is virtually added as a replica of the new master; this way it will be reconfigured to replicate with the new master as soon as it becomes available again.

However sometimes you want to remove a replica (that may be the old master) forever from the list of replicas monitored by Sentinels. In order to do this, you need to send a `SENTINEL RESET mastername` command to all the Sentinels: they'll refresh the list of replicas within the next 10 seconds, only adding the ones listed as correctly replicating from the current master's `INFO` output.

### Pub/Sub messages

A client can use a Sentinel as a Redis-compatible Pub/Sub server (but you can't use `PUBLISH`) in order to `SUBSCRIBE` or `PSUBSCRIBE` to channels and get notified about specific events.

The channel name is the same as the name of the event. For instance the channel named `+sdown` will receive all the notifications related to instances entering an `SDOWN` condition (SDOWN means the instance is no longer reachable from the point of view of the Sentinel you are querying).

To get all the messages simply subscribe using `PSUBSCRIBE *`.

The following is a list of channels and message formats you can receive using this API. The first word is the channel / event name, the rest is the format of the data.
Note: where *instance details* is specified, it means that the following arguments are provided to identify the target instance:

```
<instance-type> <name> <ip> <port> @ <master-name> <master-ip> <master-port>
```

The part identifying the master (from the @ argument to the end) is optional and is only specified if the instance is not a master itself.

* **+reset-master** `<instance details>` -- The master was reset.
* **+slave** `<instance details>` -- A new replica was detected and attached.
* **+failover-state-reconf-slaves** `<instance details>` -- Failover state changed to `reconf-slaves` state.
* **+failover-detected** `<instance details>` -- A failover started by another Sentinel or any other external entity was detected (an attached replica turned into a master).
* **+slave-reconf-sent** `<instance details>` -- The leader sentinel sent the `REPLICAOF` command to this instance in order to reconfigure it for the new replica.
* **+slave-reconf-inprog** `<instance details>` -- The replica being reconfigured showed to be a replica of the new master ip:port pair, but the synchronization process is not yet complete.
* **+slave-reconf-done** `<instance details>` -- The replica is now synchronized with the new master.
* **-dup-sentinel** `<instance details>` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted).
* **+sentinel** `<instance details>` -- A new sentinel for this master was detected and attached.
* **+sdown** `<instance details>` -- The specified instance is now in Subjectively Down state.
* **-sdown** `<instance details>` -- The specified instance is no longer in Subjectively Down state.
* **+odown** `<instance details>` -- The specified instance is now in Objectively Down state.
* **-odown** `<instance details>` -- The specified instance is no longer in Objectively Down state.
* **+new-epoch** `<instance details>` -- The current epoch was updated.
* **+try-failover** `<instance details>` -- New failover in progress, waiting to be elected by the majority.
* **+elected-leader** `<instance details>` -- Won the
election for the specified epoch, can do the failover.
* **+failover-state-select-slave** `<instance details>` -- New failover state is `select-slave`: we are trying to find a suitable replica for promotion.
* **no-good-slave** `<instance details>` -- There is no good replica to promote. Currently we'll retry after some time, but probably this will change and the state machine will abort the failover entirely in this case.
* **selected-slave** `<instance details>` -- We found the specified good replica to promote.
* **failover-state-send-slaveof-noone** `<instance details>` -- We are trying to reconfigure the promoted replica as master, waiting for it to switch.
* **failover-end-for-timeout** `<instance details>` -- The failover terminated for timeout. Replicas will eventually be configured to replicate with the new master anyway.
* **failover-end** `<instance details>` -- The failover terminated with success. All the replicas appear to be reconfigured to replicate with the new master.
* **switch-master** `<master name> <oldip> <oldport> <newip> <newport>` -- The master's new IP and address is the specified one after a configuration change. This is **the message most external users are interested in**.
* **+tilt** -- Tilt mode entered.
* **-tilt** -- Tilt mode exited.

### Handling of -BUSY state

The -BUSY error is returned by a Redis instance when a Lua script is running for more time than the configured Lua script time limit. When this happens, before triggering a failover, Redis Sentinel will try to send a `SCRIPT KILL` command, which will only succeed if the script was read-only. If the instance is still in an error condition after this attempt, it will eventually be failed over.
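Returning to the `switch-master` event described earlier: clients that track the master address need to split its space-separated payload. A minimal sketch (the function name and return shape are illustrative, not part of any client library):

```python
def parse_switch_master(payload):
    """Split '<master name> <oldip> <oldport> <newip> <newport>'."""
    name, old_ip, old_port, new_ip, new_port = payload.split()
    return name, (old_ip, int(old_port)), (new_ip, int(new_port))

# Payload as published after the failover in the tutorial above:
name, old, new = parse_switch_master("mymaster 127.0.0.1 6379 127.0.0.1 6380")
print(name, old, new)  # mymaster ('127.0.0.1', 6379) ('127.0.0.1', 6380)
```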
### Replicas priority

Redis instances have a configuration parameter called `replica-priority`. This information is exposed by Redis replica instances in their `INFO` output, and Sentinel uses it in order to pick a replica among the ones that can be used to fail over a master:

1. If the replica priority is set to 0, the replica is never promoted to master.
2. Replicas with a *lower* priority number are preferred by Sentinel.

For example if there is a replica S1 in the same data center as the current master, and another replica S2 in another data center, it is possible to set S1 with a priority of 10 and S2 with a priority of 100, so that if the master fails and both S1 and S2 are available, S1 will be preferred.

For more information about the way replicas are selected, please check the [_Replica selection and priority_ section](#replica-selection-and-priority) of this documentation.

### Sentinel and Redis authentication

When the master is configured to require authentication from clients, as a security measure, replicas need to also be aware of the credentials in order to authenticate with the master and create the master-replica connection used for the asynchronous replication protocol.

### Redis Access Control List authentication

Starting with Redis 6, user authentication and permissions are managed with the [Access Control List (ACL)](/topics/acl).

In order for Sentinels to connect to Redis server instances when they are configured with ACL, the Sentinel configuration must include the following directives:

```
sentinel auth-user <master-group-name> <username>
sentinel auth-pass <master-group-name> <password>
```
Where `<username>` and `<password>` are the username and password for accessing the group's instances. These credentials should be provisioned on all of the group's Redis instances with the minimal control permissions. For example:

```
127.0.0.1:6379> ACL SETUSER sentinel-user ON >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill
```

### Redis password-only authentication

Until Redis 6, authentication is achieved using the following configuration directives:

* `requirepass` in the master, in order to set the authentication password, and to make sure the instance will not process requests from non-authenticated clients.
* `masterauth` in the replicas, in order for the replicas to authenticate with the master and correctly replicate data from it.

When Sentinel is used, there is not a single master, since after a failover replicas may play the role of masters, and old masters can be reconfigured in order to act as replicas, so what you want to do is to set the above directives in all your instances, both masters and replicas.

This is also usually a sane setup since you don't want to protect data only in the master, having the same data accessible in the replicas.

However, in the uncommon case where you need a replica that is accessible without authentication, you can still do it by setting up **a replica priority of zero** (to prevent this replica from being promoted to master), and configuring in this replica only the `masterauth` directive, without using the `requirepass` directive, so that data will be readable by unauthenticated clients.
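A sketch of such a replica's `redis.conf` (the password is a placeholder):

```
# Unauthenticated read-only replica that is never promoted to master
replica-priority 0
masterauth your_master_password_here
# no requirepass here: clients can read without authenticating
```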
In order for Sentinels to connect to Redis server instances when they are configured with `requirepass`, the Sentinel configuration must include the `sentinel auth-pass` directive, in the format:

```
sentinel auth-pass <master-group-name> <password>
```

### Configuring Sentinel instances with authentication

Sentinel instances themselves can be secured by requiring clients to authenticate via the `AUTH` command. Starting with Redis 6.2, the [Access Control List (ACL)](/topics/acl) is available, whereas previous versions (starting with Redis 5.0.1) support password-only authentication.

Note that Sentinel's authentication configuration should be **applied to each of the instances** in your deployment, and **all instances should use the same configuration**. Furthermore, ACL and password-only authentication should not be used together.

### Sentinel Access Control List authentication

The first step in securing a Sentinel instance with ACL is preventing any unauthorized access to it. To do that, you'll need to disable the default superuser (or at the very least set it up with a strong password) and create a new one, allowing it access to Pub/Sub channels:

```
127.0.0.1:5000> ACL SETUSER admin ON >admin-password allchannels +@all
OK
127.0.0.1:5000> ACL SETUSER default off
OK
```

The default user is used by Sentinel to connect to other instances. You can provide the credentials of another superuser with the following configuration directives:

```
sentinel sentinel-user <username>
sentinel sentinel-pass <password>
```

Where `<username>` and `<password>` are the Sentinel's superuser and password, respectively (e.g. `admin` and `admin-password` in the example above).
Lastly, for authenticating incoming client connections, you can create a Sentinel restricted user profile such as the following:

```
127.0.0.1:5000> ACL SETUSER sentinel-user ON >user-password -@all +auth +client|getname +client|id +client|setname +command +hello +ping +role +sentinel|get-master-addr-by-name +sentinel|master +sentinel|myid +sentinel|replicas +sentinel|sentinels
```

Refer to the documentation of your Sentinel client of choice for further information.

### Sentinel password-only authentication

To use Sentinel with password-only authentication, add the `requirepass` configuration directive to **all** your Sentinel instances as follows:

```
requirepass "your_password_here"
```

When configured this way, Sentinels will do two things:

1. A password will be required from clients in order to send commands to Sentinels. This is obvious since this is how such a configuration directive works in Redis in general.
2. Moreover the
same password configured to access the local Sentinel will be used by this Sentinel instance in order to authenticate to all the other Sentinel instances it connects to. This means that **you will have to configure the same `requirepass` password in all the Sentinel instances**. This way every Sentinel can talk with every other Sentinel without any need to configure, for each Sentinel, the password to access all the other Sentinels, which would be very impractical.

Before using this configuration, make sure your client library can send the `AUTH` command to Sentinel instances.

Sentinel clients implementation
---

Sentinel requires explicit client support, unless the system is configured to execute a script that performs a transparent redirection of all the requests to the new master instance (virtual IP or other similar systems). The topic of client libraries implementation is covered in the document [Sentinel clients guidelines](/topics/sentinel-clients).

## More advanced concepts

In the following sections we'll cover a few details about how Sentinel works, without resorting to implementation details and algorithms that will be covered in the final part of this document.

### SDOWN and ODOWN failure state

Redis Sentinel has two different concepts of *being down*: one is called a *Subjectively Down* condition (SDOWN) and is a down condition that is local to a given Sentinel instance.
Another is called an *Objectively Down* condition (ODOWN) and is reached when enough Sentinels (at least the number configured as the `quorum` parameter of the monitored master) have an SDOWN condition, and get feedback from other Sentinels using the `SENTINEL is-master-down-by-addr` command.

From the point of view of a Sentinel, an SDOWN condition is reached when it does not receive a valid reply to PING requests for the number of seconds specified in the configuration as the `is-master-down-after-milliseconds` parameter.

An acceptable reply to PING is one of the following:

* PING replied with +PONG.
* PING replied with -LOADING error.
* PING replied with -MASTERDOWN error.

Any other reply (or no reply at all) is considered not valid. However note that **a logical master that advertises itself as a replica in the INFO output is considered to be down**.

Note that SDOWN requires that no acceptable reply is received for the whole configured interval, so for instance if the interval is 30000 milliseconds (30 seconds) and we receive an acceptable ping reply every 29 seconds, the instance is considered to be working.

SDOWN is not enough to trigger a failover: it only means a single Sentinel believes a Redis instance is not available. To trigger a failover, the ODOWN state must be reached.

To switch from SDOWN to ODOWN no strong consensus algorithm is used, but just a form of gossip: if a given Sentinel gets reports that a master is not working from enough Sentinels **in a given time range**, the SDOWN is promoted to ODOWN. If this acknowledgment later goes missing, the flag is cleared.

A more strict authorization that uses an actual majority is required in order to really start the failover, but no failover can be triggered without reaching the ODOWN state.

The ODOWN condition **only applies to masters**. For other kinds of instances Sentinel doesn't require any action, so the ODOWN state is never reached for replicas and other Sentinels; only SDOWN is.
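The SDOWN timing rule can be sketched as a small predicate (function and parameter names here are illustrative, not Sentinel internals):

```python
def is_sdown(last_ok_reply, now, down_after_milliseconds):
    """SDOWN is flagged only when no acceptable PING reply has been
    received for the whole configured interval (times in seconds)."""
    return (now - last_ok_reply) * 1000 > down_after_milliseconds

# With a 30000 ms interval, an acceptable reply every 29 seconds keeps
# the instance in the OK state:
print(is_sdown(last_ok_reply=100.0, now=129.0, down_after_milliseconds=30000))  # False
print(is_sdown(last_ok_reply=100.0, now=131.0, down_after_milliseconds=30000))  # True
```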
However SDOWN also has semantic implications.
For example a replica in SDOWN state is not selected to be promoted by a Sentinel performing a failover.

Sentinels and replicas auto discovery
---

Sentinels stay connected with other Sentinels in order to reciprocally check the availability of each other, and to exchange messages. However you don't need to configure a list of other Sentinel addresses in every Sentinel instance you run, as Sentinel uses the Redis instances' Pub/Sub capabilities in order to discover the other Sentinels that are monitoring the same masters and replicas.

This feature is implemented by sending *hello messages* into the channel named `__sentinel__:hello`.

Similarly you don't need to configure the list of replicas attached to a master, as Sentinel will auto discover this list by querying Redis.

* Every Sentinel publishes a message to every monitored master and replica Pub/Sub channel `__sentinel__:hello`, every two seconds, announcing its presence with ip, port, runid.
* Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master and replica, looking for unknown Sentinels. When new Sentinels are detected, they are added as Sentinels of this master.
* Hello messages also include the full current configuration of the master. If the receiving Sentinel has a configuration for a given master which is older than the one received, it updates to the new configuration immediately.
* Before adding a new Sentinel to a master, a Sentinel always checks if there is already a Sentinel with the same runid or the same address (ip and port pair). In that case all the matching Sentinels are removed, and the new one is added.
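A hello message is a flat comma-separated payload. The sketch below parses one; the exact field order shown is an assumption for illustration (check the Sentinel source for the authoritative layout):

```python
def parse_hello(payload):
    """Split a __sentinel__:hello payload into named fields.
    The field order used here is assumed for illustration only."""
    (ip, port, runid, current_epoch,
     master_name, master_ip, master_port, master_epoch) = payload.split(",")
    return {
        "sentinel": (ip, int(port), runid),
        "current_epoch": int(current_epoch),
        "master": (master_name, master_ip, int(master_port)),
        "master_epoch": int(master_epoch),
    }

msg = parse_hello("10.0.0.5,26379,abcdef0123,7,mymaster,10.0.0.1,6379,7")
print(msg["sentinel"])  # ('10.0.0.5', 26379, 'abcdef0123')
```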
Sentinel reconfiguration of instances outside the failover procedure
---

Even when no failover is in progress, Sentinels will always try to set the current configuration on monitored instances. Specifically:

* Replicas (according to the current configuration) that claim to be masters will be configured as replicas to replicate with the current master.
* Replicas connected to a wrong master will be reconfigured to replicate with the right master.

For Sentinels to reconfigure replicas, the wrong configuration must be observed for some time that is greater than the period used to broadcast new configurations. This prevents Sentinels with a stale configuration (for example because they just rejoined from a partition) from trying to change the replicas' configuration before receiving an update.

Also note how the semantics of always trying to impose the current configuration makes the failover more resistant to partitions:

* Masters failed over are reconfigured as replicas when they return available.
* Replicas partitioned away during a partition are reconfigured once reachable.

The important lesson to remember about this section is: **Sentinel is a system where each process will always try to impose the last logical configuration to the set of monitored instances**.

### Replica selection and priority

When a Sentinel instance is ready to perform a failover, since the master is in `ODOWN` state and the Sentinel received the authorization to failover from the majority of the known Sentinel instances, a suitable replica needs to be selected.

The replica selection process evaluates the following information about replicas:

1. Disconnection time from the master.
2. Replica priority.
3. Replication offset processed.
4. Run ID.

A replica that is found to be disconnected from the master for more than ten times the configured master timeout (`down-after-milliseconds` option),
plus the time the master has also been unavailable from the point of view of the Sentinel doing the failover, is considered to be unsuitable for the failover and is skipped.

In more rigorous terms, a replica whose `INFO` output suggests it has been disconnected from the master for more than:

```
(down-after-milliseconds * 10) + milliseconds_since_master_is_in_SDOWN_state
```

is considered to be unreliable and is disregarded entirely.

The replica selection only considers the replicas that passed the above test, and sorts them based on the above criteria, in the following order:

1. The replicas are sorted by `replica-priority` as configured in the `redis.conf` file of the Redis instance. A lower priority is preferred.
2. If the priority is the same, the replication offset processed by the replica is checked, and the replica that received more data from the master is selected.
3. If multiple replicas have the same priority and processed the same data from the master, a further check is performed, selecting the replica with the lexicographically smaller run ID. Having a lower run ID is not a real advantage for a replica, but it is useful to make the process of replica selection more deterministic, instead of selecting a random replica.

In most cases, `replica-priority` does not need to be set explicitly, so all instances will use the same default value. If there is a particular failover preference, `replica-priority` must be set on all instances, including masters, as a master may become a replica at some future point in time, and it will then need the proper `replica-priority` settings.
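The filtering and sorting described above can be sketched as follows (the dictionary field names are hypothetical; this is not the Sentinel implementation):

```python
def select_replica(replicas, down_after_ms, ms_since_sdown):
    """Drop replicas disconnected for too long, then sort by priority
    (lower wins), replication offset (higher wins), and run ID
    (lexicographically smaller wins)."""
    max_down = down_after_ms * 10 + ms_since_sdown
    eligible = [r for r in replicas if r["link_down_ms"] <= max_down]
    eligible.sort(key=lambda r: (r["priority"], -r["offset"], r["runid"]))
    return eligible[0] if eligible else None

replicas = [
    {"runid": "bbb", "priority": 100, "offset": 500, "link_down_ms": 1000},
    {"runid": "aaa", "priority": 100, "offset": 900, "link_down_ms": 1000},
    {"runid": "ccc", "priority": 100, "offset": 900, "link_down_ms": 999999},
]
best = select_replica(replicas, down_after_ms=30000, ms_since_sdown=5000)
print(best["runid"])  # aaa: same priority, highest offset, still connected
```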
A Redis instance can be configured with a special `replica-priority` of zero in order to be **never selected** by Sentinels as the new master. However a replica configured in this way will still be reconfigured by Sentinels in order to replicate with the new master after a failover; the only difference is that it will never become a master itself.

## Algorithms and internals

In the following sections we will explore the details of Sentinel behavior. It is not strictly needed for users to be aware of all the details, but a deep understanding of Sentinel may help to deploy and operate Sentinel in a more effective way.

### Quorum

The previous sections showed that every master monitored by Sentinel is associated with a configured **quorum**. It specifies the number of Sentinel processes that need to agree about the unreachability or error condition of the master in order to trigger a failover.

However, after the failover is triggered, in order for the failover to actually be performed, **at least a majority of Sentinels must authorize the Sentinel to failover**. Sentinel never performs a failover in the partition where a minority of Sentinels exist.

Let's try to make things a bit more clear:

* Quorum: the number of Sentinel processes that need to detect an error condition in order for a master to be flagged as **ODOWN**.
* The failover is triggered by the **ODOWN** state.
* Once the failover is triggered, the Sentinel trying to failover is required to ask for authorization from a majority of Sentinels (or more than the majority if the quorum is set to a number greater than the majority).

The difference may seem subtle but is actually quite simple
to understand and use. For example if you have 5 Sentinel instances, and the quorum is set to 2, a failover will be triggered as soon as 2 Sentinels believe that the master is not reachable; however one of the two Sentinels will be able to failover only if it gets authorization from at least 3 Sentinels.

If instead the quorum is configured to 5, all the Sentinels must agree about the master error condition, and the authorization from all Sentinels is required in order to failover.

This means that the quorum can be used to tune Sentinel in two ways:

1. If the quorum is set to a value smaller than the majority of Sentinels we deploy, we are basically making Sentinel more sensitive to master failures, triggering a failover as soon as even just a minority of Sentinels is no longer able to talk with the master.
2. If the quorum is set to a value greater than the majority of Sentinels, we are making Sentinel able to failover only when a very large number (larger than the majority) of well connected Sentinels agree about the master being down.

### Configuration epochs

Sentinels require authorization from a majority in order to start a failover for a few important reasons:

When a Sentinel is authorized, it gets a unique **configuration epoch** for the master it is failing over. This is a number that will be used to version the new configuration after the failover is completed. Because a majority agreed that a given version was assigned to a given Sentinel, no other Sentinel will be able to use it. This means that every configuration of every failover is versioned with a unique version. We'll see why this is so important.
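The two thresholds involved (quorum for ODOWN, majority for authorization) can be sketched like this, as a simplification of the rules above with illustrative names:

```python
def odown_reached(sdown_reports, quorum):
    """ODOWN: at least `quorum` Sentinels report the master as SDOWN."""
    return sdown_reports >= quorum

def failover_authorized(votes, total_sentinels, quorum):
    """The failover itself needs a majority of all Sentinels, or
    `quorum` votes when the quorum is larger than the majority."""
    return votes >= max(total_sentinels // 2 + 1, quorum)

# 5 Sentinels, quorum 2: ODOWN is reached after 2 reports, but 3
# votes are needed to actually authorize the failover.
print(odown_reached(2, 2))            # True
print(failover_authorized(2, 5, 2))   # False
print(failover_authorized(3, 5, 2))   # True
```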
Moreover Sentinels have a rule: if a Sentinel voted for another Sentinel for the failover of a given master, it will wait some time before trying to failover the same master again. This delay is the `2 * failover-timeout` you can configure in `sentinel.conf`. This means that Sentinels will not try to failover the same master at the same time: the first to ask to be authorized will try; if it fails, another will try after some time, and so forth.

Redis Sentinel guarantees the *liveness* property that if a majority of Sentinels are able to talk, eventually one will be authorized to failover if the master is down.

Redis Sentinel also guarantees the *safety* property that every Sentinel will failover the same master using a different *configuration epoch*.

### Configuration propagation

Once a Sentinel is able to failover a master successfully, it will start to broadcast the new configuration so that the other Sentinels will update their information about a given master.

For a failover to be considered successful, it requires that the Sentinel was able to send the `REPLICAOF NO ONE` command to the selected replica, and that the switch to master was later observed in the `INFO` output of the master.

At this point, even if the reconfiguration of the replicas is in progress, the failover is considered to be successful, and all the Sentinels are required to start reporting the new configuration.

The way a new configuration is propagated is the reason why we need that every Sentinel failover is authorized with
a different version number (configuration epoch).

Every Sentinel continuously broadcasts its version of the configuration of a master using Redis Pub/Sub messages, both in the master and all the replicas. At the same time all the Sentinels wait for messages to see what configuration is advertised by the other Sentinels.

Configurations are broadcast in the `__sentinel__:hello` Pub/Sub channel.

Because every configuration has a different version number, the greater version always wins over smaller versions.

So for example the configuration for the master `mymaster` starts with all the Sentinels believing the master is at 192.168.1.50:6379. This configuration has version 1. After some time a Sentinel is authorized to failover with version 2. If the failover is successful, it will start to broadcast a new configuration, let's say 192.168.1.50:9000, with version 2. All the other instances will see this configuration and will update their configuration accordingly, since the new configuration has a greater version.

This means that Sentinel guarantees a second liveness property: a set of Sentinels that are able to communicate will all converge to the same configuration with the higher version number.

Basically if the net is partitioned, every partition will converge to the higher local configuration. In the special case of no partitions, there is a single partition and every Sentinel will agree about the configuration.

### Consistency under partitions

Redis Sentinel configurations are eventually consistent, so every partition will converge to the higher configuration available.
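The "greater version wins" rule amounts to a one-line merge function (the field names here are hypothetical):

```python
def merge_master_config(current, advertised):
    """Adopt an advertised master configuration only if its
    configuration epoch is greater than the one we hold."""
    return advertised if advertised["epoch"] > current["epoch"] else current

old = {"addr": "192.168.1.50:6379", "epoch": 1}
new = {"addr": "192.168.1.50:9000", "epoch": 2}
print(merge_master_config(old, new)["addr"])  # 192.168.1.50:9000
print(merge_master_config(new, old)["addr"])  # 192.168.1.50:9000
```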
However in a real-world system using Sentinel there are three different players:

* Redis instances.
* Sentinel instances.
* Clients.

In order to define the behavior of the system we have to consider all three.

The following is a simple network where there are 3 nodes, each running a Redis instance and a Sentinel instance:

```
            +-------------+
            | Sentinel 1  |----- Client A
            | Redis 1 (M) |
            +-------------+
                    |
                    |
+-------------+     |          +------------+
| Sentinel 2  |-----+-- // ----| Sentinel 3 |----- Client B
| Redis 2 (S) |                | Redis 3 (M)|
+-------------+                +------------+
```

In this system the original state was that Redis 3 was the master, while Redis 1 and 2 were replicas. A partition occurred isolating the old master. Sentinels 1 and 2 started a failover promoting Redis 1 as the new master.

The Sentinel properties guarantee that Sentinel 1 and 2 now have the new configuration for the master. However Sentinel 3 still has the old configuration since it lives in a different partition.

We know that Sentinel 3 will get its configuration updated when the network partition heals; however what happens during the partition if there are clients partitioned with the old master?

Clients will still be able to write to Redis 3, the old master. When the partition rejoins, Redis 3 will be turned into a replica of Redis 1, and all the data written during the partition will be lost.

Depending on your configuration you may or may not want this scenario to happen:

* If you are using Redis as a cache, it could be handy that Client B is still able to write to the old master, even if its data will be lost.
* If you are using Redis as a store, this is not good and you need to configure the system in order to partially prevent this problem.

Since
Redis is asynchronously replicated, there is no way to totally prevent data loss in this scenario; however you can bound the divergence between Redis 3 and Redis 1 using the following Redis configuration options:

```
min-replicas-to-write 1
min-replicas-max-lag 10
```

With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous, *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.

Using this configuration, Redis 3 in the above example will become unavailable after 10 seconds. When the partition heals, the Sentinel 3 configuration will converge to the new one, and Client B will be able to fetch a valid configuration and continue.

In general Redis + Sentinel as a whole is an **eventually consistent system** where the merge function is **last failover wins**, and the data from old masters is discarded to replicate the data of the current master, so there is always a window for losing acknowledged writes. This is due to Redis asynchronous replication and the discarding nature of the "virtual" merge function of the system. Note that this is not a limitation of Sentinel itself: if you orchestrate the failover with a strongly consistent replicated state machine, the same properties still apply. There are only two ways to avoid losing acknowledged writes:

1. Use synchronous replication (and a proper consensus algorithm to run a replicated state machine).
2.
Use an eventually consistent system where different versions of the same object can be merged.

Redis currently is not able to use either of the above systems, and they are currently outside the development goals. However there are proxies implementing solution "2" on top of Redis stores, such as SoundCloud [Roshi](https://github.com/soundcloud/roshi), or Netflix [Dynomite](https://github.com/Netflix/dynomite).

Sentinel persistent state
---

Sentinel state is persisted in the Sentinel configuration file. For example every time a new configuration is received, or created (leader Sentinels), for a master, the configuration is persisted on disk together with the configuration epoch. This means that it is safe to stop and restart Sentinel processes.

### TILT mode

Redis Sentinel is heavily dependent on the computer time: for instance in order to understand if an instance is available it remembers the time of the latest successful reply to the PING command, and compares it with the current time to understand how old it is.

However if the computer time changes in an unexpected way, or if the computer is very busy, or the process is blocked for some reason, Sentinel may start to behave in an unexpected way.

The TILT mode is a special "protection" mode that a Sentinel can enter when something odd is detected that can lower the reliability of the system. The Sentinel timer interrupt is normally called 10 times per second, so we expect that more or less 100 milliseconds will elapse between two calls to the timer interrupt.

What a Sentinel does is to register the previous time the timer interrupt was called, and compare it with the current call: if the time difference is negative or unexpectedly big (2 seconds or more)
the TILT mode is entered (or if it was already entered, the exit from the TILT mode is postponed).

When in TILT mode the Sentinel will continue to monitor everything, but:

* It stops acting at all.
* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests as the ability to detect a failure is no longer trusted.

If everything appears to be normal for 30 seconds, the TILT mode is exited.

In the Sentinel TILT mode, if we send the INFO command, we could get the following response:

```
$ redis-cli -p 26379
127.0.0.1:26379> info
(Other information from Sentinel server skipped.)

# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_tilt_since_seconds:-1
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=0,sentinels=1
```

The field "sentinel_tilt_since_seconds" indicates how many seconds the Sentinel has already been in TILT mode. If it is not in TILT mode, the value will be -1.

Note that in some ways TILT mode could be replaced using the monotonic clock API that many kernels offer. However it is still not clear whether this is a good solution, since the current system avoids issues in case the process is just suspended or not executed by the scheduler for a long time.

**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.
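The TILT trigger described above is essentially a sanity check on the gap between two timer ticks; a minimal sketch (illustrative names, not Sentinel internals):

```python
def tilt_triggered(previous_tick, current_tick):
    """The timer interrupt fires ~10 times/second; a negative or
    unexpectedly large (>= 2 s) gap between ticks enters TILT mode."""
    delta = current_tick - previous_tick
    return delta < 0 or delta >= 2.0

print(tilt_triggered(10.0, 10.1))  # False: normal ~100 ms gap
print(tilt_triggered(10.0, 13.0))  # True: the process stalled
print(tilt_triggered(10.0, 9.5))   # True: the clock jumped backwards
```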
Redis is able to start without a configuration file using a built-in default configuration, however this setup is only recommended for testing and development purposes.

The proper way to configure Redis is by providing a Redis configuration file, usually called `redis.conf`.

The `redis.conf` file contains a number of directives that have a very simple format:

```
keyword argument1 argument2 ... argumentN
```

This is an example of a configuration directive:

```
replicaof 127.0.0.1 6380
```

It is possible to provide strings containing spaces as arguments using (double or single) quotes, as in the following example:

```
requirepass "hello world"
```

A single-quoted string can contain characters escaped by backslashes, and double-quoted strings can additionally include any ASCII symbols encoded using backslashed hexadecimal notation, e.g. "\xff".

The list of configuration directives, and their meaning and intended usage, is available in the self-documented example redis.conf shipped with the Redis distribution.

* The self documented [redis.conf for Redis 7.2](https://raw.githubusercontent.com/redis/redis/7.2/redis.conf).
* The self documented [redis.conf for Redis 7.0](https://raw.githubusercontent.com/redis/redis/7.0/redis.conf).
* The self documented [redis.conf for Redis 6.2](https://raw.githubusercontent.com/redis/redis/6.2/redis.conf).
* The self documented [redis.conf for Redis 6.0](https://raw.githubusercontent.com/redis/redis/6.0/redis.conf).
* The self documented [redis.conf for Redis 5.0](https://raw.githubusercontent.com/redis/redis/5.0/redis.conf).
* The self documented [redis.conf for Redis 4.0](https://raw.githubusercontent.com/redis/redis/4.0/redis.conf).
* The self documented [redis.conf for Redis 3.2](https://raw.githubusercontent.com/redis/redis/3.2/redis.conf).
* The self documented [redis.conf for Redis 3.0](https://raw.githubusercontent.com/redis/redis/3.0/redis.conf).
* The self documented [redis.conf for Redis 2.8](https://raw.githubusercontent.com/redis/redis/2.8/redis.conf).
* The self documented [redis.conf for Redis 2.6](https://raw.githubusercontent.com/redis/redis/2.6/redis.conf).
* The self documented [redis.conf for Redis 2.4](https://raw.githubusercontent.com/redis/redis/2.4/redis.conf).

Passing arguments via the command line
---

You can also pass Redis configuration parameters using the command line directly. This is very useful for testing purposes. The following is an example that starts a new Redis instance using port 6380 as a replica of the instance running at 127.0.0.1 port 6379:

```
./redis-server --port 6380 --replicaof 127.0.0.1 6379
```

The format of the arguments passed via the command line is exactly the same as the one used in the redis.conf file, with the exception that the keyword is prefixed with `--`.

Note that internally this generates an in-memory temporary config file (possibly concatenating the config file passed by the user, if any) where arguments are translated into the format of redis.conf.

Changing Redis configuration while the server is running
---

It is possible to reconfigure Redis on the fly without stopping and restarting the service, or to query the current configuration programmatically, using the special commands `CONFIG SET` and `CONFIG GET`.

Not all of the configuration directives are supported in this way, but most are supported as expected. Please refer to the `CONFIG SET` and `CONFIG GET` pages for more information.

Note that modifying the configuration on the fly **has no effect on the redis.conf file**, so at the next restart of Redis the old configuration will be used instead.

Make sure to also modify the `redis.conf` file according to the configuration you set using `CONFIG SET`. You can do it manually, or you can use `CONFIG REWRITE`, which will automatically scan your `redis.conf` file and update the fields which don't match the current configuration value.
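The translation from `--keyword args...` into redis.conf-style lines can be sketched roughly as follows (a simplification: real Redis also handles quoting and merges these lines with any config file passed by the user):

```python
def args_to_conf(argv):
    """Turn command-line tokens like --port 6380 into conf lines."""
    lines, current = [], []
    for token in argv:
        if token.startswith("--"):
            if current:
                lines.append(" ".join(current))
            current = [token[2:]]  # strip the -- prefix: new keyword
        else:
            current.append(token)  # argument of the current keyword
    if current:
        lines.append(" ".join(current))
    return lines

print(args_to_conf(["--port", "6380", "--replicaof", "127.0.0.1", "6379"]))
# ['port 6380', 'replicaof 127.0.0.1 6379']
```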
Fields that do not exist but are set to the default value are not added. Comments inside your configuration file are retained.

Configuring Redis as a cache
---

If you plan to use Redis as a cache where every key will have an expire set, you may consider using the following configuration instead (assuming a max memory limit of 2 megabytes as an example):

```
maxmemory 2mb
maxmemory-policy allkeys-lru
```

In this configuration there is no need for the application to set a time to live for keys using the `EXPIRE` command (or equivalent) since all the keys will be evicted using an approximated LRU algorithm as long as we hit the 2
megabyte memory limit. Basically, in this configuration Redis acts in a similar way to memcached. We have more extensive documentation about using Redis as an LRU cache [here](/topics/lru-cache).
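The effect of `allkeys-lru` can be illustrated with a toy sketch (a plain LRU over a fixed number of entries; the real Redis policy is an approximated LRU driven by memory usage and sampling, not an exact entry count):

```python
from collections import OrderedDict

class ToyLRUCache:
    """Evicts the least recently used key once the capacity is hit,
    mimicking allkeys-lru with entry count standing in for maxmemory."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.data = OrderedDict()

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # a write counts as a use
        self.data[key] = value
        while len(self.data) > self.max_entries:
            self.data.popitem(last=False)  # evict the least recently used key

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]
```

Note that, as in Redis with `allkeys-lru`, the application never sets a TTL: eviction is driven purely by access recency once the limit is reached.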
Persistence refers to the writing of data to durable storage, such as a solid-state disk (SSD). Redis provides a range of persistence options. These include:

* **RDB** (Redis Database): RDB persistence performs point-in-time snapshots of your dataset at specified intervals.
* **AOF** (Append Only File): AOF persistence logs every write operation received by the server. These operations can then be replayed again at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself.
* **No persistence**: You can disable persistence completely. This is sometimes used when caching.
* **RDB + AOF**: You can also combine both AOF and RDB in the same instance.

If you'd rather not think about the tradeoffs between these different persistence strategies, you may want to consider [Redis Enterprise's persistence options](https://docs.redis.com/latest/rs/databases/configure/database-persistence/), which can be pre-configured using a UI.

To learn more about how to evaluate your Redis persistence strategy, read on.

## RDB advantages

* RDB is a very compact single-file point-in-time representation of your Redis data. RDB files are perfect for backups. For instance you may want to archive your RDB files every hour for the latest 24 hours, and to save an RDB snapshot every day for 30 days. This allows you to easily restore different versions of the data set in case of disasters.
* RDB is very good for disaster recovery, being a single compact file that can be transferred to far data centers, or onto Amazon S3 (possibly encrypted).
* RDB maximizes Redis performance, since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest. The parent process will never perform disk I/O or the like.
* RDB allows faster restarts with big datasets compared to AOF.
* On replicas, RDB supports [partial resynchronizations after restarts and failovers](https://redis.io/topics/replication#partial-resynchronizations-after-restarts-and-failovers).

## RDB disadvantages

* RDB is NOT good if you need to minimize the chance of data loss in case Redis stops working (for example after a power outage). You can configure different *save points* where an RDB is produced (for instance after at least five minutes and 100 writes against the data set, you can have multiple save points). However you'll usually create an RDB snapshot every five minutes or more, so in case of Redis stopping working without a correct shutdown for any reason you should be prepared to lose the latest minutes of data.
* RDB needs to fork() often in order to persist on disk using a child process. fork() can be time consuming if the dataset is big, and may result in Redis stopping serving clients for some milliseconds or even for one second if the dataset is very big and the CPU performance is not great. AOF also needs to fork() but less frequently and you can tune how often you want to rewrite your logs without any trade-off on durability.

## AOF advantages

* Using AOF Redis is much more durable: you can have different fsync policies: no fsync at all, fsync every second, fsync at every query. With the default policy of fsync every second, write performance is still great. fsync is performed using a background thread and the main thread will try hard to perform writes when no fsync is in progress, so you can only lose one second worth of writes.
* The AOF log is an append-only log, so there are no seeks, nor corruption problems if there is a power outage. Even if the log ends with a half-written command for some reason (disk full or other reasons) the redis-check-aof
https://github.com/redis/redis-doc/blob/master//docs/management/persistence.md
tool is able to fix it easily.
* Redis is able to automatically rewrite the AOF in background when it gets too big. The rewrite is completely safe as while Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.
* AOF contains a log of all the operations one after the other in an easy to understand and parse format. You can even easily export an AOF file. For instance even if you've accidentally flushed everything using the `FLUSHALL` command, as long as no rewrite of the log was performed in the meantime, you can still save your data set just by stopping the server, removing the latest command, and restarting Redis again.

## AOF disadvantages

* AOF files are usually bigger than the equivalent RDB files for the same dataset.
* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.

**Redis < 7.0**

* AOF can use a lot of memory if there are writes to the database during a rewrite (these are buffered in memory and written to the new AOF at the end).
* All write commands that arrive during rewrite are written to disk twice.
* Redis could freeze writing and fsyncing these write commands to the new AOF file at the end of the rewrite.

Ok, so what should I use?
---
The general indication is that you should use both persistence methods if you want a degree of data safety comparable to what PostgreSQL can provide you.

If you care a lot about your data, but still can live with a few minutes of data loss in case of disasters, you can simply use RDB alone.

There are many users using AOF alone, but we discourage it since to have an RDB snapshot from time to time is a great idea for doing database backups, for faster restarts, and in the event of bugs in the AOF engine.

The following sections will illustrate a few more details about the two persistence models.

## Snapshotting

By default Redis saves snapshots of the dataset on disk, in a binary file called `dump.rdb`. You can configure Redis to have it save the dataset every N seconds if there are at least M changes in the dataset, or you can manually call the `SAVE` or `BGSAVE` commands.

For example, this configuration will make Redis automatically dump the dataset to disk every 60 seconds if at least 1000 keys changed:

```
save 60 1000
```

This strategy is known as _snapshotting_.

### How it works

Whenever Redis needs to dump the dataset to disk, this is what happens:

* Redis [forks](http://linux.die.net/man/2/fork). We now have a child and a parent process.
* The child starts to write the dataset to a temporary RDB file.
* When the child is done writing the new RDB file,
it replaces the old one.

This method allows Redis to benefit from copy-on-write semantics.

## Append-only file

Snapshotting is not very durable. If your computer running Redis stops, your power line fails, or you accidentally `kill -9` your instance, the latest data written to Redis will be lost. While this may not be a big deal for some applications, there are use cases for full durability, and in these cases Redis snapshotting alone is not a viable option.

The _append-only file_ is an alternative, fully-durable strategy for Redis. It became available in version 1.1.

You can turn on the AOF in your configuration file:

```
appendonly yes
```

From now on, every time Redis receives a command that changes the dataset (e.g. `SET`) it will append it to the AOF. When you restart Redis it will re-play the AOF to rebuild the state.

Since Redis 7.0.0, Redis uses a multi part AOF mechanism. That is, the original single AOF file is split into a base file (at most one) and incremental files (there may be more than one). The base file represents an initial (RDB or AOF format) snapshot of the data present when the AOF is [rewritten](#log-rewriting). The incremental files contain incremental changes since the last base AOF file was created. All these files are put in a separate directory and are tracked by a manifest file.

### Log rewriting

The AOF gets bigger and bigger as write operations are performed. For example, if you are incrementing a counter 100 times, you'll end up with a single key in your dataset containing the final value, but 100 entries in your AOF. 99 of those entries are not needed to rebuild the current state.

The rewrite is completely safe.
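The append/replay model and the compaction a rewrite performs can be sketched like this (a toy model with commands as tuples, not Redis's actual AOF encoding):

```python
def replay_aof(aof):
    """Rebuild the dataset by re-applying every logged write command
    in order, which is what Redis does with the AOF at startup."""
    state = {}
    for cmd, *args in aof:
        if cmd == "SET":
            state[args[0]] = args[1]
        elif cmd == "INCR":
            state[args[0]] = int(state.get(args[0], 0)) + 1
        elif cmd == "DEL":
            state.pop(args[0], None)
    return state

def rewrite_aof(aof):
    """Produce the minimal command sequence that recreates the current
    state, in the spirit of a rewrite: one SET per surviving key."""
    return [("SET", key, value) for key, value in replay_aof(aof).items()]
```

For the counter example above, 100 `INCR` entries compact down to a single `SET` that yields the same final state when replayed.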
While Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.

So Redis supports an interesting feature: it is able to rebuild the AOF in the background without interrupting service to clients. Whenever you issue a `BGREWRITEAOF`, Redis will write the shortest sequence of commands needed to rebuild the current dataset in memory. If you're using the AOF with Redis 2.2 you'll need to run `BGREWRITEAOF` from time to time. Redis 2.4 and later can trigger log rewriting automatically (see the example configuration file for more information).

Since Redis 7.0.0, when an AOF rewrite is scheduled, the Redis parent process opens a new incremental AOF file to continue writing. The child process executes the rewrite logic and generates a new base AOF. Redis will use a temporary manifest file to track the newly generated base file and incremental file. When they are ready, Redis will perform an atomic replacement operation to make this temporary manifest file take effect. In order to avoid the problem of creating many incremental files in case of repeated failures and retries of an AOF rewrite, Redis introduces an AOF rewrite limiting mechanism to ensure that failed AOF rewrites are retried at a slower and slower rate.

### How durable is the append only file?

You can configure how many times Redis will [`fsync`](http://linux.die.net/man/2/fsync) data on disk. There are three options:

* `appendfsync always`: `fsync`
every time new commands are appended to the AOF. Very very slow, very safe. Note that the commands are appended to the AOF after a batch of commands from multiple clients or a pipeline are executed, so it means a single write and a single fsync (before sending the replies).
* `appendfsync everysec`: `fsync` every second. Fast enough (since version 2.4 likely to be as fast as snapshotting), and you may lose 1 second of data if there is a disaster.
* `appendfsync no`: Never `fsync`, just put your data in the hands of the Operating System. The faster and less safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel's exact tuning.

The suggested (and default) policy is to `fsync` every second. It is both fast and relatively safe. The `always` policy is very slow in practice, but it supports group commit, so if there are multiple parallel writes Redis will try to perform a single `fsync` operation.

### What should I do if my AOF gets truncated?

It is possible the server crashed while writing the AOF file, or the volume where the AOF file is stored was full at the time of writing. When this happens the AOF still contains consistent data representing a given point-in-time version of the dataset (that may be old up to one second with the default AOF fsync policy), but the last command in the AOF could be truncated. The latest major versions of Redis will be able to load the AOF anyway, just discarding the last non well formed command in the file. In this case the server will emit a log like the following:

```
* Reading RDB preamble from AOF file...
* Reading the remaining AOF tail...
# !!! Warning: short read while loading the AOF file !!!
# !!! Truncating the AOF at offset 439 !!!
# AOF loaded anyway because aof-load-truncated is enabled
```

You can change the default configuration to force Redis to stop in such cases if you want, but the default configuration is to continue regardless of the fact the last command in the file is not well-formed, in order to guarantee availability after a restart.

Older versions of Redis may not recover, and may require the following steps:

* Make a backup copy of your AOF file.
* Fix the original file using the `redis-check-aof` tool that ships with Redis:

      $ redis-check-aof --fix

* Optionally use `diff -u` to check what is the difference between two files.
* Restart the server with the fixed file.

### What should I do if my AOF gets corrupted?

If the AOF file is not just truncated, but corrupted with invalid byte sequences in the middle, things are more complex. Redis will complain at startup and will abort:

```
* Reading the remaining AOF tail...
# Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix
```

The best thing to do is to run the `redis-check-aof` utility, initially without the `--fix` option, then understand the problem, jump to the given offset in the file, and see if it is possible to manually repair the file: The AOF uses the same format of the Redis protocol and is quite
simple to fix manually. Otherwise it is possible to let the utility fix the file for us, but in that case all the AOF portion from the invalid part to the end of the file may be discarded, leading to a massive amount of data loss if the corruption happened to be in the initial part of the file.

### How it works

Log rewriting uses the same copy-on-write trick already in use for snapshotting. This is how it works:

**Redis >= 7.0**

* Redis [forks](http://linux.die.net/man/2/fork), so now we have a child and a parent process.
* The child starts writing the new base AOF in a temporary file.
* The parent opens a new increment AOF file to continue writing updates. If the rewriting fails, the old base and increment files (if there are any) plus this newly opened increment file represent the complete updated dataset, so we are safe.
* When the child is done rewriting the base file, the parent gets a signal, and uses the newly opened increment file and child generated base file to build a temp manifest, and persists it.
* Profit! Now Redis does an atomic exchange of the manifest files so that the result of this AOF rewrite takes effect. Redis also cleans up the old base file and any unused increment files.

**Redis < 7.0**

* Redis [forks](http://linux.die.net/man/2/fork), so now we have a child and a parent process.
* The child starts writing the new AOF in a temporary file.
* The parent accumulates all the new changes in an in-memory buffer (but at the same time it writes the new changes in the old append-only file, so if the rewriting fails, we are safe).
* When the child is done rewriting the file, the parent gets a signal, and appends the in-memory buffer at the end of the file generated by the child.
* Now Redis atomically renames the new file into the old one, and starts appending new data into the new file.

### How can I switch to AOF, if I'm currently using dump.rdb snapshots?

If you want to enable AOF in a server that is currently using RDB snapshots, you need to convert the data by enabling AOF via the CONFIG command on the live server first.

**IMPORTANT:** not following this procedure (e.g. just changing the config and restarting the server) can result in data loss!

**Redis >= 2.2**

Preparations:

* Make a backup of your latest dump.rdb file.
* Transfer this backup to a safe place.

Switch to AOF on the live database:

* Enable AOF: `redis-cli config set appendonly yes`
* Optionally disable RDB: `redis-cli config set save ""`
* Make sure writes are appended to the append only file correctly.
* **IMPORTANT:** Update your `redis.conf` (potentially through `CONFIG REWRITE`) and ensure that it matches the configuration above. If you forget this step, when you restart the server the configuration changes will be lost and the server will start again with the old configuration, resulting in a loss of your data.

Next time you restart the server:

* Before restarting the server, wait for the AOF rewrite to finish persisting the data. You can do that by watching `INFO persistence`, waiting for `aof_rewrite_in_progress` and `aof_rewrite_scheduled` to be `0`, and validating that `aof_last_bgrewrite_status` is `ok`.
* After restarting the
server, check that your database contains the same number of keys it contained previously.

**Redis 2.0**

* Make a backup of your latest dump.rdb file.
* Transfer this backup to a safe place.
* Stop all the writes against the database!
* Issue a `redis-cli BGREWRITEAOF`. This will create the append only file.
* Stop the server when Redis finished generating the AOF dump.
* Edit redis.conf and enable append only file persistence.
* Restart the server.
* Make sure that your database contains the same number of keys it contained before the switch.
* Make sure that writes are appended to the append only file correctly.

## Interactions between AOF and RDB persistence

Redis >= 2.4 makes sure to avoid triggering an AOF rewrite when an RDB snapshotting operation is already in progress, and does not allow a `BGSAVE` while an AOF rewrite is in progress. This prevents two Redis background processes from doing heavy disk I/O at the same time.

When snapshotting is in progress and the user explicitly requests a log rewrite operation using `BGREWRITEAOF`, the server will reply with an OK status code telling the user the operation is scheduled, and the rewrite will start once the snapshotting is completed.

In the case both AOF and RDB persistence are enabled and Redis restarts, the AOF file will be used to reconstruct the original dataset since it is guaranteed to be the most complete.

## Backing up Redis data

Before starting this section, make sure to read the following sentence: **Make Sure to Backup Your Database**.
Disks break, instances in the cloud disappear, and so forth: no backups means a huge risk of data disappearing into /dev/null.

Redis is very data backup friendly, since you can copy RDB files while the database is running: the RDB is never modified once produced, and while it gets produced it uses a temporary name and is renamed into its final destination atomically using rename(2) only when the new snapshot is complete. This means that copying the RDB file is completely safe while the server is running.

This is what we suggest:

* Create a cron job on your server creating hourly snapshots of the RDB file in one directory, and daily snapshots in a different directory.
* Every time the cron script runs, make sure to call the `find` command to make sure too old snapshots are deleted: for instance you can take hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. Make sure to name the snapshots with date and time information.
* At least one time every day make sure to transfer an RDB snapshot *outside your data center* or at least *outside the physical machine* running your Redis instance.

### Backing up AOF persistence

If you run a Redis instance with only AOF persistence enabled, you can still perform backups. Since Redis 7.0.0, AOF files are split into multiple files which reside in a single directory determined by the `appenddirname` configuration. During normal operation all you need to do is copy/tar the files in this directory to achieve a backup. However, if this is done during a [rewrite](#log-rewriting), you might end up with an invalid backup. To work around this you must disable AOF rewrites during the backup:

1. Turn off automatic rewrites with
`CONFIG SET` `auto-aof-rewrite-percentage 0`

   Make sure you don't manually start a rewrite (using `BGREWRITEAOF`) during this time.
2. Check there's no current rewrite in progress using `INFO persistence` and verifying `aof_rewrite_in_progress` is 0. If it's 1, then you'll need to wait for the rewrite to complete.
3. Now you can safely copy the files in the `appenddirname` directory.
4. Re-enable rewrites when done: `CONFIG SET` `auto-aof-rewrite-percentage `

**Note:** If you want to minimize the time AOF rewrites are disabled you may create hard links to the files in `appenddirname` (in step 3 above) and then re-enable rewrites (step 4) after the hard links are created. Now you can copy/tar the hardlinks and delete them when done. This works because Redis guarantees that it only appends to files in this directory, or completely replaces them if necessary, so the content should be consistent at any given point in time.

**Note:** If you want to handle the case of the server being restarted during the backup and make sure no rewrite will automatically start after the restart, you can change step 1 above to also persist the updated configuration via `CONFIG REWRITE`. Just make sure to re-enable automatic rewrites when done (step 4) and persist it with another `CONFIG REWRITE`.

Prior to version 7.0.0 backing up the AOF file can be done simply by copying the aof file (like backing up the RDB snapshot). The file may lack the final part but Redis will still be able to load it (see the previous sections about [truncated AOF files](#what-should-i-do-if-my-aof-gets-truncated)).
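The hard-link trick for the AOF directory can be sketched as follows (a minimal sketch assuming a local directory of AOF files; names and error handling are illustrative):

```python
import os

def snapshot_aof_dir(aof_dir, backup_dir):
    """Hard-link every AOF file into backup_dir so rewrites can be
    re-enabled immediately; the links can then be copied/tarred and
    deleted at leisure, while sharing storage with the originals."""
    os.makedirs(backup_dir, exist_ok=True)
    for name in os.listdir(aof_dir):
        os.link(os.path.join(aof_dir, name), os.path.join(backup_dir, name))
```

Hard links only work within a single filesystem, so the backup directory must live on the same volume as the AOF directory; the subsequent copy/tar can then go anywhere.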
## Disaster recovery

Disaster recovery in the context of Redis is basically the same story as backups, plus the ability to transfer those backups to many different external data centers. This way data is secured even in the case of some catastrophic event affecting the main data center where Redis is running and producing its snapshots.

We'll review the most interesting disaster recovery techniques that don't have too high costs.

* Amazon S3 and other similar services are a good way for implementing your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using `gpg -c` (in symmetric encryption mode). Make sure to store your password in many different safe places (for instance give a copy to the most important people of your organization). It is recommended to use multiple storage services for improved data safety.
* Transfer your snapshots using SCP (part of SSH) to far servers. This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and generate an ssh client key without passphrase, then add it to the `authorized_keys` file of your small VPS. You are ready to transfer backups in an automated fashion. Get at least two VPSes with two different providers for best results.

It is important to understand that this system can easily fail if not implemented in the right way. At least, make absolutely sure that after the transfer is completed you are able to verify the file size (that should match the one of the file you copied) and possibly the SHA1 digest, if you are using a VPS. You also need some kind of independent alert system if
the transfer of fresh backups is not working for some reason.
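The size-plus-SHA1 verification suggested above can be sketched in a few lines (a minimal sketch; file names are illustrative):

```python
import hashlib
import os

def verify_backup(original, copy):
    """Return True if the copied backup matches the original in both
    file size and SHA1 digest, as a cheap post-transfer sanity check."""
    if os.path.getsize(original) != os.path.getsize(copy):
        return False

    def sha1_of(path):
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                digest.update(chunk)
        return digest.hexdigest()

    return sha1_of(original) == sha1_of(copy)
```

In practice you would compute the digest on the remote side (for instance with `sha1sum` over SSH) and compare it against the local one, so that a corrupted transfer is caught rather than verified against itself.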
Redis scales horizontally with a deployment topology called Redis Cluster. This topic will teach you how to set up, test, and operate Redis Cluster in production. You will learn about the availability and consistency characteristics of Redis Cluster from the end user's point of view.

If you plan to run a production Redis Cluster deployment or want to understand better how Redis Cluster works internally, consult the [Redis Cluster specification](/topics/cluster-spec). To learn how Redis Enterprise handles scaling, see [Linear Scaling with Redis Enterprise](https://redis.com/redis-enterprise/technology/linear-scaling-redis-enterprise/).

## Redis Cluster 101

Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes. Redis Cluster also provides some degree of availability during partitions: in practical terms, the ability to continue operations when some nodes fail or are unable to communicate. However, the cluster will become unavailable in the event of larger failures (for example, when the majority of masters are unavailable).

So, with Redis Cluster, you get the ability to:

* Automatically split your dataset among multiple nodes.
* Continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster.

#### Redis Cluster TCP ports

Every Redis Cluster node requires two open TCP connections: a Redis TCP port used to serve clients, e.g., 6379, and a second port known as the _cluster bus port_. By default, the cluster bus port is set by adding 10000 to the data port (e.g., 16379); however, you can override this with the `cluster-port` configuration.

The cluster bus is a node-to-node communication channel that uses a binary protocol, which is more suited to exchanging information between nodes because it uses little bandwidth and processing time. Nodes use the cluster bus for failure detection, configuration updates, failover authorization, and so forth.
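For example, a node's two ports might be configured like this (the port numbers are illustrative; `cluster-port` may be omitted to accept the default of the data port plus 10000):

```
port 6379
cluster-enabled yes
# optional override of the bus port (default here would be 6379 + 10000 = 16379)
cluster-port 16379
```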
Clients should never try to communicate with the cluster bus port, but rather use the Redis command port. However, make sure you open both ports in your firewall, otherwise Redis Cluster nodes won't be able to communicate.

For a Redis Cluster to work properly you need, for each node:

1. The client communication port (usually 6379) to be open to all the clients that need to reach the cluster, plus all the other cluster nodes that use the client port for key migrations.
2. The cluster bus port to be reachable from all the other cluster nodes.

If you don't open both TCP ports, your cluster will not work as expected.

#### Redis Cluster and Docker

Currently, Redis Cluster does not support NATted environments and in general environments where IP addresses or TCP ports are remapped.

Docker uses a technique called _port mapping_: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using. This is useful for running multiple containers using the same ports, at the same time, on the same server.

To make Docker compatible with Redis Cluster, you need to use Docker's _host networking mode_. Please see the `--net=host` option in the [Docker documentation](https://docs.docker.com/engine/userguide/networking/dockernetworks/) for more information.

#### Redis Cluster data sharding

Redis Cluster does not use consistent hashing, but a different form of sharding where every key is conceptually part of what we call a **hash slot**.

There are 16384 hash slots in Redis Cluster, and to compute the hash slot for a given key, we simply take the CRC16 of the key modulo 16384.

Every node in a Redis Cluster is responsible for a subset of the hash slots, so, for example, you may have a cluster with 3 nodes, where:

* Node A
and to compute the hash slot for a given key, we simply take the CRC16 of the key modulo 16384.

Every node in a Redis Cluster is responsible for a subset of the hash slots, so, for example, you may have a cluster with 3 nodes, where:

* Node A contains hash slots from 0 to 5500.
* Node B contains hash slots from 5501 to 11000.
* Node C contains hash slots from 11001 to 16383.

This makes it easy to add and remove cluster nodes. For example, if I want to add a new node D, I need to move some hash slots from nodes A, B, C to D. Similarly, if I want to remove node A from the cluster, I can just move the hash slots served by A to B and C. Once node A is empty, I can remove it from the cluster completely.

Moving hash slots from one node to another does not require stopping any operations; therefore, adding and removing nodes, or changing the percentage of hash slots held by a node, requires no downtime.

Redis Cluster supports multiple key operations as long as all of the keys involved in a single command execution (or whole transaction, or Lua script execution) belong to the same hash slot. The user can force multiple keys to be part of the same hash slot by using a feature called *hash tags*.

Hash tags are documented in the Redis Cluster specification, but the gist is that if there is a substring between {} brackets in a key, only what is inside the braces is hashed. For example, the keys `user:{123}:profile` and `user:{123}:account` are guaranteed to be in the same hash slot because they share the same hash tag. As a result, you can operate on these two keys in the same multi-key operation.

#### Redis Cluster master-replica model

To remain available when a subset of master nodes are failing or are not able to communicate with the majority of nodes, Redis Cluster uses a master-replica model where every hash slot has from 1 (the master itself) to N replicas (N-1 additional replica nodes).
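The slot-mapping rule described under data sharding, including hash-tag extraction, can be sketched in a few lines of Ruby. This is an illustrative reimplementation, not code from any client library; Redis uses the XMODEM variant of CRC16 (polynomial 0x1021):

```ruby
# CRC16 (XMODEM variant), as used by Redis Cluster for key hashing.
def crc16(data)
  crc = 0
  data.each_byte do |b|
    crc ^= b << 8
    8.times do
      crc = (crc & 0x8000).zero? ? (crc << 1) : ((crc << 1) ^ 0x1021)
      crc &= 0xFFFF
    end
  end
  crc
end

# Map a key to one of the 16384 hash slots. If the key contains a
# non-empty {...} hash tag, only the tag is hashed.
def hash_slot(key)
  s = key.index('{')
  if s
    e = key.index('}', s + 1)
    key = key[(s + 1)...e] if e && e > s + 1
  end
  crc16(key) % 16384
end
```

With this rule, `hash_slot("user:{123}:profile")` and `hash_slot("user:{123}:account")` land in the same slot, while the plain keys `foo` and `hello` map to slots 12182 and 866 respectively, matching the redirections `redis-cli` reports later in this page.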
In our example cluster with nodes A, B, C, if node B fails the cluster is not able to continue, since we no longer have a way to serve hash slots in the range 5501-11000.

However, when the cluster is created (or at a later time), we add a replica node to every master, so that the final cluster is composed of A, B, C that are master nodes, and A1, B1, C1 that are replica nodes. This way, the system can continue if node B fails.

Node B1 replicates B, and if B fails, the cluster will promote node B1 as the new master and will continue to operate correctly. However, note that if nodes B and B1 fail at the same time, Redis Cluster will not be able to continue to operate.

#### Redis Cluster consistency guarantees

Redis Cluster does not guarantee **strong consistency**. In practical terms this means that under certain conditions it is possible that Redis Cluster will lose writes that were acknowledged by the system to the client.

The first reason why Redis Cluster can lose writes is that it uses asynchronous replication. This means that during writes the following happens:

* Your client writes to the master B.
* The master B replies OK to your client.
* The master B propagates the write to its replicas B1, B2 and B3.

As you can see, B does not wait for an acknowledgement from B1, B2, B3 before replying
to the client, since this would be a prohibitive latency penalty for Redis. So, if your client writes something, B acknowledges the write, but crashes before being able to send the write to its replicas, one of the replicas (that did not receive the write) can be promoted to master, losing the write forever.

This is very similar to what happens with most databases that are configured to flush data to disk every second, so it is a scenario you can already reason about because of past experience with traditional database systems not involving distributed systems. Similarly, you can improve consistency by forcing the database to flush data to disk before replying to the client, but this usually results in prohibitively low performance. That would be the equivalent of synchronous replication in the case of Redis Cluster.

Basically, there is a trade-off to be made between performance and consistency.

Redis Cluster has support for synchronous writes when absolutely needed, implemented via the `WAIT` command. This makes losing writes a lot less likely. However, note that Redis Cluster does not implement strong consistency even when synchronous replication is used: it is always possible, under more complex failure scenarios, that a replica that was not able to receive the write will be elected as master.

There is another notable scenario where Redis Cluster will lose writes, which happens during a network partition where a client is isolated with a minority of instances including at least a master.

Take as an example our 6 nodes cluster composed of A, B, C, A1, B1, C1, with 3 masters and 3 replicas. There is also a client, that we will call Z1.
After a partition occurs, it is possible that in one side of the partition we have A, C, A1, B1, C1, and in the other side we have B and Z1.

Z1 is still able to write to B, which will accept its writes. If the partition heals in a very short time, the cluster will continue normally. However, if the partition lasts enough time for B1 to be promoted to master on the majority side of the partition, the writes that Z1 has sent to B in the meantime will be lost.

{{% alert title="Note" color="info" %}}
There is a **maximum window** to the amount of writes Z1 will be able to send to B: if enough time has elapsed for the majority side of the partition to elect a replica as master, every master node in the minority side will have stopped accepting writes.
{{% /alert %}}

This amount of time is a very important configuration directive of Redis Cluster, and is called the **node timeout**. After node timeout has elapsed, a master node is considered to be failing, and can be replaced by one of its replicas. Similarly, if after node timeout has elapsed a master node is not able to sense the majority of the other master nodes, it enters an error state and stops accepting writes.
## Redis Cluster configuration parameters

We are about to create an example cluster deployment. Before we continue, let's introduce the configuration parameters that Redis Cluster introduces in the `redis.conf` file.

* **cluster-enabled `<yes/no>`**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a standalone instance as usual.
* **cluster-config-file `<filename>`**: Note that despite the name of this option, this is not a user-editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed to disk as a result of some message reception.
* **cluster-node-timeout `<milliseconds>`**: The maximum amount of time a Redis Cluster node can be unavailable without being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time will stop accepting queries.
* **cluster-slave-validity-factor `<factor>`**: If set to zero, a replica will always consider itself valid, and will therefore always try to failover a master, regardless of the amount of time the link between the master and the replica remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a replica, it will not try to start a failover if the master link was disconnected for more than the specified amount of time.
  For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a replica disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no replica that is able to failover it. In that case the cluster will return to being available only when the original master rejoins the cluster.
* **cluster-migration-barrier `<count>`**: Minimum number of replicas a master will remain connected with, for another replica to migrate to a master which is no longer covered by any replica. See the appropriate section about replica migration in this tutorial for more information.
* **cluster-require-full-coverage `<yes/no>`**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed.
* **cluster-allow-reads-when-down `<yes/no>`**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as failed, either when a node can't reach a quorum of masters or when full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of
changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be useful when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible.

## Create and use a Redis Cluster

To create and use a Redis Cluster, follow these steps:

* [Create a Redis Cluster](#create-a-redis-cluster)
* [Interact with the cluster](#interact-with-the-cluster)
* [Write an example app with redis-rb-cluster](#write-an-example-app-with-redis-rb-cluster)
* [Reshard the cluster](#reshard-the-cluster)
* [A more interesting example application](#a-more-interesting-example-application)
* [Test the failover](#test-the-failover)
* [Manual failover](#manual-failover)
* [Add a new node](#add-a-new-node)
* [Remove a node](#remove-a-node)
* [Replica migration](#replica-migration)
* [Upgrade nodes in a Redis Cluster](#upgrade-nodes-in-a-redis-cluster)
* [Migrate to Redis Cluster](#migrate-to-redis-cluster)

But, first, familiarize yourself with the requirements for creating a cluster.

#### Requirements to create a Redis Cluster

To create a cluster, the first thing you need is to have a few empty Redis instances running in _cluster mode_. At minimum, set the following directives in the `redis.conf` file:

```
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```

To enable cluster mode, set the `cluster-enabled` directive to `yes`. Every instance also contains the path of a file where the configuration for this node is stored, which by default is `nodes.conf`. This file is never touched by humans; it is simply generated at startup by the Redis Cluster instances, and updated every time it is needed.
Note that the **minimal cluster** that works as expected must contain at least three master nodes. For deployment, we strongly recommend a six-node cluster, with three masters and three replicas.

You can test this locally by creating directories named after the port number of the instance you'll run inside each of them. For example:

```
mkdir cluster-test
cd cluster-test
mkdir 7000 7001 7002 7003 7004 7005
```

Create a `redis.conf` file inside each of the directories, from 7000 to 7005. As a template for your configuration file just use the small example above, but make sure to replace the port number `7000` with the right port number according to the directory name.

You can start each instance as follows, each running in a separate terminal tab:

```
cd 7000
redis-server ./redis.conf
```

You'll see from the logs that every node assigns itself a new ID:

    [82462] 26 Nov 11:56:55.329 * No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1

This ID will be used forever by this specific instance in order for the instance to have a unique name in the context of the cluster. Every node remembers every other node using these IDs, and not by IP or port. IP addresses and ports may change, but the unique node identifier will never change for the entire life of the node. We call this identifier simply **Node ID**.

#### Create a Redis Cluster

Now that we have a number of instances running, you need to create your cluster by writing some meaningful configuration to the nodes. You can configure and execute individual instances manually or use the create-cluster script. Let's go over how you do it manually.

To create the cluster, run:

    redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
    127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
    --cluster-replicas 1

The command used here is **create**, since we want to create a new cluster.
The option `--cluster-replicas 1` means that we want a replica for every master created. The other arguments are the list of addresses of the instances I
want to use to create the new cluster.

`redis-cli` will propose a configuration. Accept the proposed configuration by typing **yes**. The cluster will be configured and *joined*, which means that instances will be bootstrapped into talking with each other. Finally, if everything has gone well, you'll see a message like this:

    [OK] All 16384 slots covered

This means that there is at least one master instance serving each of the 16384 available slots.

If you don't want to create a Redis Cluster by configuring and executing individual instances manually as explained above, there is a much simpler system (but you'll not learn the same amount of operational details). Find the `utils/create-cluster` directory in the Redis distribution. There is a script called `create-cluster` inside (same name as the directory it is contained in); it's a simple bash script. In order to start a 6-node cluster with 3 masters and 3 replicas, just type the following commands:

1. `create-cluster start`
2. `create-cluster create`

Reply `yes` in step 2 when the `redis-cli` utility asks you to accept the cluster layout.

You can now interact with the cluster; the first node will start at port 30001 by default. When you are done, stop the cluster with:

3. `create-cluster stop`

Please read the `README` inside this directory for more information on how to run the script.

#### Interact with the cluster

To connect to Redis Cluster, you'll need a cluster-aware Redis client. See the [documentation](/docs/clients) for your client of choice to determine its cluster support.
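A cluster-aware client relies on redirections: when a node does not own a key's slot, it answers with an error of the form `MOVED <slot> <host:port>`, and the client retries against that address while caching the slot-to-node mapping. A minimal Ruby sketch of that bookkeeping (illustrative only, not any client library's actual code):

```ruby
# Parse a MOVED error ("MOVED <slot> <host:port>") and record the
# owning node in a slot cache, as a cluster-aware client would.
def parse_moved(message)
  _keyword, slot, addr = message.split(' ')
  [slot.to_i, addr]
end

slot_cache = {}  # slot number -> "host:port"
slot, addr = parse_moved("MOVED 12182 127.0.0.1:7002")
slot_cache[slot] = addr
# From now on, commands for keys in slot 12182 can be sent straight
# to 127.0.0.1:7002 without a redirection round trip.
```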
You can also test your Redis Cluster using the `redis-cli` command line utility:

```
$ redis-cli -c -p 7000
redis 127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
redis 127.0.0.1:7002> set hello world
-> Redirected to slot [866] located at 127.0.0.1:7000
OK
redis 127.0.0.1:7000> get foo
-> Redirected to slot [12182] located at 127.0.0.1:7002
"bar"
redis 127.0.0.1:7002> get hello
-> Redirected to slot [866] located at 127.0.0.1:7000
"world"
```

{{% alert title="Note" color="info" %}}
If you created the cluster using the script, your nodes may listen on different ports, starting from 30001 by default.
{{% /alert %}}

The `redis-cli` cluster support is very basic, so it always relies on the fact that Redis Cluster nodes are able to redirect a client to the right node. A serious client is able to do better than that, and cache the map between hash slots and node addresses, to directly use the right connection to the right node. The map is refreshed only when something changes in the cluster configuration, for example after a failover or after the system administrator changed the cluster layout by adding or removing nodes.

#### Write an example app with redis-rb-cluster

Before going forward showing how to operate the Redis Cluster, doing things like a failover or a resharding, we need to create some example application, or at least to be able to understand the semantics of a simple Redis Cluster client interaction. In this way we can run an example and
at the same time try to make nodes failing, or start a resharding, to see how Redis Cluster behaves under real-world conditions. It is not very helpful to see what happens while nobody is writing to the cluster.

This section explains some basic usage of [redis-rb-cluster](https://github.com/antirez/redis-rb-cluster) showing two examples. The first is the following, and is the [`example.rb`](https://github.com/antirez/redis-rb-cluster/blob/master/example.rb) file inside the redis-rb-cluster distribution:

```
 1  require './cluster'
 2
 3  if ARGV.length != 2
 4      startup_nodes = [
 5          {:host => "127.0.0.1", :port => 7000},
 6          {:host => "127.0.0.1", :port => 7001}
 7      ]
 8  else
 9      startup_nodes = [
10          {:host => ARGV[0], :port => ARGV[1].to_i}
11      ]
12  end
13
14  rc = RedisCluster.new(startup_nodes,32,:timeout => 0.1)
15
16  last = false
17
18  while not last
19      begin
20          last = rc.get("__last__")
21          last = 0 if !last
22      rescue => e
23          puts "error #{e.to_s}"
24          sleep 1
25      end
26  end
27
28  ((last.to_i+1)..1000000000).each{|x|
29      begin
30          rc.set("foo#{x}",x)
31          puts rc.get("foo#{x}")
32          rc.set("__last__",x)
33      rescue => e
34          puts "error #{e.to_s}"
35      end
36      sleep 0.1
37  }
```

The application does a very simple thing: it sets keys in the form `foo<number>` to `number`, one after the other. So if you run the program the result is the following stream of commands:

* SET foo0 0
* SET foo1 1
* SET foo2 2
* And so forth...

The program looks more complex than it should, because it is designed to show errors on the screen instead of exiting with an exception, so every operation performed with the cluster is wrapped by `begin` `rescue` blocks.

**Line 14** is the first interesting line in the program. It creates the Redis Cluster object, using as arguments a list of *startup nodes*, the maximum number of connections this object is allowed to take against different nodes, and finally the timeout after which a given operation is considered to have failed.
The startup nodes don't need to be all the nodes of the cluster. The important thing is that at least one node is reachable. Also note that redis-rb-cluster updates this list of startup nodes as soon as it is able to connect with the first node. You should expect such behavior from any other serious client.

Now that we have the Redis Cluster object instance stored in the **rc** variable, we are ready to use the object as if it were a normal Redis object instance.

This is exactly what happens in **lines 18 to 26**: when we restart the example we don't want to start again with `foo0`, so we store the counter inside Redis itself. The code above is designed to read this counter, or, if the counter does not exist, to assign it the value of zero. However, note how it is a while loop, as we want to try again and again even if the cluster is down and is returning errors. Normal applications don't need to be so careful.

**Lines between 28 and 37** start the main loop where the keys are set, or an error is displayed. Note the `sleep` call at the end of the loop. In your tests you can remove the sleep if you want to write to the cluster as fast as possible (relative to the fact that this is a busy loop without real parallelism, of course, so you'll usually get around 10k ops/second at best). Normally writes are slowed down in order for the example application to be easier to follow by humans.

Starting the application produces the following output:
```
ruby ./example.rb
1
2
3
4
5
6
7
8
9
^C (I stopped the program here)
```

This is not a very interesting program and we'll use a better one in a moment, but we can already see what happens during a resharding while the program is running.

#### Reshard the cluster

Now we are ready to try a cluster resharding. To do this, please keep the example.rb program running, so that you can see whether there is some impact on the running program. Also, you may want to comment out the `sleep` call to have a more serious write load during resharding.

Resharding basically means moving hash slots from one set of nodes to another set of nodes. Like cluster creation, it is accomplished using the redis-cli utility.

To start a resharding, just type:

    redis-cli --cluster reshard 127.0.0.1:7000

You only need to specify a single node; redis-cli will find the other nodes automatically.

Currently redis-cli is only able to reshard with administrator support; you can't just say move 5% of slots from this node to the other one (but this is pretty trivial to implement). So it starts with questions. The first is how much of a resharding you want to do:

    How many slots do you want to move (from 1 to 16384)?

We can try to reshard 1000 hash slots, which should already contain a non-trivial amount of keys if the example is still running without the sleep call.

Then redis-cli needs to know what the target of the resharding is, that is, the node that will receive the hash slots. I'll use the first master node, that is, 127.0.0.1:7000, but I need to specify the Node ID of the instance.
This was already printed in a list by redis-cli, but I can always find the ID of a node with the following command if I need to:

```
$ redis-cli -p 7000 cluster nodes | grep myself
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5460
```

Ok, so my target node is 97a3a64667477371c4479320d683e4c8db5858b1.

Now you'll get asked from which nodes you want to take those keys. I'll just type `all` in order to take a bit of hash slots from all the other master nodes.

After the final confirmation you'll see a message for every slot that redis-cli is going to move from one node to another, and a dot will be printed for every actual key moved from one side to the other.

While the resharding is in progress you should be able to see your example program running unaffected. You can stop and restart it multiple times during the resharding if you want.

At the end of the resharding, you can test the health of the cluster with the following command:

    redis-cli --cluster check 127.0.0.1:7000

All the slots will be covered as usual, but this time the master at 127.0.0.1:7000 will have more hash slots, something around 6461.

Resharding can be performed automatically, without the need to manually enter the parameters in an interactive way. This is possible using a command line like the following:

    redis-cli --cluster reshard <host>:<port> --cluster-from <node-id> --cluster-to <node-id> --cluster-slots <number of slots> --cluster-yes

This allows you to build some automation if you are likely to reshard often; however, currently there is no way for `redis-cli` to automatically rebalance the cluster
checking the distribution of keys across the cluster nodes and intelligently moving slots as needed. This feature will be added in the future.

The `--cluster-yes` option instructs the cluster manager to automatically answer "yes" to the command's prompts, allowing it to run in a non-interactive mode. Note that this option can also be activated by setting the `REDISCLI_CLUSTER_YES` environment variable.

#### A more interesting example application

The example application we wrote earlier is not very good. It writes to the cluster in a simple way without even checking if what was written is the right thing. From our point of view the cluster receiving the writes could just always write the key `foo` to `42` for every operation, and we would not notice at all.

So in the `redis-rb-cluster` repository, there is a more interesting application that is called `consistency-test.rb`. It uses a set of counters, by default 1000, and sends `INCR` commands in order to increment the counters. However, instead of just writing, the application does two additional things:

* When a counter is updated using `INCR`, the application remembers the write.
* It also reads a random counter before every write, and checks if the value is what we expected it to be, comparing it with the value it has in memory.

What this means is that this application is a simple **consistency checker**, and is able to tell you if the cluster lost some write, or if it accepted a write that we did not receive acknowledgment for.
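The core of that check can be modeled in a few lines of Ruby (a toy sketch: a plain Hash stands in for the cluster, whereas `consistency-test.rb` does this against real nodes):

```ruby
cluster  = Hash.new(0)  # stand-in for Redis
expected = Hash.new(0)  # the checker's memory of acknowledged writes

# Increment a counter and remember the acknowledged write.
def incr!(cluster, expected, key)
  cluster[key] += 1
  expected[key] += 1
end

# A stored value smaller than the remembered one means lost writes.
def lost_writes(cluster, expected, key)
  [expected[key] - cluster[key], 0].max
end

3.times { incr!(cluster, expected, "key_217") }
cluster["key_217"] = 0  # simulate writes lost by the cluster
lost_writes(cluster, expected, "key_217")  # => 3
```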
In the first case we'll see a counter having a value that is smaller than the one we remember, while in the second case the value will be greater.

Running the consistency-test application produces a line of output every second:

```
$ ruby consistency-test.rb
925 R (0 err) | 925 W (0 err) |
5030 R (0 err) | 5030 W (0 err) |
9261 R (0 err) | 9261 W (0 err) |
13517 R (0 err) | 13517 W (0 err) |
17780 R (0 err) | 17780 W (0 err) |
22025 R (0 err) | 22025 W (0 err) |
25818 R (0 err) | 25818 W (0 err) |
```

The line shows the number of **R**eads and **W**rites performed, and the number of errors (queries not accepted because of errors since the system was not available).

If some inconsistency is found, new lines are added to the output. This is what happens, for example, if I reset a counter manually while the program is running:

```
$ redis-cli -h 127.0.0.1 -p 7000 set key_217 0
OK

(in the other tab I see...)

94774 R (0 err) | 94774 W (0 err) |
98821 R (0 err) | 98821 W (0 err) |
102886 R (0 err) | 102886 W (0 err) | 114 lost |
107046 R (0 err) | 107046 W (0 err) | 114 lost |
```

When I set the counter to 0 the real value was 114, so the
program reports 114 lost writes (`INCR` commands that are not remembered by the cluster).

This program is much more interesting as a test case, so we'll use it to test the Redis Cluster failover.

#### Test the failover

To trigger the failover, the simplest thing we can do (that is also the semantically simplest failure that can occur in a distributed system) is to crash a single process, in our case a single master.

{{% alert title="Note" color="info" %}}
During this test, you should keep a tab open with the consistency test application running.
{{% /alert %}}

We can identify a master and crash it with the following command:

```
$ redis-cli -p 7000 cluster nodes | grep master
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385482984082 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master - 0 1385482983582 0 connected 11423-16383
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
```

Ok, so 7000, 7001, and 7002 are masters. Let's crash node 7002 with the **DEBUG SEGFAULT** command:

```
$ redis-cli -p 7002 debug segfault
Error: Server closed the connection
```

Now we can look at the output of the consistency test to see what it reported.

```
18849 R (0 err) | 18849 W (0 err) |
23151 R (0 err) | 23151 W (0 err) |
27302 R (0 err) | 27302 W (0 err) |

... many error warnings here ...

29659 R (578 err) | 29660 W (577 err) |
33749 R (578 err) | 33750 W (577 err) |
37918 R (578 err) | 37919 W (577 err) |
42077 R (578 err) | 42078 W (577 err) |
```

As you can see, during the failover the system was not able to accept 578 reads and 577 writes, however no inconsistency was created in the database. This may sound unexpected, as in the first part of this tutorial we stated that Redis Cluster can lose writes during the failover because it uses asynchronous replication.
What we did not say is that this is not very likely to happen, because Redis sends the reply to the client and the commands to replicate to the replicas at about the same time, so the window in which data can be lost is very small. However, the fact that it is hard to trigger does not mean that it is impossible, so this does not change the consistency guarantees provided by Redis Cluster.

We can now check what the cluster setup is after the failover (note that in the meantime I restarted the crashed instance so that it rejoins the cluster as a replica):

```
$ redis-cli -p 7000 cluster nodes
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected
```

Now the masters are running on ports 7000, 7001 and 7005. What was previously a master, that is the Redis instance running on port 7002, is now a replica of 7005.

The output of the `CLUSTER NODES` command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:

* Node ID
* ip:port
* flags: master, replica, myself, fail, ...
* if it is a replica, the Node ID of the master
* Time of the last pending PING still waiting for a reply.
* Time of the last PONG received.
* Configuration epoch for this node (see the Cluster specification).
* Status of the link to this node.
* Slots served...
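As a quick illustration, these fields can be pulled apart with standard shell tools; the sample line below is copied from the `CLUSTER NODES` output shown above:

```
# Split one line of CLUSTER NODES output into the fields listed above.
node='3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected'
printf '%s\n' "$node" | awk '{ print "node id: " $1
                               print "address: " $2
                               print "flags:   " $3
                               print "master:  " $4
                               print "link:    " $8 }'
```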
https://github.com/redis/redis-doc/blob/master//docs/management/scaling.md
#### Manual failover

Sometimes it is useful to force a failover without actually causing any problem on a master. For example, to upgrade the Redis process of one of the master nodes, it is a good idea to fail it over to turn it into a replica with minimal impact on availability.

Manual failovers are supported by Redis Cluster using the `CLUSTER FAILOVER` command, which must be executed in one of the replicas of the master you want to fail over.

Manual failovers are special and are safer compared to failovers resulting from actual master failures. They occur in a way that avoids data loss in the process, by switching clients from the original master to the new master only when the system is sure that the new master processed all the replication stream from the old one.

This is what you see in the replica log when you perform a manual failover:

```
# Manual failover user request accepted.
# Received replication offset for paused master manual failover: 347540
# All master replication stream processed, manual failover can start.
# Start of election delayed for 0 milliseconds (rank #0, offset 347540).
# Starting a failover election for epoch 7545.
# Failover election won: I'm the new master.
```

Basically, clients connected to the master we are failing over are stopped. At the same time the master sends its replication offset to the replica, which waits to reach the offset on its side. When the replication offset is reached, the failover starts, and the old master is informed about the configuration switch. When the clients are unblocked on the old master, they are redirected to the new master.
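A typical upgrade sequence built on this can be sketched as follows. This is a hypothetical dry run: `RCLI` echoes the commands instead of executing them (set `RCLI=redis-cli` against a real cluster), and port 7002 stands in for the replica of the master being upgraded:

```
# Dry-run sketch of a rolling upgrade driven by a manual failover.
# RCLI echoes instead of executing; use RCLI=redis-cli for real.
RCLI="echo redis-cli"
$RCLI -p 7002 cluster failover   # run on the replica: promote it safely
$RCLI -p 7002 cluster nodes      # then verify it now has the master flag
```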
{{% alert title="Note" color="info" %}}
To promote a replica to master, it must first be known as a replica by a majority of the masters in the cluster. Otherwise, it cannot win the failover election. If the replica has just been added to the cluster (see [Add a new node as a replica](#add-a-new-node-as-a-replica)), you may need to wait a while before sending the `CLUSTER FAILOVER` command, to make sure the masters in the cluster are aware of the new replica.
{{% /alert %}}

#### Add a new node

Adding a new node is basically the process of adding an empty node and then moving some data into it, in case it is a new master, or telling it to set up as a replica of a known node, in case it is a replica.

We'll show both, starting with the addition of a new master instance. In both cases the first step to perform is **adding an empty node**.

This is as simple as starting a new node on port 7006 (we already used 7000 to 7005 for our existing 6 nodes) with the same configuration used for the other nodes, except for the port number. To conform with the setup we used for the previous nodes:

* Create a new tab in your terminal application.
* Enter the `cluster-test` directory.
* Create a directory named `7006`.
* Create a `redis.conf` file inside, similar to the one used for the other nodes but using 7006 as the port number.
* Finally start the server with `../redis-server ./redis.conf`
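The steps above can also be scripted. This is a minimal sketch, assuming the same minimal configuration used for the other nodes in this tutorial (cluster mode on, a per-node `nodes.conf`, a 5000 ms node timeout, AOF enabled):

```
# Create the 7006 node directory and a minimal redis.conf for it,
# mirroring the configuration used for nodes 7000-7005.
mkdir -p cluster-test/7006
cat > cluster-test/7006/redis.conf <<'EOF'
port 7006
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
EOF
# Then start it from inside the directory:
#   cd cluster-test/7006 && ../redis-server ./redis.conf
cat cluster-test/7006/redis.conf
```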
At this point the server should be running.

Now we can use **redis-cli** as usual in order to add the node to the existing cluster.

```
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000
```

As you can see, I used the **add-node** command specifying the address of the new node as the first argument, and the address of a random existing node in the cluster as the second argument.

In practical terms redis-cli here did very little to help us, it just sent a `CLUSTER MEET` message to the node, something that is also possible to accomplish manually. However redis-cli also checks the state of the cluster before operating, so it is a good idea to always perform cluster operations via redis-cli, even when you know how the internals work.
Now we can connect to the new node to see if it really joined the cluster:

```
redis 127.0.0.1:7006> cluster nodes
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385543178575 0 connected 5960-10921
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385543179583 0 connected
f093c80dde814da99c5cf72a7dd01590792b783b :0 myself,master - 0 0 0 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543178072 3 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385543178575 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 master - 0 1385543179080 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385543177568 3 connected 11423-16383
```

Note that since this node is already connected to the cluster, it is already able to redirect client queries correctly and is, generally speaking, part of the cluster. However it has two peculiarities compared to the other masters:

* It holds no data as it has no assigned hash slots.
* Because it is a master without assigned slots, it does not participate in the election process when a replica wants to become a master.

Now it is possible to assign hash slots to this node using the resharding feature of `redis-cli`. There is no need to show this again, as we already did it in a previous section; it is simply a resharding that has the empty node as its target.

##### Add a new node as a replica

Adding a new replica can be performed in two ways. The obvious one is to use redis-cli again, but with the `--cluster-slave` option, like this:

```
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave
```

Note that the command line here is exactly like the one we used to add a new master, so we are not specifying to which master we want to add the replica.
In this case, what happens is that redis-cli will add the new node as a replica of a random master among the masters with fewer replicas.

However, you can specify exactly which master you want to target with your new replica with the following command line:

```
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave --cluster-master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
```

This way we assign the new replica to a specific master.

A more manual way to add a replica to a specific master is to add the new node as an empty master, and then turn it into a replica using the `CLUSTER REPLICATE` command. This also works if the node was added as a replica but you want to move it as a replica of a different master.

For example, in order to add a replica for the node 127.0.0.1:7005 that is currently serving hash slots in the range 11423-16383, that has a Node ID 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e, all I need