Eventually, resume the Deployment rollout and observe a new ReplicaSet coming up with all the new updates:
kubectl rollout resume deployment/nginx-deployment
The output is similar to this:
deployment.apps/nginx-deployment resumed
Watch the status of the rollout until it's done.
kubectl get rs -w
The output is similar to this:
NAME DESIRED CURRENT READY AGE
nginx-2142116321 2 2 2 2m
nginx-3926361531 2 2 0 6s
nginx-3926361531 2 2 1 18s
nginx-2142116321 1 2 2 2m
nginx-2142116321 1 2 2 2m
nginx-3926361531 3 2 1 18s
nginx-3926361531 3 2 1 18s
nginx-2142116321 1 1 1 2m
nginx-3926361531 3 3 1 18s
nginx-3926361531 3 3 2 19s
nginx-2142116321 0 1 1 2m
nginx-2142116321 0 1 1 2m
nginx-2142116321 0 0 0 2m
nginx-3926361531 3 3 3 20s
Get the status of the latest rollout:
kubectl get rs
The output is similar to this:
NAME DESIRED CURRENT READY AGE
nginx-2142116321 0 0 0 2m
nginx-3926361531 3 3 3 28s
Note: You cannot rollback a paused Deployment until you resume it.
Deployment status
A Deployment enters various states during its lifecycle. It can be progressing while rolling out a
new ReplicaSet, it can be complete , or it can fail to progress .
Progressing Deployment
Kubernetes marks a Deployment as progressing when one of the following tasks is performed:
The Deployment creates a new ReplicaSet.
The Deployment is scaling up its newest ReplicaSet.
The Deployment is scaling down its older ReplicaSet(s).
New Pods become ready or available (ready for at least MinReadySeconds ).
When the rollout becomes “progressing”, the Deployment controller adds a condition with the
following attributes to the Deployment's .status.conditions :
type: Progressing
status: "True"
reason: NewReplicaSetCreated | reason: FoundNewReplicaSet | reason: ReplicaSetUpdated
You can monitor the progress for a Deployment by using kubectl rollout status .
Complete Deployment
Kubernetes marks a Deployment as complete when it has the following characteristics:
All of the replicas associated with the Deployment have been updated to the latest
version you've specified, meaning any updates you've requested have been completed.
All of the replicas associated with the Deployment are available.
No old replicas for the Deployment are running.
When the rollout becomes “complete”, the Deployment controller sets a condition with the
following attributes to the Deployment's .status.conditions :
type: Progressing
status: "True"
reason: NewReplicaSetAvailable
This Progressing condition will retain a status value of "True" until a new rollout is initiated.
The condition holds even when availability of replicas changes (which does instead affect the
Available condition).
You can check if a Deployment has completed by using kubectl rollout status . If the rollout
completed successfully, kubectl rollout status returns a zero exit code.
kubectl rollout status deployment/nginx-deployment
The output is similar to this:
Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx-deployment" successfully rolled out
and the exit status from kubectl rollout is 0 (success):
echo $?
0
Failed Deployment
Your Deployment may get stuck trying to deploy its newest
ReplicaSet without ever
completing. This can occur due to some of the following factors:
Insufficient quota
Readiness probe failures
Image pull errors
Insufficient permissions
Limit ranges
Application runtime misconfiguration
One way you can detect this condition is to specify a deadline parameter in your Deployment
spec: ( .spec.progressDeadlineSeconds ). .spec.progressDeadlineSeconds denotes the number of
seconds the Deployment controller waits before indicating (in the Deployment status) that the
Deployment progress has stalled.
The following kubectl command sets the spec with progressDeadlineSeconds to make the
controller report lack of progress of a rollout for a Deployment after 10 minutes:
kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
The output is similar to this:
deployment.apps/nginx-deployment patched
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition
with the following attributes to the Deployment's .status.conditions :
type: Progressing
status: "False"
reason: ProgressDeadlineExceeded
This condition can also fail early and is then set to a status value of "False" due to reasons such as
ReplicaSetCreateError . Also, the deadline is not taken into account anymore once the
Deployment rollout completes.
See the Kubernetes API conventions for more information on status conditions.
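If you want to look at these conditions directly, one option (a minimal sketch; the Deployment name matches the earlier examples) is a jsonpath query:
kubectl get deployment nginx-deployment \
  -o jsonpath='{.status.conditions[?(@.type=="Progressing")]}'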
Note: Kubernetes takes no action on a stalled Deployment other than to report a status
condition with reason: ProgressDeadlineExceeded . Higher level orchestrators can take
advantage of it and act accordingly, for example, rollback the Deployment to its previous
version.
Note: If you pause a Deployment rollout, Kubernetes does not check progress against your
specified deadline. You can safely pause a Deployment rollout in the middle of a rollout and
resume without triggering the condition for exceeding the deadline.
You may experience transient errors with your Deployments, either due to a low timeout that
you have set or due to any other kind of error that can be treated as transient. For example, let's
suppose you have insufficient quota. If you describe the Deployment you will notice the
following section:
kubectl describe deployment nginx-deployment
The output is similar to this:
<...>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
ReplicaFailure True FailedCreate
<...>
If you run kubectl get deployment nginx-deployment -o yaml , the Deployment status is similar
to this:
status:
availableReplicas: 2
conditions:
- lastTransitionTime: 2016-10-04T12:25:39Z
lastUpdateTime: 2016-10-04T12:25:39Z
message: Replica set "nginx-deployment-4262182780" is progressing.
reason: ReplicaSetUpdated
status: "True"
type: Progressing
- lastTransitionTime: 2016-10-04T12:25:42Z
lastUpdateTime: 2016-10-04T12:25:42Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2016-10-04T12:25:39Z
lastUpdateTime: 2016-10-04T12:25:39Z
message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
object-counts, requested: pods=1, used: pods=3, limited: pods=2'
reason: FailedCreate
status: "True"
type: ReplicaFailure
observedGeneration: 3
replicas: 2
unavailableReplicas: 2
Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status
and the reason for the Progressing condition:
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing False ProgressDeadlineExceeded
ReplicaFailure True FailedCreate
You can address an issue of insufficient quota by scaling down your Deployment, by scaling
down other controllers you may be running, or by increasing quota in your namespace. If you
satisfy the quota conditions and the Deployment controller then completes the Deployment
rollout, you'll see the Deployment's status update with a successful condition ( status: "True" and
reason: NewReplicaSetAvailable ).
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
type: Available with status: "True" means that your Deployment has minimum availability.
Minimum availability is dictated by the parameters specified in the deployment strategy. type:
Progressing with status: "True" means that your Deployment is either in the middle of a rollout
and it is progressing or that it has successfully completed its progress and the minimum
required new replicas are available (see the Reason of the condition for the particulars - in our
case reason: NewReplicaSetAvailable means that the Deployment is complete).
You can check if a Deployment has failed to progress by using kubectl rollout status . kubectl
rollout status returns a non-zero exit code if the Deployment has exceeded the progression
deadline.
kubectl rollout status deployment/nginx-deployment
The output is similar to this:
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
and the exit status from kubectl rollout is 1 (indicating an error):
echo $?
1
Operating on a failed deployment
All actions that apply to a complete Deployment also apply to a failed Deployment. You can
scale it up/down, roll back to a previous revision, or even pause it if you need to apply multiple
tweaks in the Deployment Pod template.
Clean up Policy
You can set .spec.revisionHistoryLimit field in a Deployment to specify how many old
ReplicaSets for this Deployment you want to retain. The rest will be garbage-collected in the
background. By default, it is 10.
Note: Explicitly setting this field to 0 will result in cleaning up all the history of your
Deployment, so that Deployment will not be able to roll back.
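For example, to keep only the five most recent old ReplicaSets you could patch the Deployment used earlier (a sketch; the value 5 is illustrative):
kubectl patch deployment/nginx-deployment -p '{"spec":{"revisionHistoryLimit":5}}'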
Canary Deployment
If you want to roll out releases to a subset of users or servers using the Deployment, you can
create multiple Deployments, one for each release, following the canary pattern described in
managing resources .
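A minimal sketch of that pattern, assuming a track label to distinguish the two Deployments (the label key, Deployment names, and image tags below are illustrative); a Service selecting only app: nginx would then spread traffic across both the stable and canary Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-stable         # hypothetical name for the stable release
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      track: stable
  template:
    metadata:
      labels:
        app: nginx
        track: stable
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary         # hypothetical name for the canary release
spec:
  replicas: 1                # a small replica count limits canary exposure
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1  # the release under test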
Writing a Deployment Spec
As with all other Kubernetes configs, a Deployment needs .apiVersion , .kind , and .metadata
fields. For general information about working with config files, see deploying applications ,
configuring containers, and using kubectl to manage resources documents.
When the control plane creates new Pods for a Deployment, the .metadata.name of the
Deployment is part of the basis for naming those Pods. The name of a Deployment must be a
valid DNS subdomain value, but this can produce unexpected results for the Pod hostnames.
For best compatibility, the name should follow the more restrictive rules for a DNS label .
A Deployment also needs a .spec section .
Pod Template
The .spec.template and .spec.selector are the only required fields of the .spec .
The .spec.template is a Pod template . It has exactly the same schema as a Pod, except it is nested
and does not have an apiVersion or kind.
In addition to required fields for a Pod, a Pod template in a Deployment must specify
appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with
other controllers. See selector .
Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not
specified.
Replicas
.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.
Should you manually scale a Deployment, for example via kubectl scale deployment deployment --
replicas=X , and then you update that Deployment based on a manifest (for example: by
running kubectl apply -f deployment.yaml ), then applying that manifest overwrites the manual
scaling that you previously did.
If a HorizontalPodAutoscaler (or any similar API for horizontal scaling) is managing scaling for
a Deployment, don't set .spec.replicas .
Instead, allow the Kubernetes control plane to manage the .spec.replicas field automatically.
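For instance, rather than setting .spec.replicas yourself, you might let an autoscaler manage it (a sketch; the thresholds are illustrative):
kubectl autoscale deployment nginx-deployment --min=3 --max=10 --cpu-percent=80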
Selector
.spec.selector is a required field that specifies a label selector for the Pods targeted by this
Deployment.
.spec.selector must match .spec.template.metadata.labels , or it will be rejected by the API.
In API version apps/v1 , .spec.selector and .metadata.labels do not default
to .spec.template.metadata.labels if not set. So they must be set explicitly. Also note
that .spec.selector is immutable after creation of the Deployment in apps/v1 .
A Deployment may terminate Pods whose labels match the selector if their template is different
from .spec.template or if the total number of such Pods exceeds .spec.replicas . It brings up new
Pods with .spec.template if the number of Pods is less than the desired number.
Note: You should not create other Pods whose labels match this selector, either directly, by
creating another Deployment, or by creating another controller such as a ReplicaSet or a
ReplicationController. If you do so, the first Deployment thinks that it created these other Pods.
Kubernetes does not stop you from doing this.
If you have multiple controllers that have overlapping selectors, the controllers will fight with
each other and won't behave correctly.
Strategy
.spec.strategy specifies the strategy used to replace old Pods by new ones. .spec.strategy.type
can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.
Recreate Deployment
All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate .
Note: This will only guarantee Pod termination previous to creation for upgrades. If you
upgrade a Deployment, all Pods of the old revision will be terminated immediately. Successful
removal is awaited before any Pod of the new revision is created. If you manually delete a Pod,
the lifecycle is controlled by the ReplicaSet and the replacement will be created immediately
(even if the old Pod is still in a Terminating state). If you need an "at most" guarantee for your
Pods, you should consider using a StatefulSet .
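A minimal fragment selecting this strategy (only the relevant part of the Deployment spec is shown):
spec:
  strategy:
    type: Recreate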
Rolling Update Deployment
The Deployment updates Pods in a rolling update fashion when
.spec.strategy.type==RollingUpdate . You can specify maxUnavailable and maxSurge to control
the rolling update process.
Max Unavailable
.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum
number of Pods that can be unavailable during the update process. The value can be an absolute
number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute
number is calculated from percentage by rounding down. The value cannot be 0 if
.spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.
For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of
desired Pods immediately when the rolling update starts. Once new Pods are ready, old
ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that
the total number of Pods available at all times during the update is at least 70% of the desired
Pods.
Max Surge
.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number
of Pods that can be created over the desired number of Pods. The value can be an absolute
number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be
0 if MaxUnavailable is 0. The absolute number is calculated from the percentage by rounding
up. The default value is 25%.
For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately
when the rolling update starts, such that the total number of old and new Pods does not exceed
130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up
further, ensuring that the total number of Pods running at any time during the update is at
most 130% of desired Pods.
Here are some
Rolling Update Deployment examples that use the maxUnavailable and
maxSurge :
Max Unavailable
Max Surge
Hybrid
apiVersion : apps/v1
kind: Deployment
metadata :
name : nginx-deployment
labels :
app: nginx
spec:
replicas : 3
selector :
matchLabels :
app: nginx
template :
metadata :
labels :
app: nginx
spec:
containers :
- name : nginx
image : nginx:1.14.2
ports :
- containerPort : 80
strategy :
type: RollingUpdate
rollingUpdate :
maxUnavailable : 1
apiVersion : apps/v1
kind: Deployment
metadata :
name : nginx-deployment
labels :
app: nginx
spec:
replicas : 3
selector :
matchLabels :
app: nginx
template :
metadata :
labels :
app: nginx
spec:
containers :
- name : nginx
image : nginx:1.14.2
ports :
- containerPort : 80
strategy :
type: RollingUpdate
rollingUpdate :
maxSurge : 1
apiVersion : apps/v1
kind: Deployment
metadata :
name : nginx-deployment
labels :
app: nginx
spec:
replicas : 3
selector :
matchLabels :
app: nginx
template :
metadata :
labels :
app: nginx
spec:
containers :
- name : nginx
image : nginx:1.14.2
ports :
- containerPort : 80
strategy :
type: RollingUpdate
rollingUpdate :
maxSurge : 1
maxUnavailable : 1
Progress Deadline Seconds
.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you
want to wait for your Deployment to progress before the system reports back that the
Deployment has failed progressing - surfaced as a condition with type: Progressing , status:
"False" . and reason: ProgressDeadlineExceeded in the status of the resource. The Deployment
controller will keep retrying the Deployment. This defaults to 600. In the future, once automatic
rollback will be implemented, the Deployment controller will roll back a Deployment as soon as
it observes such a condition.
If specified, this field needs to be greater than .spec.minReadySeconds .
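Expressed in a manifest rather than with kubectl patch, the same setting might look like this fragment (sketch):
spec:
  progressDeadlineSeconds: 600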
Min Ready Seconds
.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for
which a newly created Pod should be ready without any of its containers crashing, for it to be
considered available. This defaults to 0 (the Pod will be considered available as soon as it is
ready). To learn more about when a Pod is considered ready, see Container Probes .
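A minimal fragment (sketch; 10 seconds is an illustrative value):
spec:
  minReadySeconds: 10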
Revision History Limit
A Deployment's revision history is stored in the ReplicaSets it controls.
.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to
retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output
of kubectl get rs . The configuration of each Deployment revision is stored in its ReplicaSets;
therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of
Deployment. By default, 10 old ReplicaSets will be kept, however its ideal value depends on the
frequency and stability of new Deployments.
More specifically,
setting this field to zero means that all old ReplicaSets with 0 replicas will be
cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history
is cleaned up.
Paused
.spec.paused is an optional boolean field for pausing and resuming a Deployment. The only
difference between a paused Deployment and one that is not paused, is that any changes into
the PodTemplateSpec of the paused Deployment will not trigger new rollouts as long as it is
paused. A Deployment is not paused by default when it is created.
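If you prefer to declare the pause in the manifest itself, a fragment might look like this (sketch):
spec:
  paused: true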
What's next
Learn more about Pods .
Run a stateless application using a Deployment .
Read the Deployment to understand the Deployment API.
Read about PodDisruptionBudget and how you can use it to manage application
availability during disruptions.
Use kubectl to create a Deployment .
ReplicaSet
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
Usually, you define a Deployment and let that Deployment manage ReplicaSets automatically.
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As
such, it is often used to guarantee the availability of a specified number of identical Pods.
How a ReplicaSet works
A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can
acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod
template specifying the data of new Pods it should create to meet the number of replicas
criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach
the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template.
A ReplicaSet is linked to its Pods via the Pods' metadata.ownerReferences field, which specifies
what resource the current object is owned by. All Pods acquired by a ReplicaSet have their
owning ReplicaSet's identifying information within their ownerReferences field. It's through
this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans
accordingly.
A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no
OwnerReference or the OwnerReference is not a Controller and it matches a ReplicaSet's
selector, it will be immediately acquired by said ReplicaSet.
When to use a ReplicaSet
A ReplicaSet ensures that a specified number of pod replicas are running at any given time.
However, a Deployment is a higher-level concept that manages ReplicaSets and provides
declarative updates to Pods along with a lot of other useful features. Therefore, we recommend
using Deployments instead of directly using ReplicaSets, unless you require custom update
orchestration or don't require updates at all.
This actually means that you may never need to manipulate ReplicaSet objects: use a
Deployment instead, and define your application in the spec section.
Example
controllers/frontend.yaml
apiVersion : apps/v1
kind: ReplicaSet
metadata :
name : frontend
labels :
app: guestbook
tier: frontend
spec:
# modify replicas according to your case
replicas : 3
selector :
matchLabels :
tier: frontend
template :
metadata :
labels :
tier: frontend
spec:
containers :
- name : php-redis
image : gcr.io/google_samples/gb-frontend:v3
Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the
defined ReplicaSet and the Pods that it manages.
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
You can then get the current ReplicaSets deployed:
kubectl get rs
And see the frontend one you created:
NAME DESIRED CURRENT READY AGE
frontend 3 3 3 6s
You can also check on the state of the ReplicaSet:
kubectl describe rs/frontend
And you will see output similar to:
Name: frontend
Namespace: default
Selector: tier=frontend
Labels: app=guestbook
tier=frontend
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"ReplicaSet","metadata":{"annotations":{},"labels":
{"app":"guestbook","tier":"frontend"},"name":"frontend",...
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: tier=frontend
Containers:
php-redis:
Image: gcr.io/google_samples/gb-frontend:v3
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 117s replicaset-controller Created pod: frontend-wtsmm
Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-b2zdv
Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-vcmts
And lastly you can check for the Pods brought up:
kubectl get pods
You should see Pod information similar to:
NAME READY STATUS RESTARTS AGE
frontend-b2zdv 1/1 Running 0 6m36s
frontend-vcmts 1/1 Running 0 6m36s
frontend-wtsmm 1/1 Running 0 6m36s
You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To
do this, get the yaml of one of the Pods running:
kubectl get pods frontend-b2zdv -o yaml
The output will look similar to this, with the frontend ReplicaSet's info set in the metadata's
ownerReferences field:
apiVersion : v1
kind: Pod
metadata :
creationTimestamp : "2020-02-12T07:06:16Z"
generateName : frontend-
labels :
tier: frontend
name : frontend-b2zdv
namespace : default
ownerReferences :
- apiVersion : apps/v1
blockOwnerDeletion : true
controller : true
kind: ReplicaSet
name : frontend
uid: f391f6db-bb9b-4c09-ae74-6a1f77f3d5cf
...
Non-Template Pod acquisitions
While you can create bare Pods with no problems, it is strongly recommended to make sure
that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The
reason for this is because a ReplicaSet is not limited to owning Pods specified by its template--
it can acquire other Pods in the manner specified in the previous sections.
Take the previous frontend ReplicaSet example, and the Pods specified in the following
manifest:
pods/pod-rs.yaml
apiVersion : v1
kind: Pod
metadata :
name : pod1
labels :
tier: frontend
spec:
containers :
- name : hello1
image : gcr.io/google-samples/hello-app:2.0
---
apiVersion : v1
kind: Pod
metadata :
name : pod2
labels :
tier: frontend
spec:
containers :
- name : hello2
image : gcr.io/google-samples/hello-app:1.0
As those Pods do not have a Controller (or any
object) as their owner reference and match the
selector of the frontend ReplicaSet, they will immediately be acquired by it.
Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its
initial Pod replicas to fulfill its replica count requirement:
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the
ReplicaSet would be over its desired count.
Fetching the Pods:
kubectl get pods
The output shows that the new Pods are either already terminated, or in the process of being
terminated:
NAME READY STATUS RESTARTS AGE
frontend-b2zdv 1/1 Running 0 10m
frontend-vcmts 1/1 Running 0 10m
frontend-wtsmm 1/1 Running 0 10m
pod1 0/1 Terminating 0 1s
pod2 0/1 Terminating 0 1s
If you create the Pods first:
kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
And then create the ReplicaSet however:
kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
You shall see that the ReplicaSet has acquired the Pods and has only created new ones
according to its spec until the number of its new Pods and the original matches its desired
count. As fetching the Pods:
kubectl get pods
Will reveal in its output:
NAME READY STATUS RESTARTS AGE
frontend-hmmj2 1/1 Running 0 9s
pod1 1/1 Running 0 36s
pod2 1/1 Running 0 36s
In this manner, a ReplicaSet can own a non-homogeneous set of Pods.
Writing a ReplicaSet manifest
As with all other Kubernetes API objects, a ReplicaSet needs the apiVersion , kind, and metadata
fields. For ReplicaSets, the kind is always a ReplicaSet.
When the control plane creates new Pods for a ReplicaSet, the .metadata.name of the ReplicaSet
is part of the basis for naming those Pods. The name of a ReplicaSet must be a valid DNS
subdomain value, but this can produce unexpected results for the Pod hostnames. For best
compatibility, the name should follow the more restrictive rules for a DNS label .
A ReplicaSet also needs a .spec section .
Pod Template
The .spec.template is a pod template which is also required to have labels in place. In our
frontend.yaml example we had one label: tier: frontend . Be careful not to overlap with the
selectors of other controllers, lest they try to adopt this Pod.
For the template's restart policy field, .spec.template.spec.restartPolicy , the only allowed value
is Always , which is the default.
Pod Selector
The .spec.selector field is a label selector . As discussed earlier these are the labels used to
identify potential Pods to acquire. In our frontend.yaml example, the selector was:
matchLabels :
tier: frontend
In the ReplicaSet, .spec.template.metadata.labels must match spec.selector , or it will be rejected
by the API.
Note: For 2 ReplicaSets specifying the same .spec.selector but
different .spec.template.metadata.labels and .spec.template.spec fields, each ReplicaSet ignores
the Pods created by the other ReplicaSet.
Replicas
You can specify how many Pods should run concurrently by setting .spec.replicas . The
ReplicaSet will create/delete its Pods to match this number.
If you do not specify .spec.replicas , then it defaults to 1.
Working with ReplicaSets
Deleting a ReplicaSet and its Pods
To delete a ReplicaSet and all of its Pods, use kubectl delete . The Garbage collector
automatically deletes all of the dependent Pods by default.
When using the REST API or the client-go library, you must set propagationPolicy to
Background or Foreground in the -d option. For example:
kubectl proxy --port =8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
Deleting just a ReplicaSet
You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --
cascade=orphan option. When using the REST API or the client-go library, you must set
propagationPolicy to Orphan . For example:
kubectl proxy --port =8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json | 438 |
Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old
and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will
not make any effort to make existing Pods match a new, different pod template. To update Pods
to a new spec in a controlled way, use a Deployment , as ReplicaSets do not support a rolling
update directly.
Isolating Pods from a ReplicaSet
You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to
remove Pods from service for debugging, data recovery, etc. Pods that are removed in this way
will be replaced automatically (assuming that the number of replicas is not also changed).
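For instance, changing the tier label on one of the Pods from the earlier frontend example removes it from the ReplicaSet, which then creates a replacement (the tier=debug value is illustrative):
kubectl label pod frontend-b2zdv tier=debug --overwrite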
Scaling a ReplicaSet
A ReplicaSet can be easily scaled up or down by simply updating the .spec.replicas field. The
ReplicaSet controller ensures that a desired number of Pods with a matching label selector are
available and operational.
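For example, to scale the frontend ReplicaSet from the earlier example to 5 replicas:
kubectl scale rs frontend --replicas=5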
When scaling down, the ReplicaSet controller chooses which pods to delete by sorting the
available pods to prioritize scaling down pods based on the following general algorithm:
Pending (and unschedulable) pods are scaled down first
If controller.kubernetes.io/pod-deletion-cost annotation is set, then the pod with the
lower value will come first.
Pods on nodes with more replicas come before pods on nodes with fewer replicas.
If the pods' creation times differ, the pod that was created more recently comes before the
older pod (the creation times are bucketed on an integer log scale when the
LogarithmicScaleDown feature gate is enabled)
If all of the above match, then selection is random.
Pod deletion cost
FEATURE STATE: Kubernetes v1.22 [beta]
Using the controller.kubernetes.io/pod-deletion-cost annotation, users can set a preference
regarding which pods to remove first when downscaling a ReplicaSet.
The annotation should be set on the pod, the range is [-2147483648, 2147483647]. It represents
the cost of deleting a pod compared to other pods belonging to the same ReplicaSet. Pods with
lower deletion cost are preferred to be deleted before pods with higher deletion cost.
The implicit value for this annotation for pods that don't set it is 0; negative values are
permitted. Invalid values will be rejected by the API server.
This feature is beta and enabled by default. You can disable it using the feature gate
PodDeletionCost in both kube-apiserver and kube-controller-manager.
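A minimal sketch of a Pod carrying the annotation (the Pod name and cost value are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: frontend-low-cost              # hypothetical Pod name
  labels:
    tier: frontend
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "-100"   # deleted before higher-cost peers
spec:
  containers:
  - name: hello
    image: gcr.io/google-samples/hello-app:2.0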
Note:
This is honored on a best-effort basis, so it does not offer any guarantees on pod deletion
order.
Users should avoid updating the annotation frequently, such as updating it based on a
metric value, because doing so will generate a significant number of pod updates on the
apiserver.
Example Use Case
The different pods of an application could have different utilization levels. On scale down, the
application may prefer to remove the pods with lower utilization. To avoid frequently updating
the pods, the application should update controller.kubernetes.io/pod-deletion-cost once before
issuing a scale down (setting the annotation to a value proportional to pod utilization level).
This works if the application itself controls the down scaling; for example, the driver pod of a
Spark deployment.
ReplicaSet as a Horizontal Pod Autoscaler Target
A ReplicaSet can also be a target for Horizontal Pod Autoscalers (HPA) . That is, a ReplicaSet can
be auto-scaled by an HPA. Here is an example HPA targeting the ReplicaSet we created in the
previous example.
controllers/hpa-rs.yaml
apiVersion : autoscaling/v1
kind: HorizontalPodAutoscaler
metadata :
name : frontend-scaler
spec:
scaleTargetRef :
kind: ReplicaSet
name : frontend
minReplicas : 3
maxReplicas : 10
targetCPUUtilizationPercentage : 50
Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create
the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the
replicated Pods.
kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml
Alternatively, you can use the kubectl autoscale command to accomplish the same (and it's
easier!)
kubectl autoscale rs frontend --max =10 --min =3 --cpu-percent =50
Alternatives to ReplicaSet
Deployment (recommended)
Deployment is an object which can own ReplicaSets and update them and their Pods via
declarative, server-side rolling updates. While ReplicaSets can be used independently, today
they're mainly used by Deployments as a mechanism to orchestrate Pod creation, deletion and
updates. When you use Deployments you don't have to worry about managing the ReplicaSets
that they create. Deployments own and manage their ReplicaSets. As such, it is recommended
to use Deployments when you want ReplicaSets.
Bare Pods
Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted
or terminated for any reason, such as in the case of node failure or disruptive node
maintenance, such as a kernel upgrade. For this reason, we recommend that you use a
ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process
supervisor, only it supervises multiple Pods across multiple nodes instead of individual
processes on a single node. A ReplicaSet delegates local container restarts to some agent on the
node such as Kubelet.
Job
Use a Job instead of a ReplicaSet for Pods that are expected to terminate on their own (that is,
batch jobs).
DaemonSet
Use a DaemonSet instead of a ReplicaSet for Pods that provide a machine-level function, such as
machine monitoring or machine logging. These Pods have a lifetime that is tied to a machine
lifetime: the Pod needs to be running on the machine before other Pods start, and are safe to
terminate when the machine is otherwise ready to be rebooted/shutdown.
ReplicationController
ReplicaSets are the successors to ReplicationControllers . The two serve the same purpose, and
behave similarly, except that a ReplicationController does not support set-based selector
requirements as described in the labels user guide . As such, ReplicaSets are preferred over
ReplicationControllers.
What's next
Learn about Pods .
Learn about Deployments .
Run a Stateless Application Using a Deployment , which relies on ReplicaSets to work.
ReplicaSet is a top-level resource in the Kubernetes REST API. Read the ReplicaSet object
definition to understand the API for replica sets.
Read about PodDisruptionBudget and how you can use it to manage application
availability during disruptions.
StatefulSets
A StatefulSet runs a group of Pods, and maintains a sticky identity for
each of those Pods. This
is useful for managing applications that need persistent storage or a stable, unique network
identity.
StatefulSet is the workload API object used to manage stateful applications.
Manages the deployment and scaling of a set of Pods , and provides guarantees about the ordering
and uniqueness of these Pods.
Like a Deployment , a StatefulSet manages Pods that are based on an identical container spec.
Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These pods
are created from the same spec, but are not interchangeable: each has a persistent identifier that
it maintains across any rescheduling.
If you want to use storage volumes to provide persistence for your workload, you can use a
StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to
failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods
that replace any that have failed.
Using StatefulSets
StatefulSets are valuable for applications that require one or more of the following.
Stable, unique network identifiers.
Stable, persistent storage.
Ordered, graceful deployment and scaling.
Ordered, automated rolling updates.
In the above, stable is synonymous with persistence across Pod (re)scheduling. If an application
doesn't require any stable identifiers or ordered deployment, deletion, or scaling, you should
deploy your application using a workload object that provides a set of stateless replicas.
Deployment or ReplicaSet may be better suited to your stateless needs.
Limitations
The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner
based on the requested storage class , or pre-provisioned by an admin.
Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the
StatefulSet. This is done to ensure data safety, which is generally more valuable than an
automatic purge of all related StatefulSet resources.
StatefulSets currently require a Headless Service to be responsible for the network
identity of the Pods. You are responsible for creating this Service.
StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet
is deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is
possible to scale the StatefulSet down to 0 prior to deletion.
When using Rolling Updates with the default Pod Management Policy (OrderedReady ),
it's possible to get into a broken state that requires manual intervention to repair .
Components
The example below demonstrates the components of a StatefulSet.
apiVersion : v1
kind: Service
metadata :
name : nginx
labels :
app: nginx
spec:
ports :
- port: 80
name : web
clusterIP : None
selector :
app: nginx
---
apiVersion : apps/v1
kind: StatefulSet
metadata :
name : web
spec:
selector :
matchLabels :
app: nginx # has to match .spec.template.metadata.labels
serviceName : "nginx"
replicas : 3 # by default is 1
minReadySeconds : 10 # by default is 0
template :
metadata :
labels :
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds : 10
containers :
- name : nginx
image : registry.k8s.io/nginx-slim:0.8
ports :
- containerPort : 80
name : web
volumeMounts :
- name : www
mountPath : /usr/share/nginx/html
volumeClaimTemplates :
- metadata :
name : www
spec:
accessModes : [ "ReadWriteOnce" ]
storageClassName : "my-storage-class"
resources :
requests :
storage : 1Gi
Note: This example uses the ReadWriteOnce access mode, for simplicity. For production use,
the Kubernetes project recommends using the ReadWriteOncePod access mode instead.
In the above example:
A Headless Service, named nginx , is used to control the network domain.
The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx
container will be launched in unique Pods.
The volumeClaimTemplates will provide stable storage using PersistentVolumes
provisioned by a PersistentVolume Provisioner.
The name of a StatefulSet object must be a valid DNS label .
Pod Selector
You must set the .spec.selector field of a StatefulSet to match the labels of
its .spec.template.metadata.labels . Failing to specify a matching Pod Selector will result in a
validation error during StatefulSet creation.
Volume Claim Templates
You can set the .spec.volumeClaimTemplates which can provide stable storage using
PersistentVolumes provisioned by a PersistentVolume Provisioner.
Minimum ready seconds
FEATURE STATE: Kubernetes v1.25 [stable]
.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for
which a newly created Pod should be running and ready without any of its containers crashing,
for it to be considered available. This is used to check progression of a rollout when using a
Rolling Update strategy. This field defaults to 0 (the Pod
will be considered available as soon as
it is ready). To learn more about when a Pod is considered ready, see Container Probes .
Pod Identity
StatefulSet Pods have a unique identity that consists of an ordinal, a stable network identity,
and stable storage. The identity sticks to the Pod, regardless of which node it's (re)scheduled on.
Ordinal Index
For a StatefulSet with N replicas , each Pod in the StatefulSet will be assigned an integer ordinal,
that is unique over the Set. By default, pods will be assigned ordinals from 0 up through N-1.
The StatefulSet controller will also add a pod label with this index: apps.kubernetes.io/pod-
index .
Start ordinal
FEATURE STATE: Kubernetes v1.27 [beta]
.spec.ordinals is an optional field that allows you to configure the integer ordinals assigned to
each Pod. It defaults to nil. You must enable the StatefulSetStartOrdinal feature gate to use this
field. Once enabled, you can configure the following options:
.spec.ordinals.start : If the .spec.ordinals.start field is set, Pods will be assigned ordinals
from .spec.ordinals.start up through .spec.ordinals.start + .spec.replicas - 1 .
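A minimal fragment (sketch; a start value of 5 is illustrative):
spec:
  ordinals:
    start: 5   # Pods get ordinals 5 through 5 + replicas - 1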
Stable Network ID
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal
of the Pod. The pattern for the constructed hostname is $(statefulset name)-$(ordinal) . The
example above will create three Pods named web-0,web-1,web-2 . A StatefulSet can use a
Headless Service to control the domain of its Pods. The domain managed by this Service takes
the form: $(service name).$(namespace).svc.cluster.local , where "cluster.local" is the cluster
domain. As each Pod is created, it gets a matching DNS subdomain, taking the form: $
(podname).$(governing service domain) , where the governing service is defined by the
serviceName field on the StatefulSet.
Depending on how DNS is configured in your cluster, you may not be able to look up the DNS
name for a newly-run Pod immediately. This behavior can occur when other clients in the
cluster have already sent queries for the hostname of the Pod before it was created. Negative
caching (normal in DNS) means that the results of previous failed lookups are remembered and
reused, even after the Pod is running, for at least a few seconds.
If you need to discover Pods promptly after they are created, you have a few options:
Query the Kubernetes API directly (for example, using a watch) rather than relying on
DNS lookups.
Decrease the time of caching in your Kubernetes DNS provider (typically this means
editing the config map for CoreDNS, which currently caches for 30 seconds).
As mentioned in the limitations section, you are responsible for creating the Headless Service
responsible for the network identity of the pods.
Here are some examples of choices for Cluster Domain, Service name, StatefulSet name, and
how that affects the DNS names for the StatefulSet's Pods.
Cluster Domain   Service (ns/name)   StatefulSet (ns/name)   StatefulSet Domain                Pod DNS                                        Pod Hostname
cluster.local    default/nginx       default/web             nginx.default.svc.cluster.local   web-{0..N-1}.nginx.default.svc.cluster.local   web-{0..N-1}
cluster.local    foo/nginx           foo/web                 nginx.foo.svc.cluster.local       web-{0..N-1}.nginx.foo.svc.cluster.local       web-{0..N-1}
kube.local       foo/nginx           foo/web                 nginx.foo.svc.kube.local          web-{0..N-1}.nginx.foo.svc.kube.local          web-{0..N-1}
Note: Cluster Domain will be set to cluster.local unless otherwise configured .
Stable Storage
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one
PersistentVolumeClaim. In the nginx example above, each Pod receives a single
PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If
no StorageClass is specified, then the default StorageClass will be used. When a Pod is
(re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that, the PersistentVolumes associated with the Pods'
PersistentVolume Claims are not deleted when the Pods, or StatefulSet are deleted. This must be
done manually.
Pod Name Label
When the StatefulSet controller creates a Pod, it adds a label, statefulset.kubernetes.io/pod-
name , that is set to the name of the Pod. This label allows you to attach a Service to a specific
Pod in the StatefulSet.
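For example, a Service that targets only web-0 from the earlier example might look like this (the Service name is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web-0-svc          # hypothetical name
spec:
  selector:
    statefulset.kubernetes.io/pod-name: web-0
  ports:
  - port: 80
    name: web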
Pod index label
FEATURE STATE: Kubernetes v1.28 [beta]
When the StatefulSet controller creates a Pod, the new Pod is labelled with apps.kubernetes.io/
pod-index . The value of this label is the ordinal index of the Pod. This label allows you to route
traffic to a particular pod index, filter logs/metrics using the pod index label, and more. Note the
feature gate PodIndexLabel must be enabled for this feature, and it is enabled by default.
Deployment and Scaling Guarantees
For a StatefulSet with N replicas, when Pods are being deployed, they are created
sequentially, in order from {0..N-1}.
When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
Before a scaling operation is applied to a Pod, all of its predecessors must be Running and
Ready.
Before a Pod is terminated, all of its successors must be completely shutdown.
The StatefulSet should not specify a pod.Spec.TerminationGracePeriodSeconds of 0. This
practice is unsafe and strongly discouraged. For further explanation, please refer to force
deleting StatefulSet Pods .
When the nginx example above is created, three Pods will be deployed in the order web-0,
web-1, web-2. web-1 will not be deployed before web-0 is Running and Ready , and web-2 will
not be deployed until web-1 is Running and Ready. If web-0 should fail, after web-1 is Running
and Ready, but before web-2 is launched, web-2 will not be launched until web-0 is successfully
relaunched and becomes Running and Ready.
If a user were to scale the deployed example by patching the StatefulSet such that replicas=1 ,
web-2 would be terminated first. web-1 would not be terminated until web-2 is fully shutdown
and deleted. If web-0 were to fail after web-2 has been terminated and is completely shutdown,
but prior to web-1's termination, web-1 would not be terminated until web-0 is Running and
Ready.
Pod Management Policies
StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and
identity guarantees via its .spec.podManagementPolicy field.
OrderedReady Pod Management
OrderedReady pod management is the default for StatefulSets. It implements the behavior
described above .
Parallel Pod Management
Parallel pod management tells the StatefulSet controller to launch or terminate all Pods in
parallel, and to not wait for Pods to become Running and Ready or completely terminated prior
to launching or terminating another Pod. This option only affects the behavior for scaling
operations. Updates are not affected.
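A minimal fragment (sketch):
spec:
  podManagementPolicy: Parallel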
Update strategies
A StatefulSet's .spec.updateStrategy field allows you to configure and disable automated rolling
updates for containers, labels, resource request/limits, and annotations for the Pods in a
StatefulSet. There are two possible values:
OnDelete
When a StatefulSet's .spec.updateStrategy.type is set to OnDelete , the StatefulSet
controller will not automatically update the Pods in a StatefulSet. Users must manually
delete Pods to cause the controller to create new Pods that reflect modifications made to a
StatefulSet's .spec.template .
RollingUpdate
The RollingUpdate update strategy implements automated, rolling updates for the Pods in
a
StatefulSet. This is the default update strategy.
Rolling Updates
When a StatefulSet's .spec.updateStrategy.type is set to RollingUpdate , the StatefulSet controller
will delete and recreate each Pod in the StatefulSet. It will proceed in the same order as Pod
termination (from the largest ordinal to the smallest), updating each Pod one at a time.
The Kubernetes control plane waits until an updated Pod is Running and Ready prior to
updating its predecessor. If you have set .spec.minReadySeconds (see Minimum Ready Seconds ),
the control plane additionally waits that amount of time after the Pod turns ready, before
moving on.
Partitioned rolling updates
The RollingUpdate update strategy can be partitioned, by specifying
a .spec.updateStrategy.rollingUpdate.partition . If a partition is specified, all Pods with an ordinal
that is greater than or equal to the partition will be updated when the StatefulSet's
.spec.template is updated. All Pods with an ordinal that is less than the partition will not be
updated, and, even if they are deleted, they will be recreated at the previous version. If a
StatefulSet's .spec.updateStrategy.rollingUpdate.partition is greater than its .spec.replicas ,
updates to its .spec.template will not be propagated to its Pods. In most cases you will not need
to use a partition, but they are useful if you want to stage an update, roll out a canary, or
perform a phased roll out.
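For example, with the web StatefulSet from earlier ( replicas: 3 ), a partition of 2 would mean that only web-2 is updated when the template changes, while web-0 and web-1 stay on the previous version (sketch):
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2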
Maximum unavailable Pods
FEATURE STATE: Kubernetes v1.24 [alpha]
You can control the maximum number of Pods that can be unavailable during an update by
specifying the .spec.updateStrategy.rollingUpdate.maxUnavailable field. The value can be an
absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). Absolute
number is calculated from the percentage value by rounding it up. This field cannot be 0. The
default setting is 1.
This field applies to all Pods in the range 0 to replicas - 1 . If there is any unavailable Pod in the
range 0 to replicas - 1 , it will be counted towards maxUnavailable .
Note: The maxUnavailable field is in Alpha stage and it is honored only by API servers that are
running with the MaxUnavailableStatefulSet feature gate enabled.
Forced rollback
When using Rolling Updates with the default Pod Management Policy (OrderedReady ), it's
possible to get into a broken state that requires manual intervention to repair.
If you update the Pod template to a configuration that never becomes Running and Ready (for
example, due to a bad binary or application-level configuration error), StatefulSet will stop the
rollout and wait.
In this state, it's not enough to revert the Pod template
to a good configuration. Due to a known
issue , StatefulSet will continue to wait for the broken Pod to become Ready (which never
happens) before it will attempt to revert it back to the working configuration.
After reverting the template, you must also delete any Pods that StatefulSet had already
attempted to run with the bad configuration. StatefulSet will then begin to recreate the Pods
using the reverted template.
PersistentVolumeClaim retention
FEATURE STATE: Kubernetes v1.27 [beta]
The optional .spec.persistentVolumeClaimRetentionPolicy field controls if and how PVCs are
deleted during the lifecycle of a StatefulSet. You must enable the StatefulSetAutoDeletePVC
feature gate on the API server and the controller manager to use this field. Once enabled, there
are two policies you can configure for each StatefulSet:
whenDeleted
configures the volume retention behavior that applies when the StatefulSet is deleted
whenScaled
configures the volume retention behavior that applies when the replica count of the
StatefulSet is reduced; for example, when scaling down the set.
For each policy that you can configure, you can set the value to either Delete or Retain .
Delete
The PVCs created from the StatefulSet volumeClaimTemplate are deleted for each Pod
affected by the policy. With the whenDeleted policy all PVCs from the
volumeClaimTemplate are deleted after their Pods have been deleted. With the
whenScaled policy, only PVCs corresponding to Pod replicas being scaled down are
deleted, after their Pods have been deleted.
Retain (default)
PVCs from the volumeClaimTemplate are not affected when their Pod is deleted. This is
the behavior before this new feature.
Bear in mind that these policies only apply when Pods are being removed due to the
StatefulSet being deleted or scaled down. For example, if a Pod associated with a StatefulSet
fails due to node failure, and the control plane creates a replacement Pod, the StatefulSet retains
the existing PVC. The existing volume is unaffected, and the cluster will attach it to the node
where the new Pod is about to launch.
The default for policies is Retain , matching the StatefulSet behavior before this new feature.
Here is an example policy.
apiVersion : apps/v1
kind: StatefulSet
...
spec:
persistentVolumeClaimRetentionPolicy :
whenDeleted : Retain
whenScaled : Delete
...
The StatefulSet controller adds owner references to its PVCs, which are then deleted by the
garbage collector after the Pod is terminated. This enables the Pod to cleanly unmount all
volumes before the PVCs are deleted (and before the backing PV and volume are deleted,
depending on the retain policy). When you set the whenDeleted policy to Delete , an owner
reference to the StatefulSet instance is placed on all PVCs associated with that StatefulSet.
The whenScaled policy must delete PVCs only when a Pod is scaled down, and not when a Pod
is deleted for another reason. When reconciling, the StatefulSet controller compares its desired
replica count to the actual Pods present on the cluster. Any StatefulSet Pod whose ordinal is greater
than the replica count is condemned and marked for deletion. If the whenScaled policy is
Delete , the condemned Pods are first set as owners to the associated StatefulSet template PVCs,
before the Pod is deleted. This causes the PVCs to be garbage collected after only the
condemned Pods have terminated.
This means that if the controller crashes and restarts, no Pod will be deleted before its owner
reference has been updated appropriate to the policy. If a condemned Pod is force-deleted while
the controller is down, the owner reference may or may not have been set up, depending on
when the controller crashed. It may take several reconcile loops to update the owner references,
so some condemned Pods may have set up owner references and others may not. For this
reason we recommend waiting for the controller to come back up, which will verify owner
references before terminating Pods. If that is not possible, the operator should verify the owner
references on PVCs to ensure the expected objects are deleted when Pods are force-deleted.
Replicas
.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.
Should you manually scale a StatefulSet, for example via kubectl scale statefulset statefulset --
replicas=X , and then you update that StatefulSet based on a manifest (for example: by running
kubectl apply -f statefulset.yaml ), then applying that manifest overwrites the manual scaling
that you previously did.
If a HorizontalPodAutoscaler (or any similar API for horizontal scaling) is managing scaling for
a Statefulset, don't set .spec.replicas . Instead, allow the Kubernetes control plane to manage
the .spec.replicas field automatically.
What's next
Learn about Pods .
Find out how to use StatefulSets
Follow an example of deploying a stateful application .
Follow an example of deploying Cassandra with Stateful Sets .
Follow an example of running a replicated stateful application .
Learn how to scale a StatefulSet .
Learn what's involved when you delete a StatefulSet .
Learn how to configure a Pod to use a volume for storage .
Learn how to configure a Pod to use a PersistentVolume for storage .
StatefulSet is a top-level resource in the Kubernetes REST API. Read the StatefulSet object
definition to understand the API for stateful sets.
Read about PodDisruptionBudget and how you can use it to manage application
availability during disruptions.
DaemonSet
A DaemonSet defines Pods that provide node-local facilities. These might be fundamental to the
operation of your cluster, such as a networking helper tool, or be part of an add-on.
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to
the
cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage
collected. Deleting a DaemonSet will clean up the Pods it created.
Some typical uses of a DaemonSet are:
running a cluster storage daemon on every node
running a logs collection daemon on every node
running a node monitoring daemon on every node
In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A
more complex setup might use multiple DaemonSets for a single type of daemon, but with
different flags and/or different memory and cpu requests for different hardware types.
Writing a DaemonSet Spec
Create a DaemonSet
You can describe a DaemonSet in a YAML file. For example, the daemonset.yaml file below
describes a DaemonSet that runs the fluentd-elasticsearch Docker image:
controllers/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      # it may be desirable to set a high priority class to ensure that a DaemonSet Pod
      # preempts running Pods
      # priorityClassName: important
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
Create a DaemonSet based on the YAML file:
kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
Required Fields
As with all other Kubernetes config, a DaemonSet needs apiVersion , kind, and metadata fields.
For general information about working with config files, see running stateless applications and
object management using kubectl .
The name of a DaemonSet object must be a valid DNS subdomain name .
A DaemonSet also needs a .spec section.
Pod Template
The .spec.template is one of the required fields in .spec .
The .spec.template is a pod template . It has exactly the same schema as a Pod, except it is nested
and does not have an apiVersion or kind.
In addition to required fields for a Pod, a Pod template in a DaemonSet has to specify
appropriate labels (see pod selector ).
A Pod Template in a DaemonSet must have a RestartPolicy equal to Always , or be unspecified,
which defaults to Always .
Pod Selector
The .spec.selector field is a pod selector. It works the same as the .spec.selector of a Job.
You must specify a pod selector that matches the labels of the .spec.template . Also, once a
DaemonSet is created, its .spec.selector can not be mutated. Mutating the pod selector can lead
to the unintentional orphaning of Pods, and it was found to be confusing to users.
The .spec.selector is an object consisting of two fields:
matchLabels - works the same as the .spec.selector of a ReplicationController.
matchExpressions - allows you to build more sophisticated selectors by specifying a key, a list of values, and an operator that relates the key and values.
When both are specified, the result is ANDed.
The .spec.selector must match the .spec.template.metadata.labels . Config with these two not
matching will be rejected by the API.
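For illustration, a selector combining both fields might look like the following sketch; the label keys and values here are hypothetical, and the Pod template's labels must satisfy both clauses:

spec:
  selector:
    matchLabels:
      name: node-agent          # must also appear in .spec.template.metadata.labels
    matchExpressions:
    - key: tier
      operator: In
      values:
      - logging
      - monitoring
  template:
    metadata:
      labels:
        name: node-agent
        tier: logging           # satisfies the matchExpressions clause above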
Running Pods on select Nodes
If you specify a .spec.template.spec.nodeSelector , then the DaemonSet controller will create
Pods on nodes which match that node selector. Likewise, if you specify a .spec.template.spec.affinity, then the DaemonSet controller will create Pods on nodes which match that node affinity. If you do not specify either, then the DaemonSet controller will create
Pods on all nodes.
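As a minimal sketch, restricting the DaemonSet's Pods to nodes carrying a particular label could look like this (the disktype: ssd label is an assumption; use labels that actually exist on your nodes):

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd    # only nodes labelled disktype=ssd receive a Pod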
How Daemon Pods are scheduled
A DaemonSet can be used to ensure that all eligible nodes run a copy of a Pod. The DaemonSet
controller creates a Pod for each eligible node and adds the spec.affinity.nodeAffinity field of the
Pod to match the target host. After the Pod is created, | 478 |
the default scheduler typically takes over
and then binds the Pod to the target host by setting the .spec.nodeName field. If the new Pod
cannot fit on the node, the default scheduler may preempt (evict) some of the existing Pods
based on the priority of the new Pod.
Note: If it's important that the DaemonSet pod run on each node, it's often desirable to set
the .spec.template.spec.priorityClassName of the DaemonSet to a PriorityClass with a higher
priority to ensure that this eviction occurs.
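A hedged sketch of that pattern; the class name and value below are illustrative, not prescriptive:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: daemonset-critical      # hypothetical name, referenced from the DaemonSet
value: 1000000                  # higher than the Pods it may need to preempt
globalDefault: false
description: "Priority class for node-level DaemonSet Pods."

The DaemonSet's Pod template would then set priorityClassName: daemonset-critical.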
| 479 |
The user can specify a different scheduler for the Pods of the DaemonSet, by setting the
.spec.template.spec.schedulerName field of the DaemonSet.
The original node affinity specified at the .spec.template.spec.affinity.nodeAffinity field (if
specified) is taken into consideration by the DaemonSet controller when evaluating the eligible
nodes, but is replaced on the created Pod with the node affinity that matches the name of the
eligible node.
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name
Taints and tolerations
The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:
Tolerations for DaemonSet pods:

node.kubernetes.io/not-ready (effect: NoExecute) - DaemonSet Pods can be scheduled onto nodes that are not healthy or ready to accept Pods. Any DaemonSet Pods running on such nodes will not be evicted.

node.kubernetes.io/unreachable (effect: NoExecute) - DaemonSet Pods can be scheduled onto nodes that are unreachable from the node controller. Any DaemonSet Pods running on such nodes will not be evicted.

node.kubernetes.io/disk-pressure (effect: NoSchedule) - DaemonSet Pods can be scheduled onto nodes with disk pressure issues.

node.kubernetes.io/memory-pressure (effect: NoSchedule) - DaemonSet Pods can be scheduled onto nodes with memory pressure issues.

node.kubernetes.io/pid-pressure (effect: NoSchedule) - DaemonSet Pods can be scheduled onto nodes with process pressure issues.

node.kubernetes.io/unschedulable (effect: NoSchedule) - DaemonSet Pods can be scheduled onto nodes that are unschedulable.

node.kubernetes.io/network-unavailable (effect: NoSchedule) - Only added for DaemonSet Pods that request host networking, i.e., Pods having spec.hostNetwork: true. Such DaemonSet Pods can be scheduled onto nodes with unavailable network.
You can add your own tolerations to the Pods of a DaemonSet as well, by defining these in the Pod template of the DaemonSet, as in the sketch below.
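A hedged sketch of a custom toleration in a DaemonSet Pod template; the taint key and value are assumptions for illustration:

spec:
  template:
    spec:
      tolerations:
      - key: example.com/dedicated    # hypothetical taint applied to some nodes
        operator: Equal
        value: logging
        effect: NoSchedule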
Because the DaemonSet controller sets the node.kubernetes.io/unschedulable:NoSchedule
toleration automatically, Kubernetes can run DaemonSet Pods on nodes that are marked as
unschedulable .
If you use a DaemonSet to provide an important node-level function, such as cluster
networking , it is helpful that Kubernetes places DaemonSet Pods on nodes before they are
ready. For example, without that special toleration, you could end up in a deadlock situation
where the node is not marked as ready because the network plugin is not running there, and at
the same time the network plugin is not running on that node because the node is not yet
ready.
Communicating with Daemon Pods
Some possible patterns for communicating with Pods in a DaemonSet are:
Push : Pods in the DaemonSet are configured to send updates to another service, such as
a stats database. They do not have clients.
NodeIP and Known Port : Pods in the DaemonSet can use a hostPort , so that the pods
are reachable via the node IPs. Clients know the list of node IPs somehow, and know the
port by convention.
DNS: Create a headless service with the same pod selector, and then discover DaemonSet Pods using the endpoints resource or retrieve multiple A records from DNS (see the sketch after this list).
Service: Create a service with the same Pod selector, and use the service to reach a daemon on a random node. (There is no way to reach a specific node this way.)
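A minimal sketch of the headless-service pattern, assuming the fluentd-elasticsearch DaemonSet above and a hypothetical metrics port exposed by each daemon Pod:

apiVersion: v1
kind: Service
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
spec:
  clusterIP: None               # headless: DNS returns one A record per ready Pod
  selector:
    name: fluentd-elasticsearch # same labels as the DaemonSet's Pod template
  ports:
  - name: metrics               # hypothetical port; adjust to what the daemon actually serves
    port: 24231
    targetPort: 24231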
Updating a DaemonSet
If node labels are changed, the DaemonSet will promptly add Pods to newly matching nodes and delete Pods from newly not-matching nodes.
You can modify the Pods that a DaemonSet creates. However, Pods do not allow all fields to be
updated. Also, the DaemonSet controller will use the original template the next time a node
(even with the same name) is created.
You can delete a DaemonSet. If you specify --cascade=orphan with kubectl , then the Pods will
be left on the nodes. If you subsequently create a new DaemonSet with the same selector, the
new DaemonSet adopts the existing Pods. If any Pods need replacing the DaemonSet replaces
them according to its updateStrategy .
You can perform a rolling update on a DaemonSet.
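A hedged sketch of a typical update flow, assuming the fluentd-elasticsearch DaemonSet above with the default RollingUpdate strategy (the new image tag is hypothetical):

# Point the DaemonSet at a newer image; the controller replaces Pods node by node.
kubectl set image daemonset/fluentd-elasticsearch \
    fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 \
    --namespace=kube-system

# Watch progress and inspect revision history.
kubectl rollout status daemonset/fluentd-elasticsearch --namespace=kube-system
kubectl rollout history daemonset/fluentd-elasticsearch --namespace=kube-system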
Alternatives to DaemonSet
Init scripts
It is certainly possible to run daemon processes by directly starting them on a node (e.g. using
init, upstartd , or systemd ). This is perfectly fine. However, there are several advantages to
running such processes via a DaemonSet:
Ability to monitor and manage logs for daemons in the same | 484 |
way as applications.
Same config language and tools (e.g. Pod templates, kubectl ) for daemons and
applications.
Running daemons in containers with resource limits increases isolation between daemons
from app containers. However, this can also be accomplished by running the daemons in
a container but not in a Pod.
| 485 |
Bare Pods
It is possible to create Pods directly which specify a particular node to run on. However, a
DaemonSet replaces Pods that are deleted or terminated for any reason, such as in the case of
node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, you
should use a DaemonSet rather than creating individual Pods.
Static Pods
It is possible to create Pods by writing a file to a certain directory watched by Kubelet. These
are called static pods . Unlike DaemonSet, static Pods cannot be managed with kubectl or other
Kubernetes API clients. Static Pods do not depend on the apiserver, making them useful in
cluster bootstrapping cases. Also, static Pods may be deprecated in the future.
Deployments
DaemonSets are similar to Deployments in that they both create Pods, and those Pods have
processes which are not expected to terminate (e.g. web servers, storage servers).
Use a Deployment for stateless services, like frontends, where scaling up and down the number of replicas and rolling out updates are more important than controlling exactly which host the Pod runs on. Use a DaemonSet when it is important that a copy of a Pod always runs on all or certain hosts, and when the DaemonSet provides node-level functionality that allows other Pods to run correctly on that particular node.
For example, network plugins often include a component that runs as a DaemonSet. The
DaemonSet component makes sure that the node where it's running has working cluster
networking.
What's next
Learn about Pods .
Learn about static Pods , which are useful for running Kubernetes control plane
components.
Find out how to use DaemonSets
Perform a rolling update on a DaemonSet
Perform a rollback on a DaemonSet (for example, if a roll out didn't work how you
expected).
Understand how Kubernetes assigns Pods to Nodes .
Learn about device plugins and add ons , which often run as DaemonSets.
DaemonSet is a top-level resource in the Kubernetes REST API. Read the DaemonSet object definition to understand the API for daemon sets.
Jobs
Jobs represent one-off tasks that run to completion and then stop.
A Job creates one or more Pods and will continue to retry execution of the Pods until a
specified number of them successfully terminate. As pods successfully complete, the Job tracks
the successful completions. When a specified number of successful completions is reached, the
| 488 |
task (i.e., the Job) is complete. Deleting a Job will clean up the Pods it created. Suspending a Job will
delete its active Pods until the Job is resumed again.
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job
object will start a new Pod if the first Pod fails or is deleted (for example due to a node
hardware failure or a node reboot).
You can also use a Job to run multiple Pods in parallel.
If you want to run a Job (either a single task, or several in parallel) on a schedule, see CronJob .
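For orientation, a minimal CronJob sketch that wraps the pi Job shown below on an hourly schedule might look like this (the name and schedule are illustrative; see the CronJob page for authoritative details):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: pi-hourly             # hypothetical name
spec:
  schedule: "0 * * * *"       # run at the top of every hour
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl:5.34.0
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never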
Running an example Job
Here is an example Job config. It computes π to 2000 places and prints it out. It takes around 10s
to complete.
controllers/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
You can run the example with this command:
kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml
The output is similar to this:
job.batch/pi created
Check on the status of the Job with kubectl :
kubectl describe job pi
kubectl get job pi -o yaml
Name:             pi
Namespace:        default
Selector:         batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
Labels:           batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
                  batch.kubernetes.io/job-name=pi
...
Annotations:      batch.kubernetes.io/job-tracking: ""
Parallelism:      1
Completions:      1
Start Time:       Mon, 02 Dec 2019 15:20:11 +0200
Completed At:     Mon, 02 Dec 2019 15:21:16 +0200
Duration:         65s
Pods Statuses:    0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
           batch.kubernetes.io/job-name=pi
  Containers:
   pi:
    Image:      perl:5.34.0
    Port:       <none>
    Host Port:  <none>
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  21s   job-controller  Created pod: pi-xf9p4
  Normal  Completed         18s   job-controller  Job completed
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    batch.kubernetes.io/job-tracking: ""
  ...
  creationTimestamp: "2022-11-10T17:53:53Z"
  generation: 1
  labels:
    batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
    batch.kubernetes.io/job-name: pi
  name: pi
  namespace: default
  resourceVersion: "4751"
  uid: 204fb678-040b-497f-9266-35ffa8716d14
spec:
  backoffLimit: 4
  completionMode: NonIndexed
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223
        batch.kubernetes.io/job-name: pi
    spec:
      containers:
      - command:
        - perl
        - -Mbignum=bpi
        - -wle
        - print bpi(2000)
        image: perl:5.34.0
        imagePullPolicy: IfNotPresent
        name: pi
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  active: 1
  ready: 0
  startTime: "2022-11-10T17:53:57Z"
  uncountedTerminatedPods: {}
To view completed Pods of a Job, use kubectl get pods .
To list all the Pods that belong to a Job in a machine readable form, you can use a command like
this:
pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
The output is similar to this:
pi-5rwd7
Here, the selector is the same as the selector for the Job. The --output=jsonpath option specifies an expression that extracts only the name of each Pod in the returned list.
View the standard output of one of the pods:
kubectl logs $pods
Another way to view the logs of a Job:
kubectl logs jobs/pi
The output is similar to this:
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986
280348253421170679821480865132823066470938446095505822317253594081284811174502841027
019385211055596446229489549303819644288109756659334461284756482337867831652712019091
456485669234603486104543266482133936072602491412737245870066063155881748815209209628
292540917153643678925903600113305305488204665213841469519415116094330572703657595919
530921861173819326117931051185480744623799627495673518857527248912279381830119491298
336733624406566430860213949463952247371907021798609437027705392171762931767523846748
184676694051320005681271452635608277857713427577896091736371787214684409012249534301
465495853710507922796892589235420199561121290219608640344181598136297747713099605187
072113499999983729780499510597317328160963185950244594553469083026425223082533446850
352619311881710100031378387528865875332083814206171776691473035982534904287554687311
59562863882353787593751957781857780532171226806613001927876611195 | 495 |
9092164201989380952
572010654858632788659361533818279682303019520353018529689957736225994138912497217752
834791315155748572424541506959508295331168617278558890750983817546374649393192550604
009277016711390098488240128583616035637076601047101819429555961989467678374494482553
797747268471040475346462080466842590694912933136770289891521047521620569660240580381
501935112533824300355876402474964732639141992726042699227967823547816360093417216412
199245863150302861829745557067498385054945885869269956909272107975093029553211653449
872027559602364806654991198818347977535663698074265425278625518184175746728909777727
938000816470600161452491921732172147723501414419735685481613611573525521334757418494
684385233239073941433345477624168625189835694855620992192221842725502542568876717904
946016534668049886272327917860857843838279679766814541009538837863609506800642251252
051173929848960841284886269456042419652850222106611863067442786220391949450471237137
869609563643719172874677646575739624138908658 | 496 |
326459958133904780275901
Writing a Job spec
As with all other Kubernetes config, a Job needs apiVersion , kind, and metadata fields.
When the control plane creates new Pods for a Job, the .metadata.name of the Job is part of the
basis for naming those Pods. The name of a Job must be a valid DNS subdomain value, but this
can produce unexpected results for the Pod hostnames. For best compatibility, the name should
follow the more restrictive rules for a DNS label . Even when the name is a DNS subdomain, the
name must be no longer than 63 characters.
A Job also needs a .spec section .
Job Labels
Job labels will have batch.kubernetes.io/ prefix for job-name and controller-uid .
Pod Template
The .spec.template is the only required field of the .spec .
The .spec.template is a pod template . It has exactly the same schema as a Pod, except it is nested
and does not have an apiVersion or kind.
In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy.
Only a RestartPolicy equal to Never or OnFailure is allowed.
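For instance, a hedged variant of the pi Job above that restarts the failed container in place (rather than relying on a replacement Pod) could set OnFailure; the name is hypothetical:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-onfailure            # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: OnFailure  # the kubelet restarts the container in the same Pod on failure
  backoffLimit: 4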
Pod selector
The .spec.selector field is optional. In almost all cases you should not specify it. See section
specifying your own pod selector .
Parallel execution for Jobs
There are three main types of task suitable to run as a Job:
Non-parallel Jobs
normally, only one Pod is started, unless the Pod fails.
the Job is complete as soon as its Pod terminates successfully.
Parallel Jobs with a fixed completion count :
specify a non-zero positive value for .spec.completions .
the Job represents the overall task, and is complete when there are
.spec.completions successful Pods.
when using .spec.completionMode="Indexed" , each Pod gets a different index in the
range 0 to .spec.completions-1 .
Parallel Jobs with a work queue :
do not specify .spec.completions , default to .spec.parallelism .
the Pods must coordinate amongst themselves or an external service to determine
what each should work on. For example, a Pod might fetch a batch of up to N items
from the work queue.
each Pod is indepe | 499 |