1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice

To see logs for the init containers in this Pod, run:

kubectl logs myapp-pod -c init-myservice # Inspect the first init container
kubectl logs myapp-pod -c init-mydb      # Inspect the second init container

At this point, those init containers will be waiting to discover Services named mydb and myservice.

Here's a configuration you can use to make those Services appear:

---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377

To create the mydb and myservice services:

kubectl apply -f services.yaml

The output is similar to this:

service/myservice created
service/mydb created

You'll then see that those init containers complete, and that the myapp-pod Pod moves into the Running state:

kubectl get -f myapp.yaml

The output is similar to this:

NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          9m

This simple example should provide some inspiration for you to create your own init containers. What's next contains a link to a more detailed example.

Detailed behavior

During Pod startup, the kubelet delays running init containers until the networking and storage are ready. Then the kubelet runs the Pod's init containers in the order they appear in the Pod's spec. Each init container must exit successfully before the next container starts. If a container fails to start due to the runtime or exits with failure, it is retried
according to the Pod restartPolicy . However, if the Pod restartPolicy is set to Always, the init containers use restartPolicy OnFailure. A Pod cannot be Ready until all init containers have succeeded. The ports on an init container are not aggregated under a Service. A Pod that is initializing is in the Pending state but should have a condition Initialized set to false. If the Pod restarts , or is restarted, all init containers must execute again. Changes to the init container spec are limited to the container image field. Altering an init container image field is equivalent to restarting the Pod. Because init containers can be restarted, retried, or re-executed, init container code should be idempotent. In particular, code that writes to files on EmptyDirs should be prepared for the possibility that an output file already exists. Init containers have all of the fields of an app container. However, Kubernetes prohibits readinessProbe from being used because init containers cann
ot define readiness distinct from completion. This is enforced during validation
Use activeDeadlineSeconds on the Pod to prevent init containers from failing forever. The active deadline includes init containers. However, it is recommended to use activeDeadlineSeconds only if teams deploy their application as a Job, because activeDeadlineSeconds has an effect even after the init containers have finished: a Pod that is already running correctly would be killed by activeDeadlineSeconds once the deadline is reached. The name of each app and init container in a Pod must be unique; a validation error is thrown for any container sharing a name with another.

Resource sharing within containers

Given the order of execution for init, sidecar and app containers, the following rules for resource usage apply:

• The highest of any particular resource request or limit defined on all init containers is the effective init request/limit. If any resource has no resource limit specified, this is considered as the highest limit.
• The Pod's effective request/limit for a resource is the higher of: the sum of all app containers' requests/limits for that resource, and the effective init request/limit for that resource.
• Scheduling is done based on effective requests/limits, which means init containers can reserve resources for initialization that are not used during the life of the Pod.
• The Pod's effective QoS (quality of service) tier is the QoS tier for init containers and app containers alike.
• Quota and limits are applied based on the effective Pod request and limit.
• Pod level control groups (cgroups) are based on the effective Pod request and limit, the same as the scheduler.
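As an illustration of how the effective request is computed, here is a minimal sketch (the Pod name, images, and values are illustrative assumptions, not taken from this page):

apiVersion: v1
kind: Pod
metadata:
  name: init-resources-demo     # hypothetical name
spec:
  initContainers:
  - name: setup
    image: busybox:1.36
    command: ['sh', '-c', 'echo preparing; sleep 2']
    resources:
      requests:
        cpu: "200m"             # highest init request -> effective init CPU request
        memory: "128Mi"
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
# Effective Pod request: cpu = max(100m, 200m) = 200m, memory = max(256Mi, 128Mi) = 256Mi

The scheduler reserves 200m of CPU and 256Mi of memory for this Pod, even though only 100m of CPU is requested once initialization completes.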
Pod restart reasons

A Pod can restart, causing re-execution of init containers, for the following reasons:

• The Pod infrastructure container is restarted. This is uncommon and would have to be done by someone with root access to nodes.
• All containers in a Pod are terminated while restartPolicy is set to Always, forcing a restart, and the init container completion record has been lost due to garbage collection.

The Pod will not be restarted when the init container image is changed, or the init container completion record has been lost due to garbage collection. This applies for Kubernetes v1.20 and later. If you are using an earlier version of Kubernetes, consult the documentation for the version you are using.

What's next

Learn more about the following:

• Creating a Pod that has an init container.
• Debug init containers.
• Overview of kubelet and kubectl.
• Types of probes: liveness, readiness, startup probe.
• Sidecar containers.
Sidecar Containers

FEATURE STATE: Kubernetes v1.29 [beta]

Sidecar containers are secondary containers that run along with the main application container within the same Pod. These containers are used to enhance or extend the functionality of the main application container by providing additional services or functionality, such as logging, monitoring, security, or data synchronization, without directly altering the primary application code.

Enabling sidecar containers

Enabled by default with Kubernetes 1.29, a feature gate named SidecarContainers allows you to specify a restartPolicy for containers listed in a Pod's initContainers field. These restartable sidecar containers are independent of other init containers and of the main application container within the same Pod. They can be started, stopped, or restarted without affecting the main application container and other init containers.

Sidecar containers and Pod lifecycle

If an init container is created with its restartPolicy set to Always, it will start and remain running during the entire life of the Pod. This can be helpful for running supporting services separated from the main application containers. If a readinessProbe is specified for this init container, its result will be used to determine the ready state of the Pod. Since these containers are defined as init containers, they benefit from the same ordering and sequential guarantees as other init containers, allowing them to be mixed with other init containers into complex Pod initialization flows.
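For instance, here is a minimal sketch of a sidecar-style init container whose readiness feeds into the Pod's ready state (the names, image, and port are illustrative assumptions, not taken from this page):

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-readiness-demo  # hypothetical name
spec:
  initContainers:
  - name: proxy                 # sidecar: restartPolicy Always keeps it running
    image: nginx:1.25           # illustrative image
    restartPolicy: Always
    ports:
    - containerPort: 80
    readinessProbe:             # its readiness contributes to the Pod's ready state
      httpGet:
        path: /
        port: 80
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'sleep 3600']

The kubelet starts the app container only after the proxy sidecar reports started, and the Pod becomes Ready only once the sidecar's readinessProbe succeeds.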
Compared to regular init containers, sidecars defined within initContainers continue to run after they have started. This is important when there is more than one entry inside .spec.initContainers for a Pod. After a sidecar-style init container is running (the kubelet has set the started status for that init container to true), the kubelet then starts the next init container from the ordered .spec.initContainers list. That status either becomes true because there is a process running in the container and no startup probe is defined, or as a result of its startupProbe succeeding.

Here's an example of a Deployment with two containers, one of which is a sidecar:

application/deployment-sidecar.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: alpine:latest
        command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
        volumeMounts:
        - name: data
          mountPath: /opt
      initContainers:
      - name: logshipper
        image: alpine:latest
        restartPolicy: Always
        command: ['sh', '-c', 'tail -F /opt/logs.txt']
        volumeMounts:
        - name: data
          mountPath: /opt
      volumes:
      - name: data
        emptyDir: {}

This feature is also useful for running Jobs with sidecars, as the sidecar container will not prevent the Job from completing after the main container has finished.

Here's an example of a Job with two containers, one of which is a sidecar:

application/job/job-sidecar.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    spec:
      containers:
      - name: myjob
        image: alpine:latest
        command: ['sh', '-c', 'echo "logging" > /opt/logs.txt']
        volumeMounts:
        - name: data
          mountPath: /opt
      initContainers:
      - name: logshipper
        image: alpine:latest
        restartPolicy: Always
        command: ['sh', '-c', 'tail -F /opt/logs.txt']
        volumeMounts:
        - name: data
          mountPath: /opt
      restartPolicy: Never
      volumes:
      - name: data
        emptyDir: {}

Differences from regular containers

Sidecar containers run alongside regular containers in the same pod. However, they do not execute the primary application logic; instead, they provide supporting functionality to the main application.

Sidecar containers have their own independent lifecycles. They can be started, stopped, and restarted independently of regular containers. This means you can update, scale, or maintain sidecar containers without affecting the primary application.

Sidecar containers share the same network and storage namespaces with the primary container. This co-location allows them to interact closely and share resources.

Differences from init containers

Sidecar containers work alongside the main container, extending its functionality and providing additional services. Sidecar containers run concurrently with t
he main application container. They are active throughout the lifecycle of the pod and can be started and stopped independently of the main container. Unlike init containers , sidecar containers support probes to control their lifecycle. These containers can interact directly with the main application containers, sharing the same network namespace, filesystem, and environment variables. They work closely together to provide additional functionality. Resource sharing within containers Given the order of execution for init, sidecar and app containers, the following rules for resource usage apply: The highest of any particular resource request or limit defined on all init containers is the effective init request/limit . If any resource has no resource limit specified this is considered as the highest limit. The Pod's effective request/limit for a resource is the sum of pod overhead and the higher of: the sum of all non-init containers(app and sidecar containers) request/limit for a res
ource the effective init request/limit for a resource Scheduling is done based on effective requests/limits, which means init containers can reserve resources for initialization that are not used during the life of the Pod. The QoS (quality of service) tier of the Pod's effective QoS tier is the QoS tier for all init, sidecar and app containers alike.• • ◦ ◦ •
Quota and limits are applied based on the effective Pod request and limit. Pod level control groups (cgroups) are based on the effective Pod request and limit, the same as the scheduler. What's next Read a blog post on native sidecar containers . Read about creating a Pod that has an init container . Learn about the types of probes : liveness, readiness, startup probe. Learn about pod overhead . Ephemeral Containers FEATURE STATE: Kubernetes v1.25 [stable] This page provides an overview of ephemeral containers: a special type of container that runs temporarily in an existing Pod to accomplish user-initiated actions such as troubleshooting. You use ephemeral containers to inspect services rather than to build applications. Understanding ephemeral containers Pods are the fundamental building block of Kubernetes applications. Since Pods are intended to be disposable and replaceable, you cannot add a container to a Pod once it has been created. Instead, you usually delete and replace Pod
s in a controlled fashion using deployments . Sometimes it's necessary to inspect the state of an existing Pod, however, for example to troubleshoot a hard-to-reproduce bug. In these cases you can run an ephemeral container in an existing Pod to inspect its state and run arbitrary commands. What is an ephemeral container? Ephemeral containers differ from other containers in that they lack guarantees for resources or execution, and they will never be automatically restarted, so they are not appropriate for building applications. Ephemeral containers are described using the same ContainerSpec as regular containers, but many fields are incompatible and disallowed for ephemeral containers. Ephemeral containers may not have ports, so fields such as ports , livenessProbe , readinessProbe are disallowed. Pod resource allocations are immutable, so setting resources is disallowed. For a complete list of allowed fields, see the EphemeralContainer reference documentation . Ephemeral container
s are created using a special ephemeralcontainers handler in the API rather than by adding them directly to pod.spec, so it's not possible to add an ephemeral container using kubectl edit. Like regular containers, you may not change or remove an ephemeral container after you have added it to a Pod.

Note: Ephemeral containers are not supported by static pods.
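In practice, the usual way to add an ephemeral container is the kubectl debug command, which calls that handler for you. A minimal sketch (the Pod name, image, and target container are illustrative assumptions):

# Attach an interactive debugging container to an existing Pod
kubectl debug -it myapp-pod --image=busybox:1.36 --target=myapp

The --target parameter places the new container in the target container's process namespace, which is useful when the target image lacks a shell or debugging utilities.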
Uses for ephemeral containers Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities. In particular, distroless images enable you to deploy minimal container images that reduce attack surface and exposure to bugs and vulnerabilities. Since distroless images do not include a shell or any debugging utilities, it's difficult to troubleshoot distroless images using kubectl exec alone. When using ephemeral containers, it's helpful to enable process namespace sharing so you can view processes in other containers. What's next Learn how to debug pods using ephemeral containers . Disruptions This guide is for application owners who want to build highly available applications, and thus need to understand what types of disruptions can happen to Pods. It is also for cluster administrators who want to perform automated cluster actions, like upgrading and autoscaling
clusters. Voluntary and involuntary disruptions Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error. We call these unavoidable cases involuntary disruptions to an application. Examples are: a hardware failure of the physical machine backing the node cluster administrator deletes VM (instance) by mistake cloud provider or hypervisor failure makes VM disappear a kernel panic the node disappears from the cluster due to cluster network partition eviction of a pod due to the node being out-of-resources . Except for the out-of-resources condition, all these conditions should be familiar to most users; they are not specific to Kubernetes. We call other cases voluntary disruptions . These include both actions initiated by the application owner and those initiated by a Cluster Administrator. Typical application owner actions include: deleting the deployment or other controller that manages the pod updating
a deployment's pod template causing a restart
directly deleting a pod (e.g. by accident)
Cluster administrator actions include: Draining a node for repair or upgrade. Draining a node from a cluster to scale the cluster down (learn about Cluster Autoscaling ). Removing a pod from a node to permit something else to fit on that node. These actions might be taken directly by the cluster administrator, or by automation run by the cluster administrator, or by your cluster hosting provider. Ask your cluster administrator or consult your cloud provider or distribution documentation to determine if any sources of voluntary disruptions are enabled for your cluster. If none are enabled, you can skip creating Pod Disruption Budgets. Caution: Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, deleting deployments or pods bypasses Pod Disruption Budgets. Dealing with disruptions Here are some ways to mitigate involuntary disruptions: Ensure your pod requests the resources it needs. Replicate your application if you need higher availability. (Learn
about running replicated stateless and stateful applications.) For even higher availability when running replicated applications, spread applications across racks (using anti-affinity ) or across zones (if using a multi-zone cluster .) The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are no automated voluntary disruptions (only user-triggered ones). However, your cluster administrator or hosting provider may run some additional services which cause voluntary disruptions. For example, rolling out node software updates can cause voluntary disruptions. Also, some implementations of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes. Your cluster administrator or hosting provider should have documented what level of voluntary disruptions, if any, to expect. Certain configuration options, such as using PriorityClasses in your pod spec can also cause voluntary (and involuntary) disruptions. Pod disruption budgets F
EATURE STATE: Kubernetes v1.21 [stable] Kubernetes offers features to help you run highly available applications even when you introduce frequent voluntary disruptions. As an application owner, you can create a PodDisruptionBudget (PDB) for each application. A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total. Cluster managers and hosting providers should use tools which respect PodDisruptionBudgets by calling the Eviction API instead of directly deleting pods or deployments.• • • • •
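As a sketch, a PDB for the web front end mentioned above might look like this (the name, selector, and threshold are illustrative; adjust them to your own workload). The optional unhealthyPodEvictionPolicy field anticipates the recommendation about misbehaving applications discussed below:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb            # hypothetical name
spec:
  minAvailable: 90%             # or use maxUnavailable; the two are mutually exclusive
  selector:
    matchLabels:
      app: frontend             # must match the labels used by the workload's controller
  unhealthyPodEvictionPolicy: AlwaysAllow   # allow eviction of pods that never became healthy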
For example, the kubectl drain subcommand lets you mark a node as going out of service. When you run kubectl drain , the tool tries to evict all of the Pods on the Node you're taking out of service. The eviction request that kubectl submits on your behalf may be temporarily rejected, so the tool periodically retries all failed requests until all Pods on the target node are terminated, or until a configurable timeout is reached. A PDB specifies the number of replicas that an application can tolerate having, relative to how many it is intended to have. For example, a Deployment which has a .spec.replicas: 5 is supposed to have 5 pods at any given time. If its PDB allows for there to be 4 at a time, then the Eviction API will allow voluntary disruption of one (but not two) pods at a time. The group of pods that comprise the application is specified using a label selector, the same as the one used by the application's controller (deployment, stateful-set, etc). The "intended" number of
pods is computed from the .spec.replicas of the workload resource that is managing those pods. The control plane discovers the owning workload resource by examining the .metadata.ownerReferences of the Pod. Involuntary disruptions cannot be prevented by PDBs; however they do count against the budget. Pods which are deleted or unavailable due to a rolling upgrade to an application do count against the disruption budget, but workload resources (such as Deployment and StatefulSet) are not limited by PDBs when doing rolling upgrades. Instead, the handling of failures during application updates is configured in the spec for the specific workload resource. It is recommended to set AlwaysAllow Unhealthy Pod Eviction Policy to your PodDisruptionBudgets to support eviction of misbehaving applications during a node drain. The default behavior is to wait for the application pods to become healthy before the drain can proceed. When a pod is evicted using the eviction API, it is gracefully te
rminated , honoring the terminationGracePeriodSeconds setting in its PodSpec . PodDisruptionBudget example Consider a cluster with 3 nodes, node-1 through node-3 . The cluster is running several applications. One of them has 3 replicas initially called pod-a , pod-b , and pod-c . Another, unrelated pod without a PDB, called pod-x , is also shown. Initially, the pods are laid out as follows: node-1 node-2 node-3 pod-a available pod-b available pod-c available pod-x available All 3 pods are part of a deployment, and they collectively have a PDB which requires there be at least 2 of the 3 pods to be available at all times. For example, assume the cluster administrator wants to reboot into a new kernel version to fix a bug in the kernel. The cluster administrator first tries to drain node-1 using the kubectl drain command. That tool tries to evict pod-a and pod-x . This succeeds immediately. Both pods go into the terminating state at the same time. This puts the cluster in this state
node-1 draining node-2 node-3 pod-a terminating pod-b available pod-c available pod-x terminating The deployment notices that one of the pods is terminating, so it creates a replacement called pod-d . Since node-1 is cordoned, it lands on another node. Something has also created pod-y as a replacement for pod-x . (Note: for a StatefulSet, pod-a , which would be called something like pod-0 , would need to terminate completely before its replacement, which is also called pod-0 but has a different UID, could be created. Otherwise, the example applies to a StatefulSet as well.) Now the cluster is in this state: node-1 draining node-2 node-3 pod-a terminating pod-b available pod-c available pod-x terminating pod-d starting pod-y At some point, the pods terminate, and the cluster looks like this: node-1 drained node-2 node-3 pod-b available pod-c available pod-d starting pod-y At this point, if an impatient cluster administrator tries to drain node-2 or node-3 , the drain command will b
lock, because there are only 2 available pods for the deployment, and its PDB requires at least 2. After some time passes, pod-d becomes available. The cluster state now looks like this: node-1 drained node-2 node-3 pod-b available pod-c available pod-d available pod-y Now, the cluster administrator tries to drain node-2 . The drain command will try to evict the two pods in some order, say pod-b first and then pod-d . It will succeed at evicting pod-b . But, when it tries to evict pod-d , it will be refused because that would leave only one pod available for the deployment. The deployment creates a replacement for pod-b called pod-e . Because there are not enough resources in the cluster to schedule pod-e the drain will again block. The cluster may end up in this state: node-1 drained node-2 node-3 no node pod-b terminating pod-c available pod-e pending pod-d available pod-y At this point, the cluster administrator needs to add a node back to the cluster to proceed with the upgrade
You can see how Kubernetes varies the rate at which disruptions can happen, according to: how many replicas an application needs how long it takes to gracefully shutdown an instance how long it takes a new instance to start up the type of controller the cluster's resource capacity Pod disruption conditions FEATURE STATE: Kubernetes v1.26 [beta] Note: In order to use this behavior, you must have the PodDisruptionConditions feature gate enabled in your cluster. When enabled, a dedicated Pod DisruptionTarget condition is added to indicate that the Pod is about to be deleted due to a disruption . The reason field of the condition additionally indicates one of the following reasons for the Pod termination: PreemptionByScheduler Pod is due to be preempted by a scheduler in order to accommodate a new Pod with a higher priority. For more information, see Pod priority preemption . DeletionByTaintManager Pod is due to be deleted by Taint Manager (which is part of the node lifecycle contro
ller within kube-controller-manager ) due to a NoExecute taint that the Pod does not tolerate; see taint -based evictions. EvictionByEvictionAPI Pod has been marked for eviction using the Kubernetes API . DeletionByPodGC Pod, that is bound to a no longer existing Node, is due to be deleted by Pod garbage collection . TerminationByKubelet Pod has been terminated by the kubelet, because of either node pressure eviction or the graceful node shutdown . Note: A Pod disruption might be interrupted. The control plane might re-attempt to continue the disruption of the same Pod, but it is not guaranteed. As a result, the DisruptionTarget condition might be added to a Pod, but that Pod might then not actually be deleted. In such a situation, after some time, the Pod disruption condition will be cleared. When the PodDisruptionConditions feature gate is enabled, along with cleaning up the pods, the Pod garbage collector (PodGC) will also mark them as failed if they are in a non-terminal phas
e (see also Pod garbage collection). When using a Job (or CronJob), you may want to use these Pod disruption conditions as part of your Job's Pod failure policy.
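For example, a Job can be configured so that Pods terminated by a disruption are retried without counting against its backoff limit; a minimal sketch (names and values are illustrative assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: example-job             # hypothetical name
spec:
  backoffLimit: 3
  podFailurePolicy:
    rules:
    - action: Ignore            # retry, but don't count the failure against backoffLimit
      onPodConditions:
      - type: DisruptionTarget  # matches Pods carrying the disruption condition described above
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox:1.36
        command: ['sh', '-c', 'sleep 60']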
Separating Cluster Owner and Application Owner Roles Often, it is useful to think of the Cluster Manager and Application Owner as separate roles with limited knowledge of each other. This separation of responsibilities may make sense in these scenarios: when there are many application teams sharing a Kubernetes cluster, and there is natural specialization of roles when third-party tools or services are used to automate cluster management Pod Disruption Budgets support this separation of roles by providing an interface between the roles. If you do not have such a separation of responsibilities in your organization, you may not need to use Pod Disruption Budgets. How to perform Disruptive Actions on your Cluster If you are a Cluster Administrator, and you need to perform a disruptive action on all the nodes in your cluster, such as a node or system software upgrade, here are some options: Accept downtime during the upgrade. Failover to another complete replica cluster. No downtime, but m
ay be costly both for the duplicated nodes and for human effort to orchestrate the switchover. Write disruption tolerant applications and use PDBs. No downtime. Minimal resource duplication. Allows more automation of cluster administration. Writing disruption-tolerant applications is tricky, but the work to tolerate voluntary disruptions largely overlaps with work to support autoscaling and tolerating involuntary disruptions. What's next Follow steps to protect your application by configuring a Pod Disruption Budget . Learn more about draining nodes Learn about updating a deployment including steps to maintain its availability during the rollout. Pod Quality of Service Classes This page introduces Quality of Service (QoS) classes in Kubernetes, and explains how Kubernetes assigns a QoS class to each Pod as a consequence of the resource constraints that you specify for the containers in that Pod. Kubernetes relies on this classification to make decisions about which Pods to evict when
there are not enough available resources on a Node.
Quality of Service classes Kubernetes classifies the Pods that you run and allocates each Pod into a specific quality of service (QoS) class . Kubernetes uses that classification to influence how different pods are handled. Kubernetes does this classification based on the resource requests of the Containers in that Pod, along with how those requests relate to resource limits. This is known as Quality of Service (QoS) class. Kubernetes assigns every Pod a QoS class based on the resource requests and limits of its component Containers. QoS classes are used by Kubernetes to decide which Pods to evict from a Node experiencing Node Pressure . The possible QoS classes are Guaranteed , Burstable , and BestEffort . When a Node runs out of resources, Kubernetes will first evict BestEffort Pods running on that Node, followed by Burstable and finally Guaranteed Pods. When this eviction is due to resource pressure, only Pods exceeding resource requests are candidates for eviction. Guarantee
d Pods that are Guaranteed have the strictest resource limits and are least likely to face eviction. They are guaranteed not to be killed until they exceed their limits or there are no lower-priority Pods that can be preempted from the Node. They may not acquire resources beyond their specified limits. These Pods can also make use of exclusive CPUs using the static CPU management policy. Criteria For a Pod to be given a QoS class of Guaranteed : Every Container in the Pod must have a memory limit and a memory request. For every Container in the Pod, the memory limit must equal the memory request. Every Container in the Pod must have a CPU limit and a CPU request. For every Container in the Pod, the CPU limit must equal the CPU request. Burstable Pods that are Burstable have some lower-bound resource guarantees based on the request, but do not require a specific limit. If a limit is not specified, it defaults to a limit equivalent to the capacity of the Node, which allows the Pods to
flexibly increase their resources if resources are available. In the event of Pod eviction due to Node resource pressure, these Pods are evicted only after all BestEffort Pods are evicted. Because a Burstable Pod can include a Container that has no resource limits or requests, a Pod that is Burstable can try to use any amount of node resources. Criteria A Pod is given a QoS class of Burstable if: The Pod does not meet the criteria for QoS class Guaranteed . At least one Container in the Pod has a memory or CPU request or limit.• • • • •
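To make the criteria concrete, here is a minimal sketch of a Pod that would be assigned the Guaranteed class, because its only container sets CPU and memory requests equal to its limits (the name, image, and values are illustrative assumptions). Removing the CPU limit, or lowering a request below its limit, would make the Pod Burstable instead:

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: "500m"
        memory: "128Mi"
      limits:
        cpu: "500m"             # limits equal requests for every resource and container
        memory: "128Mi"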
BestEffort Pods in the BestEffort QoS class can use node resources that aren't specifically assigned to Pods in other QoS classes. For example, if you have a node with 16 CPU cores available to the kubelet, and you assign 4 CPU cores to a Guaranteed Pod, then a Pod in the BestEffort QoS class can try to use any amount of the remaining 12 CPU cores. The kubelet prefers to evict BestEffort Pods if the node comes under resource pressure. Criteria A Pod has a QoS class of BestEffort if it doesn't meet the criteria for either Guaranteed or Burstable . In other words, a Pod is BestEffort only if none of the Containers in the Pod have a memory limit or a memory request, and none of the Containers in the Pod have a CPU limit or a CPU request. Containers in a Pod can request other resources (not CPU or memory) and still be classified as BestEffort . Memory QoS with cgroup v2 FEATURE STATE: Kubernetes v1.29 [] Memory QoS uses the memory controller of cgroup v2 to guarantee memory resour
ces in Kubernetes. Memory requests and limits of containers in pod are used to set specific interfaces memory.min and memory.high provided by the memory controller. When memory.min is set to memory requests, memory resources are reserved and never reclaimed by the kernel; this is how Memory QoS ensures memory availability for Kubernetes pods. And if memory limits are set in the container, this means that the system needs to limit container memory usage; Memory QoS uses memory.high to throttle workload approaching its memory limit, ensuring that the system is not overwhelmed by instantaneous memory allocation. Memory QoS relies on QoS class to determine which settings to apply; however, these are different mechanisms that both provide controls over quality of service. Some behavior is independent of QoS class Certain behavior is independent of the QoS class assigned by Kubernetes. For example: Any Container exceeding a resource limit will be killed and restarted by the kubelet with
out affecting other Containers in that Pod. If a Container exceeds its resource request and the node it runs on faces resource pressure, the Pod it is in becomes a candidate for eviction . If this occurs, all Containers in the Pod will be terminated. Kubernetes may create a replacement Pod, usually on a different node. The resource request of a Pod is equal to the sum of the resource requests of its component Containers, and the resource limit of a Pod is equal to the sum of the resource limits of its component Containers. The kube-scheduler does not consider QoS class when selecting which Pods to preempt . Preemption can occur when a cluster does not have enough resources to run all the Pods you defined.• • •
What's next Learn about resource management for Pods and Containers . Learn about Node-pressure eviction . Learn about Pod priority and preemption . Learn about Pod disruptions . Learn how to assign memory resources to containers and pods . Learn how to assign CPU resources to containers and pods . Learn how to configure Quality of Service for Pods . User Namespaces FEATURE STATE: Kubernetes v1.25 [alpha] This page explains how user namespaces are used in Kubernetes pods. A user namespace isolates the user running inside the container from the one in the host. A process running as root in a container can run as a different (non-root) user in the host; in other words, the process has full privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace. You can use this feature to reduce the damage a compromised container can do to the host or other pods in the same node. There are several security vulnerabilities rated either HIGH or CRI
TICAL that were not exploitable when user namespaces is active. It is expected user namespace will mitigate some future vulnerabilities too. Before you begin Note:  This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the content guide before submitting a change. More information. This is a Linux-only feature and support is needed in Linux for idmap mounts on the filesystems used. This means: On the node, the filesystem you use for /var/lib/kubelet/pods/ , or the custom directory you configure for this, needs idmap mount support. All the filesystems used in the pod's volumes must support idmap mounts. In practice this means you need at least Linux 6.3, as tmpfs started supporting idmap mounts in that version. This is usually needed as several Kubernetes features use tmpfs (the service account token that is
mounted by default uses a tmpfs, Secrets use a tmpfs, etc.) Some popular filesystems that support idmap mounts in Linux 6.3 are: btrfs, ext4, xfs, fat, tmpfs, overlayfs. In addition, support is needed in the container runtime to use this feature with Kubernetes pods: CRI-O: version 1.25 (and later) supports user namespaces for containers.• • • • • • • • •
containerd v1.7 is not compatible with the userns support in Kubernetes v1.27 to v1.29. Kubernetes v1.25 and v1.26 used an earlier implementation that is compatible with containerd v1.7, in terms of userns support. If you are using a version of Kubernetes other than 1.29, check the documentation for that version of Kubernetes for the most relevant information. If there is a newer release of containerd than v1.7 available for use, also check the containerd documentation for compatibility information. You can see the status of user namespaces support in cri-dockerd tracked in an issue on GitHub. Introduction User namespaces is a Linux feature that allows to map users in the container to different users in the host. Furthermore, the capabilities granted to a pod in a user namespace are valid only in the namespace and void outside of it. A pod can opt-in to use user namespaces by setting the pod.spec.hostUsers field to false. The kubelet will pick host UIDs/GIDs a pod is mapped to, and w
ill do so in a way to guarantee that no two pods on the same node use the same mapping. The runAsUser , runAsGroup , fsGroup , etc. fields in the pod.spec always refer to the user inside the container. The valid UIDs/GIDs when this feature is enabled is the range 0-65535. This applies to files and processes ( runAsUser , runAsGroup , etc.). Files using a UID/GID outside this range will be seen as belonging to the overflow ID, usually 65534 (configured in /proc/sys/kernel/overflowuid and /proc/sys/kernel/overflowgid ). However, it is not possible to modify those files, even by running as the 65534 user/group. Most applications that need to run as root but don't access other host namespaces or resources, should continue to run fine without any changes needed if user namespaces is activated. Understanding user namespaces for pods Several container runtimes with their default configuration (like Docker Engine, containerd, CRI-O) use Linux namespaces for isolation. Other technologies exis
t and can be used with those runtimes too (e.g. Kata Containers uses VMs instead of Linux namespaces). This page is applicable for container runtimes using Linux namespaces for isolation. When creating a pod, by default, several new namespaces are used for isolation: a network namespace to isolate the network of the container, a PID namespace to isolate the view of processes, etc. If a user namespace is used, this will isolate the users in the container from the users in the node. This means containers can run as root and be mapped to a non-root user on the host. Inside the container the process will think it is running as root (and therefore tools like apt, yum, etc. work fine), while in reality the process doesn't have privileges on the host. You can verify this, for example, if you check which user the container process is running by executing ps aux from the host. The user ps shows is not the same as the user you see if you execute inside the container the command id
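As described earlier, a Pod opts in by setting spec.hostUsers to false. A minimal sketch (the Pod name and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: userns-demo             # hypothetical name
spec:
  hostUsers: false              # run this Pod in a user namespace
  containers:
  - name: shell
    image: debian:stable
    command: ['sleep', 'infinity']

Inside the container, id reports uid=0, while ps aux on the node shows the same process running as an unprivileged high-range UID chosen by the kubelet.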
This abstraction limits what can happen, for example, if the container manages to escape to the host. Given that the container is running as a non-privileged user on the host, it is limited what it can do to the host. Furthermore, as users on each pod will be mapped to different non-overlapping users in the host, it is limited what they can do to other pods too. Capabilities granted to a pod are also limited to the pod user namespace and mostly invalid out of it, some are even completely void. Here are two examples: CAP_SYS_MODULE does not have any effect if granted to a pod using user namespaces, the pod isn't able to load kernel modules. CAP_SYS_ADMIN is limited to the pod's user namespace and invalid outside of it. Without using a user namespace a container running as root, in the case of a container breakout, has root privileges on the node. And if some capability were granted to the container, the capabilities are valid on the host too. None of this is true when we use user name
spaces. If you want to know more details about what changes when user namespaces are in use, see man 7 user_namespaces . Set up a node to support user namespaces It is recommended that the host's files and host's processes use UIDs/GIDs in the range of 0-65535. The kubelet will assign UIDs/GIDs higher than that to pods. Therefore, to guarantee as much isolation as possible, the UIDs/GIDs used by the host's files and host's processes should be in the range 0-65535. Note that this recommendation is important to mitigate the impact of CVEs like CVE-2021-25741 , where a pod can potentially read arbitrary files in the hosts. If the UIDs/GIDs of the pod and the host don't overlap, it is limited what a pod would be able to do: the pod UID/ GID won't match the host's file owner/group. Integration with Pod security admission checks FEATURE STATE: Kubernetes v1.29 [alpha] For Linux Pods that enable user namespaces, Kubernetes relaxes the application of Pod Security Standards in a controlled
way. This behavior can be controlled by the feature gate UserNamespacesPodSecurityStandards, which allows an early opt-in for end users. Admins have to ensure that user namespaces are enabled by all nodes within the cluster if using the feature gate.

If you enable the associated feature gate and create a Pod that uses user namespaces, the following fields won't be constrained even in contexts that enforce the Baseline or Restricted pod security standard. This behavior does not present a security concern because root inside a Pod with user namespaces actually refers to the user inside the container, which is never mapped to a privileged user on the host. Here's the list of fields that are not checked for Pods in those circumstances:

• spec.securityContext.runAsNonRoot
• spec.containers[*].securityContext.runAsNonRoot
• spec.initContainers[*].securityContext.runAsNonRoot
• spec.ephemeralContainers[*].securityContext.runAsNonRoot
• spec.securityContext.runAsUser
• spec.containers[*].securityContext.runAsUser
• spec.initContainers[*].securityContext.runAsUser
• spec.ephemeralContainers[*].securityContext.runAsUser

Limitations

When using a user namespace for the pod, it is disallowed to use other host namespaces. In particular, if you set hostUsers: false then you are not allowed to set any of:

• hostNetwork: true
• hostIPC: true
• hostPID: true

What's next

Take a look at Use a User Namespace With a Pod.

Downward API

There are two ways to expose Pod and container fields to a running container: environment variables, and as files that are populated by a special volume type. Together, these two ways of exposing Pod and container fields are called the downward API.

It is sometimes useful for a container to have information about itself, without being overly coupled to Kuberne
tes. The downward API allows containers to consume information about themselves or the cluster without using the Kubernetes client or API server. An example is an existing application that assumes a particular well-known environment variable holds a unique identifier. One possibility is to wrap the application, but that is tedious and error-prone, and it violates the goal of low coupling. A better option would be to use the Pod's name as an identifier, and inject the Pod's name into the well-known environment variable. In Kubernetes, there are two ways to expose Pod and container fields to a running container: as environment variables as files in a downwardAPI volume Together, these two ways of exposing Pod and container fields are called the downward API . Available fields Only some Kubernetes API fields are available through the downward API. This section lists which fields you can make available.• • • • • • • • • • • •
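Before looking at the individual fields, here is a minimal sketch of the environment-variable approach described above (the Pod name and variable name are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo           # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'echo "I am $MY_POD_NAME"; sleep 3600']
    env:
    - name: MY_POD_NAME         # the "well-known" variable the application expects
      valueFrom:
        fieldRef:
          fieldPath: metadata.name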
You can pass information from available Pod-level fields using fieldRef . At the API level, the spec for a Pod always defines at least one Container . You can pass information from available Container-level fields using resourceFieldRef . Information available via fieldRef For some Pod-level fields, you can provide them to a container either as an environment variable or using a downwardAPI volume. The fields available via either mechanism are: metadata.name the pod's name metadata.namespace the pod's namespace metadata.uid the pod's unique ID metadata.annotations['<KEY>'] the value of the pod's annotation named <KEY> (for example, metadata.annotations['myannotation'] ) metadata.labels['<KEY>'] the text value of the pod's label named <KEY> (for example, metadata.labels['mylabel'] ) The following information is available through environment variables but not as a downwardAPI volume fieldRef : spec.serviceAccountName the name of the pod's service account spec.nodeName the name of
the node where the Pod is executing status.hostIP the primary IP address of the node to which the Pod is assigned status.hostIPs the IP addresses is a dual-stack version of status.hostIP , the first is always the same as status.hostIP . The field is available if you enable the PodHostIPs feature gate . status.podIP the pod's primary IP address (usually, its IPv4 address) status.podIPs the IP addresses is a dual-stack version of status.podIP , the first is always the same as status.podIP The following information is available through a downwardAPI volume fieldRef , but not as environment variables : metadata.labels all of the pod's labels, formatted as label-key="escaped-label-value" with one label per line metadata.annotations all of the pod's annotations, formatted as annotation-key="escaped-annotation-value" with one annotation per line Information available via resourceFieldRef These container-level fields allow you to provide information about requests and limits for resourc
es such as CPU and memory
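For example, a container can read its own memory limit into an environment variable; a minimal sketch (the Pod, container, and variable names are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: resource-fields-demo    # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'echo "memory limit: $MY_MEM_LIMIT"; sleep 3600']
    resources:
      limits:
        memory: "64Mi"
    env:
    - name: MY_MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: app    # refers to this same container
          resource: limits.memory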
resource: limits.cpu A container's CPU limit resource: requests.cpu A container's CPU request resource: limits.memory A container's memory limit resource: requests.memory A container's memory request resource: limits.hugepages-* A container's hugepages limit resource: requests.hugepages-* A container's hugepages request resource: limits.ephemeral-storage A container's ephemeral-storage limit resource: requests.ephemeral-storage A container's ephemeral-storage request Fallback information for resource limits If CPU and memory limits are not specified for a container, and you use the downward API to try to expose that information, then the kubelet defaults to exposing the maximum allocatable value for CPU and memory based on the node allocatable calculation. What's next You can read about downwardAPI volumes . You can try using the downward API to expose container- or Pod-level information: as environment variables as files in downwardAPI volume Workload Management Kubernetes provides
several built-in APIs for declarative management of your workloads and the components of those workloads. Ultimately, your applications run as containers inside Pods ; however, managing individual Pods would be a lot of effort. For example, if a Pod fails, you probably want to run a new Pod to replace it. Kubernetes can do that for you. You use the Kubernetes API to create a workload object that represents a higher abstraction level than a Pod, and then the Kubernetes control plane automatically manages Pod objects on your behalf, based on the specification for the workload object you defined. The built-in APIs for managing workloads are: Deployment (and, indirectly, ReplicaSet ), the most common way to run an application on your cluster. Deployment is a good fit for managing a stateless application workload on your cluster, where any Pod in the Deployment is interchangeable and can be replaced if needed. (Deployments are a replacement for the legacy ReplicationController API).•
A StatefulSet lets you manage one or more Pods – all running the same application code – where the Pods rely on having a distinct identity. This is different from a Deployment where the Pods are expected to be interchangeable. The most common use for a StatefulSet is to be able to make a link between its Pods and their persistent storage. For example, you can run a StatefulSet that associates each Pod with a PersistentVolume . If one of the Pods in the StatefulSet fails, Kubernetes makes a replacement Pod that is connected to the same PersistentVolume. A DaemonSet defines Pods that provide facilities that are local to a specific node ; for example, a driver that lets containers on that node access a storage system. You use a DaemonSet when the driver, or other node-level service, has to run on the node where it's useful. Each Pod in a DaemonSet performs a role similar to a system daemon on a classic Unix / POSIX server. A DaemonSet might be fundamental to the operation of your cluste
r, such as a plugin to let that node access cluster networking , it might help you to manage the node, or it could provide less essential facilities that enhance the container platform you are running. You can run DaemonSets (and their pods) across every node in your cluster, or across just a subset (for example, only install the GPU accelerator driver on nodes that have a GPU installed). You can use a Job and / or a CronJob to define tasks that run to completion and then stop. A Job represents a one-off task, whereas each CronJob repeats according to a schedule. Other topics in this section: Automatic Cleanup for Finished Jobs ReplicationController Deployments A Deployment manages a set of Pods to run an application workload, usually one that doesn't maintain state. A Deployment provides declarative updates for Pods and ReplicaSets . You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You
can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments. Note: Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below. Use Case The following are typical use cases for Deployments: Create a Deployment to rollout a ReplicaSet . The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not. Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.• • •
Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
Scale up the Deployment to facilitate more load.
Pause the rollout of a Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
Use the status of the Deployment as an indicator that a rollout has stuck.
Clean up older ReplicaSets that you don't need anymore.

Creating a Deployment

The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:

controllers/nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

In this example:

A Deployme
nt named nginx-deployment is created, indicated by the .metadata.name field. This name will become the basis for the ReplicaSets and Pods which are created later. See Writing a Deployment Spec for more details. The Deployment creates a ReplicaSet that creates three replicated Pods, indicated by the .spec.replicas field. The .spec.selector field defines how the created ReplicaSet finds which Pods to manage. In this case, you select a label that is defined in the Pod template ( app: nginx ). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule. Note: The .spec.selector.matchLabels field is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions ,• • • • • • •
whose key field is "key", the operator is "In", and the values array contains only "value". All of the requirements, from both matchLabels and matchExpressions , must be satisfied in order to match. The template field contains the following sub-fields: The Pods are labeled app: nginx using the .metadata.labels field. The Pod template's specification, or .template.spec field, indicates that the Pods run one container, nginx , which runs the nginx Docker Hub image at version 1.14.2. Create one container and name it nginx using the .spec.template.spec.containers[0].name field. Before you begin, make sure your Kubernetes cluster is up and running. Follow the steps given below to create the above Deployment: Create the Deployment by running the following command: kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml Run kubectl get deployments to check if the Deployment was created. If the Deployment is still being created, the output is similar to the follow
ing: NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 0/3 0 0 1s When you inspect the Deployments in your cluster, the following fields are displayed: NAME lists the names of the Deployments in the namespace. READY displays how many replicas of the application are available to your users. It follows the pattern ready/desired. UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state. AVAILABLE displays how many replicas of the application are available to your users. AGE displays the amount of time that the application has been running. Notice how the number of desired replicas is 3 according to .spec.replicas field. To see the Deployment rollout status, run kubectl rollout status deployment/nginx- deployment . The output is similar to: Waiting for rollout to finish: 2 out of 3 new replicas have been updated... deployment "nginx-deployment" successfully rolled out Run the kubectl get deploymen
ts again a few seconds later. The output is similar to this: NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 3/3 3 3 18s Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.• ◦ ◦ ◦ 1. 2. ◦ ◦ ◦ ◦ ◦ 3. 4
To see the ReplicaSet ( rs) created by the Deployment, run kubectl get rs . The output is similar to this: NAME DESIRED CURRENT READY AGE nginx-deployment-75675f5897 3 3 3 18s ReplicaSet output shows the following fields: NAME lists the names of the ReplicaSets in the namespace. DESIRED displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state . CURRENT displays how many replicas are currently running. READY displays how many replicas of the application are available to your users. AGE displays the amount of time that the application has been running. Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]- [HASH] . This name will become the basis for the Pods which are created. The HASH string is the same as the pod-template-hash label on the ReplicaSet. To see the labels automatically generated for each Pod, run kubectl ge
t pods --show- labels . The output is similar to: NAME READY STATUS RESTARTS AGE LABELS nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod- template-hash=75675f5897 nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod- template-hash=75675f5897 nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod- template-hash=75675f5897 The created ReplicaSet ensures that there are three nginx Pods. Note: You must specify an appropriate selector and Pod template labels in a Deployment (in this case, app: nginx ). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly. Pod-template-hash label Caution: Do not change this label. The pod-
template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts. This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.5. ◦ ◦ ◦ ◦ ◦ 6
Updating a Deployment
Note: A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template ) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout. Follow the steps given below to update your Deployment: Let's update the nginx Pods to use the nginx:1.16.1 image instead of the nginx:1.14.2 image. kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 or use the following command: kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 where deployment/nginx-deployment indicates the Deployment, nginx indicates the container in which the update will take place, and nginx:1.16.1 indicates the new image and its tag. The output is similar to: deployment.apps/nginx-deployment image updated Alternatively, you can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1 : kubectl edit deployment/nginx-deployment The output is similar to: deployment.apps/nginx-deployment edited To see the rollout status, run: kubectl rollout status deployment/nginx-deployment The output is similar to this: Waiting for rollout to finish: 2 out of 3 new replicas have been updated... or deployment "nginx-deployment" successfully rolled out Get more details on your updated Deployment: After the rollout succeeds, you can view the Deployment by running kubectl get deployments . The output is similar to this: NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 3/3 3 3 36s
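If you prefer the declarative route, the fragment of the Deployment manifest you would change when editing looks roughly like this (only the relevant fields are shown):

spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1   # changed from nginx:1.14.2; this is .spec.template.spec.containers[0].image

Applying the full, updated manifest with kubectl apply -f has the same effect as the kubectl set image command above.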
Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. kubectl get rs The output is similar to this: NAME DESIRED CURRENT READY AGE nginx-deployment-1564180365 3 3 3 6s nginx-deployment-2035384211 0 0 0 36s Running get pods should now show only the new Pods: kubectl get pods The output is similar to this: NAME READY STATUS RESTARTS AGE nginx-deployment-1564180365-khku8 1/1 Running 0 14s nginx-deployment-1564180365-nacti 1/1 Running 0 14s nginx-deployment-1564180365-z9gth 1/1 Running 0 14s Next time you want to update these Pods, you only need to update the Deployment's Pod template again. Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable). Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge). For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at most 4 Pods in total are available. In case of a Deployment with 4 replicas, the number of Pods would be between 3 and 5. Get details of your Deployment: kubectl describe deployments The output is similar to this: Name: nginx-deployment Namespace: default CreationTimestamp: Thu, 30 Nov
2017 10:56:25 +0000 Labels: app=nginx Annotations: deployment.kubernetes.io/revision=2 Selector: app=nginx Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable StrategyType: RollingUpdate
MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=nginx Containers: nginx: Image: nginx:1.16.1 Port: 80/TCP Environment: <none> Mounts: <none> Volumes: <none> Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable OldReplicaSets: <none> NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 2m deployment-controller Scaled up replica set nginx-deployment-2035384211 to 3 Normal ScalingReplicaSet 24s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 1 Normal ScalingReplicaSet 22s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 2 Normal ScalingReplicaSet 22s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 2 Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1 Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3 Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0 Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet (nginx-deployment-1564180365) and scaled it up to 1 and waited for it to come up. Then it scaled down the old ReplicaSet to 2 and scaled up the new ReplicaSet to 2 so that at least 3 Pods were available and at most 4 Pods were created at all times.
It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0. Note: Kubernetes doesn't count terminating Pods when calculating the number of availableReplicas , which must be between replicas - maxUnavailable and replicas + maxSurge . As a result, you might notice that there are more Pods than expected during a rollout, and that the total resources consumed by the Deployment are more than replicas + maxSurge until the terminationGracePeriodSeconds of the terminating Pods expires.
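The 25% figures discussed above come from the Deployment's rolling update strategy. As a sketch, the relevant stanza looks like this (the values shown are the defaults, so you rarely need to spell them out):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of the desired Pods may be unavailable during an update
      maxSurge: 25%         # at most 25% extra Pods may be created above the desired count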
Rollover (aka multiple updates in-flight)
Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up the desired Pods. If the Deployment is updated, the existing ReplicaSets that control Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new ReplicaSet is scaled to .spec.replicas and all old ReplicaSets are scaled to 0. If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet as per the update and starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously: it adds it to its list of old ReplicaSets and starts scaling it down. For example, suppose you create a Deployment to create 5 replicas of nginx:1.14.2 , but then update the Deployment to create 5 replicas of nginx:1.16.1 , when only 3 replicas of nginx:1.14.2 had been created. In that case, the Deployment immediately starts killing the 3 nginx:1.14.2 Pods that it had created, and starts creating nginx:1.16.1 Pods. It does not wait for the 5 replicas of nginx:1.14.2 to be created before changing course.
Label selector updates
It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped all of the implications. Note: In API version apps/v1 , a Deployment's label selector is immutable after it gets created. Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too, otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and creating a new ReplicaSet.
Selector updates change the existing value in a selector key; they result in the same behavior as additions. Selector removals remove an existing key from the Deployment selector; they do not require any changes in the Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in any existing Pods and ReplicaSets.
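To make the rule about selector additions concrete: whatever API version you are using, the selector and the Pod template labels must agree. The fragment below is purely illustrative (the tier label is a hypothetical addition, not part of the example Deployment):

spec:
  selector:
    matchLabels:
      app: nginx
      tier: frontend        # newly added selector key (illustrative)
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend      # the same key must be added here, or validation fails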
Rolling Back a Deployment
Sometimes, you may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping. By default, all of the Deployment's rollout history is kept in the system so that you can roll back anytime you want (you can change that by modifying the revision history limit). Note: A Deployment's revision is created when a Deployment's rollout is triggered. This means that the new revision is created if and only if the Deployment's Pod template ( .spec.template ) is changed, for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment, do not create a Deployment revision, so that you can facilitate simultaneous manual- or auto-scaling. This means that when you roll back to an earlier revision, only the Deployment's Pod template part is rolled back. Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.161 instead of nginx:1.16.1 : kubectl set image deployment/nginx-deployment nginx=nginx:1.161 The output is similar to this: deployment.apps/nginx-deployment image updated The rollout gets stuck. You can verify it by checking the rollout status: kubectl rollout status deployment/nginx-deployment The output is similar to this: Waiting for rollout to finish: 1 out of 3 new replicas have been updated... Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, read more here . You see that the number of old replicas (adding the replica count from nginx-deployment-1564180365 and nginx-deployment-2035384211 ) is 3, and the number of new replicas (from nginx-deployment-3066724191 ) is 1.
kubectl get rs The output is similar to this: NAME DESIRED CURRENT READY AGE nginx-deployment-1564180365 3 3 3 25s nginx-deployment-2035384211 0 0 0 36s nginx-deployment-3066724191 1 1 0 6s Looking at the Pods created, you see that the 1 Pod created by the new ReplicaSet is stuck in an image pull loop. kubectl get pods The output is similar to this: NAME READY STATUS RESTARTS AGE nginx-deployment-1564180365-70iae 1/1 Running 0 25s nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s nginx-deployment-1564180365-hysrc 1/1 Running 0 25s nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s Note: The Deployment controller stops the bad rollout automatically, and stops scaling up the new ReplicaSet.
This depends on the rollingUpdate parameters ( maxUnavailable specifically) that you have specified. Kubernetes by default sets the value to 25%. Get the description of the Deployment:
kubectl describe deployment The output is similar to this: Name: nginx-deployment Namespace: default CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 Labels: app=nginx Selector: app=nginx Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=nginx Containers: nginx: Image: nginx:1.161 Port: 80/TCP Host Port: 0/TCP Environment: <none> Mounts: <none> Volumes: <none> Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True ReplicaSetUpdated OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created) NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created) Events: FirstSeen LastSeen Count From
SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1 21s 21s 1
{deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1 To fix this, you need to roll back to a previous revision of the Deployment that is stable.
Checking Rollout History of a Deployment
Follow the steps given below to check the rollout history: First, check the revisions of this Deployment: kubectl rollout history deployment/nginx-deployment The output is similar to this: deployments "nginx-deployment" REVISION CHANGE-CAUSE 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml 2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161 CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by: Annotating the Deployment with kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1" Manually editing the manifest of the resource. To see the details of each revision, run: kubectl rollout history deployment/nginx-deployment --revision=2
The output is similar to this: deployments "nginx-deployment" revision 2 Labels: app=nginx pod-template-hash=1159050644 Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 Containers: nginx: Image: nginx:1.16.1 Port: 80/TCP QoS Tier: cpu: BestEffort memory: BestEffort Environment Variables: <none> No volumes.
Rolling Back to a Previous Revision
Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2. Now you've decided to undo the current rollout and roll back to the previous revision: kubectl rollout undo deployment/nginx-deployment
The output is similar to this: deployment.apps/nginx-deployment rolled back Alternatively, you can roll back to a specific revision by specifying it with --to-revision : kubectl rollout undo deployment/nginx-deployment --to-revision=2 The output is similar to this: deployment.apps/nginx-deployment rolled back For more details about rollout related commands, read kubectl rollout . The Deployment is now rolled back to a previous stable revision. As you can see, a DeploymentRollback event for rolling back to revision 2 is generated from the Deployment controller. To check whether the rollback was successful and the Deployment is running as expected, run: kubectl get deployment nginx-deployment The output is similar to this: NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 3/3 3 3 30m Get the description of the Deployment: kubectl describe deployment nginx-deployment The output is similar to this: Name: nginx-deployment Namespace:
default CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 Labels: app=nginx Annotations: deployment.kubernetes.io/revision=4 kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 Selector: app=nginx Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=nginx Containers: nginx: Image: nginx:1.16.1 Port: 80/TCP Host Port: 0/TCP Environment: <none> Mounts: <none> Volumes: <none>
Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable OldReplicaSets: <none> NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3 Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1 Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2 Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2 Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1 Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3 Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0 Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1 Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2 Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0
Scaling a Deployment
You can scale a Deployment by using the following command: kubectl scale deployment/nginx-deployment --replicas=10 The output is similar to this: deployment.apps/nginx-deployment scaled Assuming horizontal Pod autoscaling is enabled in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.
kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80 The output is similar to this: deployment.apps/nginx-deployment scaled
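For reference, a roughly equivalent declarative object is a HorizontalPodAutoscaler. The sketch below assumes the autoscaling/v2 API is available and that a metrics source (such as metrics-server) is running in the cluster:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # target average CPU utilization across the Pods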
Proportional scaling
RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the additional replicas in the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling . For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2. Ensure that the 10 replicas in your Deployment are running. kubectl get deploy The output is similar to this: NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 10 10 10 10 50s You update to a new image which happens to be unresolvable from inside the cluster. kubectl set image deployment/nginx-deployment nginx=nginx:sometag The output is similar to this: deployment.apps/nginx-deployment image updated The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the maxUnavailable requirement that you mentioned above. Check out the rollout status: kubectl get rs The output is similar to this: NAME DESIRED CURRENT READY AGE nginx-deployment-1989198191 5 5 0 9s nginx-deployment-618515232 8 8 8 1m Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren't using proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the ReplicaSet with the most replicas.
ReplicaSets with zero replicas are not scaled up. In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming the new replicas become healthy. To confirm this, run: kubectl get deploy
The output is similar to this: NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 15 18 7 8 7m The rollout status confirms how the replicas were added to each ReplicaSet. kubectl get rs The output is similar to this: NAME DESIRED CURRENT READY AGE nginx-deployment-1989198191 7 7 0 7m nginx-deployment-618515232 11 11 11 7m
Pausing and Resuming a rollout of a Deployment
When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you're ready to apply those changes, you resume rollouts for the Deployment. This approach allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts. For example, with a Deployment that was created: Get the Deployment details: kubectl get deploy The output is similar to this:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx 3 3 3 3 1m Get the rollout status: kubectl get rs The output is similar to this: NAME DESIRED CURRENT READY AGE nginx-2142116321 3 3 3 1m Pause by running the following command: kubectl rollout pause deployment/nginx-deployment The output is similar to this: deployment.apps/nginx-deployment paused Then update the image of the Deployment: kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 The output is similar to this:
deployment.apps/nginx-deployment image updated Notice that no new rollout started: kubectl rollout history deployment/nginx-deployment The output is similar to this: deployments "nginx" REVISION CHANGE-CAUSE 1 <none> Get the rollout status to verify that the existing ReplicaSet has not changed: kubectl get rs The output is similar to this: NAME DESIRED CURRENT READY AGE nginx-2142116321 3 3 3 2m You can make as many updates as you wish, for example, update the resources that will be used: kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi The output is similar to this: deployment.apps/nginx-deployment resource requirements updated The initial state of the Deployment prior to pausing its rollout continues to function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused. Eventually, resume the Deployment rollout and observe a new ReplicaSet