r) rather than just the image name or digest. Your container runtime may adapt its behavior based on the selected runtime handler. Pulling images based on runtime class will be helpful for VM based containers like windows hyperV containers. Serial and parallel image pulls By default, kubelet pulls images serially. In other words, kubelet sends only one image pull request to the image service at a time. Other image pull requests have to wait until the one being processed is complete. Nodes make image pull decisions in isolation. Even when you use serialized image pulls, two different nodes can pull the same image in parallel. If you would like to enable parallel image pulls, you can set the field serializeImagePulls to false in the kubelet configuration . With serializeImagePulls set to false, image pull requests will be sent to the image service immediately, and multiple images will be pulled at the same time. When enabling parallel image pulls, please make sure the image service of
your container runtime can handle parallel image pulls. The kubelet never pulls multiple images in parallel on behalf of one Pod. For example, if you have a Pod that has an init container and an application container, the image pulls for the two containers will not be parallelized. However, if you have two Pods that use different images, the kubelet pulls the images in parallel on behalf of the two different Pods, when parallel image pulls is enabled. Maximum parallel image pulls FEATURE STATE: Kubernetes v1.27 [alpha]
When serializeImagePulls is set to false, the kubelet defaults to no limit on the maximum number of images being pulled at the same time. If you would like to limit the number of parallel image pulls, you can set the field maxParallelImagePulls in the kubelet configuration. With maxParallelImagePulls set to n, only n images can be pulled at the same time, and any image pull beyond n will have to wait until at least one ongoing image pull is complete. Limiting the number of parallel image pulls prevents image pulling from consuming too much network bandwidth or disk I/O when parallel image pulling is enabled. You can set maxParallelImagePulls to a positive number that is greater than or equal to 1. If you set maxParallelImagePulls to be greater than or equal to 2, you must set serializeImagePulls to false. The kubelet will fail to start with invalid maxParallelImagePulls settings.
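As an illustration, a minimal KubeletConfiguration sketch that enables parallel pulls and caps them; the numeric value here is an arbitrary example, not a recommendation:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allow the kubelet to send several image pull requests to the image service at once
serializeImagePulls: false
# Cap the number of in-flight pulls (example value; omit the field for no limit)
maxParallelImagePulls: 5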
Multi-architecture images with image indexes
As well as providing binary images, a container registry can also serve a container image index. An image index can point to multiple image manifests for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: pause, example/mycontainer, kube-apiserver) and allow different systems to fetch the right binary image for the machine architecture they are using. Kubernetes itself typically names container images with a suffix -$(ARCH). For backward compatibility, please generate the older images with suffixes. The idea is to generate, say, the pause image, which has the manifest for all the architectures, and, say, pause-amd64, which is backwards compatible for older configurations or YAML files that may have hard-coded the images with suffixes.
Using a private registry
Private registries may require keys to read images from them. Credentials can be provided in several ways:
• Configuring Nodes to Authenticate to a Private Registry: all pods can read any configured private registries; requires node configuration by the cluster administrator.
• Kubelet Credential Provider to dynamically fetch credentials for private registries: the kubelet can be configured to use a credential provider exec plugin for the respective private registry.
• Pre-pulled Images: all pods can use any images cached on a node; requires root access to all nodes to set up.
• Specifying ImagePullSecrets on a Pod: only pods which provide their own keys can access the private registry.
• Vendor-specific or local extensions: if you're using a custom node configuration, you (or your cloud provider) can implement your mechanism for authenticating the node to the container registry.
These options are explained in more detail below.
Configuring nodes to authenticate to a private registry Specific instructions for setting credentials depends on the container runtime and registry you chose to use. You should refer to your solution's documentation for the most accurate information. For an example of configuring a private container image registry, see the Pull an Image from a Private Registry task. That example uses a private registry in Docker Hub. Kubelet credential provider for authenticated image pulls Note: This approach is especially suitable when kubelet needs to fetch registry credentials dynamically. Most commonly used for registries provided by cloud providers where auth tokens are short-lived. You can configure the kubelet to invoke a plugin binary to dynamically fetch registry credentials for a container image. This is the most robust and versatile way to fetch credentials for private registries, but also requires kubelet-level configuration to enable. See Configure a kubelet image credential provider f
or more details.
Interpretation of config.json
The interpretation of config.json varies between the original Docker implementation and the Kubernetes interpretation. In Docker, the auths keys can only specify root URLs, whereas Kubernetes allows glob URLs as well as prefix-matched paths. The only limitation is that glob patterns (*) have to include the dot (.) for each subdomain. The number of matched subdomains has to be equal to the number of glob patterns (*.), for example:
• *.kubernetes.io will not match kubernetes.io, but abc.kubernetes.io
• *.*.kubernetes.io will not match abc.kubernetes.io, but abc.def.kubernetes.io
• prefix.*.io will match prefix.kubernetes.io
• *-good.kubernetes.io will match prefix-good.kubernetes.io
This means that a config.json like this is valid:
{
    "auths": {
        "my-registry.io/images": { "auth": "..." },
        "*.my-registry.io/images": { "auth": "..." }
    }
}
Image pull operations would now pass the credentials to the CRI container runtime for every valid pattern. For example, the following container image names would match successfully:
• my-registry.io/images
• my-registry.io/images/my-image
• my-registry.io/images/another-image
• sub.my-registry.io/images/my-image
But not:
• a.sub.my-registry.io/images/my-image
• a.b.sub.my-registry.io/images/my-image
The kubelet performs image pulls sequentially for every found credential. This means that multiple entries in config.json for different paths are possible, too:
{
    "auths": {
        "my-registry.io/images": { "auth": "..." },
        "my-registry.io/images/subpath": { "auth": "..." }
    }
}
If a container now specifies an image my-registry.io/images/subpath/my-image to be pulled, the kubelet will try to download it from both authentication sources if one of them fails.
Pre-pulled images
Note: This approach is suitable if you can control node configuration. It will not work reliably if your cloud provider manages nodes and replaces them automatically.
By default, the kubelet tries to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is
used (preferentially or exclusively, respectively). If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images. This can be used to preload certain images for speed or as an alternative to authenticating to a private registry. All pods will have read access to any pre-pulled images. Specifying imagePullSecrets on a Pod Note: This is the recommended approach to run containers based on images in private registries. Kubernetes supports specifying container image registry keys on a Pod. imagePullSecrets must all be in the same namespace as the Pod. The referenced Secrets must be of type kubernetes.io/ dockercfg or kubernetes.io/dockerconfigjson .•
Creating a Secret with a Docker config
You need to know the username, registry password and client email address for authenticating to the registry, as well as its hostname. Run the following command, substituting the appropriate uppercase values:
kubectl create secret docker-registry <name> \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
If you already have a Docker credentials file then, rather than using the above command, you can import the credentials file as a Kubernetes Secret. Create a Secret based on existing Docker credentials explains how to set this up. This is particularly useful if you are using multiple private container registries, as kubectl create secret docker-registry creates a Secret that only works with a single private registry. Note: Pods can only reference image pull secrets in their own namespace, so this process needs to be done once per namespace.
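If you prefer to manage the Secret declaratively, the equivalent object looks roughly like the sketch below; the name and namespace are illustrative, and the data value is a placeholder for the base64-encoded contents of your Docker credentials file:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey        # example name, referenced later by imagePullSecrets
  namespace: awesomeapps     # must be the namespace of the Pods that use it
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded contents of a .docker/config.json file (placeholder)
  .dockerconfigjson: <base64-encoded-docker-config>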
Referring to an imagePullSecrets on a Pod
Now, you can create pods which reference that secret by adding an imagePullSecrets section to a Pod definition. Each item in the imagePullSecrets array can only reference a Secret in the same namespace. For example:
cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey
EOF

cat <<EOF >> ./kustomization.yaml
resources:
- pod.yaml
EOF
This needs to be done for each pod that is using a private registry.
However, setting this field can be automated by setting the imagePullSecrets in a ServiceAccount resource. Check Add ImagePullSecrets to a Service Account for detailed instructions. You can use this in conjunction with a per-node .docker/config.json. The credentials will be merged.
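For illustration, a ServiceAccount that carries the pull secret might look like the following sketch (names reused from the earlier example); Pods that use this ServiceAccount then get the imagePullSecrets entry added automatically:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default              # patching the default ServiceAccount affects Pods that do not set one
  namespace: awesomeapps
imagePullSecrets:
  - name: myregistrykey       # Secret of type kubernetes.io/dockerconfigjson in the same namespace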
Use cases
There are a number of solutions for configuring private registries. Here are some common use cases and suggested solutions.
1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
   ◦ Use public images from a public registry.
      ▪ No configuration required.
      ▪ Some cloud providers automatically cache or mirror public images, which improves availability and reduces the time to pull images.
2. Cluster running some proprietary images which should be hidden to those outside the company, but visible to all cluster users.
   ◦ Use a hosted private registry.
      ▪ Manual configuration may be required on the nodes that need to access the private registry.
   ◦ Or, run an internal private registry behind your firewall with open read access.
      ▪ No Kubernetes configuration is required.
   ◦ Use a hosted container image registry service that controls image access.
      ▪ It will work better with cluster autoscaling than manual node configuration.
   ◦ Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets.
3. Cluster with proprietary images, a few of which require stricter access control.
   ◦ Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
   ◦ Move sensitive data into a "Secret" resource, instead of packaging it in an image.
4. A multi-tenant cluster where each tenant needs its own private registry.
   ◦ Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all images.
   ◦ Run a private registry with authorization required.
   ◦ Generate a registry credential for each tenant, put it into a Secret, and populate the Secret to each tenant namespace. The tenant adds that Secret to the imagePullSecrets of each namespace.
If you need access to multiple registries, you can create one secret for each registry.
Legacy built-in kubelet credential provider
In older versions of Kubernetes, the kubelet had a direct integration with cloud provider credentials. This gave it the ability to dynamically fetch credentials for image registries. There were three built-in implementations of the kubelet credential provider integration: ACR (Azure Container Registry), ECR (Elastic Container Registry), and GCR (Google Container Registry).
For more information on the legacy mechanism, read the documentation for the version of Kubernetes that you are using. Kubernetes v1.26 through to v1.29 do not include the legacy mechanism, so you would need to either:
• configure a kubelet image credential provider on each node
• specify image pull credentials using imagePullSecrets and at least one Secret
What's next
• Read the OCI Image Manifest Specification.
• Learn about container image garbage collection.
• Learn more about pulling an Image from a Private Registry.
Container Environment
This page describes the resources available to Containers in the Container environment.
Container environment
The Kubernetes Container environment provides several important resources to Containers:
• A filesystem, which is a combination of an image and one or more volumes.
• Information about the Container itself.
• Information about other objects in the cluster.
Container information
The hostname of a Container is the name of the Pod in which the Container is running. It is available through the hostname command or the gethostname function call in libc. The Pod name and namespace are available as environment variables through the downward API. User-defined environment variables from the Pod definition are also available to the Container, as are any environment variables specified statically in the container image.
Cluster information
A list of all services that were running when a Container was created is available to that Container as environment variables. This list is limited to services within the same namespace as the new Container's Pod and Kubernetes control plane services. For a service named foo that maps to a Container named bar, the following variables are defined:
FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
Services have dedicated IP addresses and are available to the Container via DNS, if the DNS addon is enabled.
What's next Learn more about Container lifecycle hooks . Get hands-on experience attaching handlers to Container lifecycle events . Runtime Class FEATURE STATE: Kubernetes v1.20 [stable] This page describes the RuntimeClass resource and runtime selection mechanism. RuntimeClass is a feature for selecting the container runtime configuration. The container runtime configuration is used to run a Pod's containers. Motivation You can set a different RuntimeClass between different Pods to provide a balance of performance versus security. For example, if part of your workload deserves a high level of information security assurance, you might choose to schedule those Pods so that they run in a container runtime that uses hardware virtualization. You'd then benefit from the extra isolation of the alternative runtime, at the expense of some additional overhead. You can also use RuntimeClass to run different Pods with the same container runtime but with different settings. Setup Configure the CR
I implementation on nodes (runtime dependent) Create the corresponding RuntimeClass resources 1. Configure the CRI implementation on nodes The configurations available through RuntimeClass are Container Runtime Interface (CRI) implementation dependent. See the corresponding documentation ( below ) for your CRI implementation for how to configure. Note: RuntimeClass assumes a homogeneous node configuration across the cluster by default (which means that all nodes are configured the same way with respect to container runtimes). To support heterogeneous node configurations, see Scheduling below. The configurations have a corresponding handler name, referenced by the RuntimeClass. The handler must be a valid DNS label name . 2. Create the corresponding RuntimeClass resources The configurations setup in step 1 should each have an associated handler name, which identifies the configuration. For each handler, create a corresponding RuntimeClass object. The RuntimeClass resource currently
only has 2 significant fields: the RuntimeClass name (metadata.name) and the handler (handler). The object definition looks like this:
# RuntimeClass is defined in the node.k8s.io API group
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  # The name the RuntimeClass will be referenced by.
  # RuntimeClass is a non-namespaced resource.
  name: myclass
# The name of the corresponding CRI configuration
handler: myconfiguration
The name of a RuntimeClass object must be a valid DNS subdomain name.
Note: It is recommended that RuntimeClass write operations (create/update/patch/delete) be restricted to the cluster administrator. This is typically the default. See Authorization Overview for more details.
Usage
Once RuntimeClasses are configured for the cluster, you can specify a runtimeClassName in the Pod spec to use it. For example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass
  # ...
This will instruct the kubelet to use the named RuntimeClass to run this pod. If the named RuntimeClass does not exist, or the CRI cannot run the corresponding handler, the pod will enter the Failed terminal phase. Look for a corresponding event for an error message.
If no runtimeClassName is specified, the default RuntimeHandler will be used, which is equivalent to the behavior when the RuntimeClass feature is disabled.
CRI Configuration
For more details on setting up CRI runtimes, see CRI installation.
containerd
Runtime handlers are configured through containerd's configuration at /etc/containerd/config.toml. Valid handlers are configured under the runtimes section:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.${HANDLER_NAME}]
See containerd's config documentation for more details.
CRI-O Runtime handlers are configured through CRI-O's configuration at /etc/crio/crio.conf . Valid handlers are configured under the crio.runtime table : [crio.runtime.runtimes.${HANDLER_NAME}] runtime_path = "${PATH_TO_BINARY}" See CRI-O's config documentation for more details. Scheduling FEATURE STATE: Kubernetes v1.16 [beta] By specifying the scheduling field for a RuntimeClass, you can set constraints to ensure that Pods running with this RuntimeClass are scheduled to nodes that support it. If scheduling is not set, this RuntimeClass is assumed to be supported by all nodes. To ensure pods land on nodes supporting a specific RuntimeClass, that set of nodes should have a common label which is then selected by the runtimeclass.scheduling.nodeSelector field. The RuntimeClass's nodeSelector is merged with the pod's nodeSelector in admission, effectively taking the intersection of the set of nodes selected by each. If there is a conflict, the pod will be rejected. If the supported
nodes are tainted to prevent other RuntimeClass pods from running on the node, you can add tolerations to the RuntimeClass. As with the nodeSelector, the tolerations are merged with the pod's tolerations in admission, effectively taking the union of the set of nodes tolerated by each. To learn more about configuring the node selector and tolerations, see Assigning Pods to Nodes.
Pod Overhead
FEATURE STATE: Kubernetes v1.24 [stable]
You can specify overhead resources that are associated with running a Pod. Declaring overhead allows the cluster (including the scheduler) to account for it when making decisions about Pods and resources. Pod overhead is defined in RuntimeClass through the overhead field. Through the use of this field, you can specify the overhead of running pods utilizing this RuntimeClass and ensure these overheads are accounted for in Kubernetes.
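To tie the scheduling and overhead fields together, here is an illustrative RuntimeClass sketch; the handler name, the node label, and the resource values are made-up examples rather than recommended settings:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass
handler: myconfiguration           # CRI handler name configured on the nodes
scheduling:
  nodeSelector:
    example.com/runtime: myclass   # hypothetical label carried by nodes that support this handler
  tolerations:
  - key: example.com/runtime
    operator: Exists
    effect: NoSchedule
overhead:
  podFixed:                        # per-Pod overhead added to resource accounting
    cpu: 250m
    memory: 120Mi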
What's next
• RuntimeClass Design
• RuntimeClass Scheduling Design
• Read about the Pod Overhead concept
• PodOverhead Feature Design
Container Lifecycle Hooks This page describes how kubelet managed Containers can use the Container lifecycle hook framework to run code triggered by events during their management lifecycle. Overview Analogous to many programming language frameworks that have component lifecycle hooks, such as Angular, Kubernetes provides Containers with lifecycle hooks. The hooks enable Containers to be aware of events in their management lifecycle and run code implemented in a handler when the corresponding lifecycle hook is executed. Container hooks There are two hooks that are exposed to Containers: PostStart This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler. PreStop This hook is called immediately before a container is terminated due to an API request or management event such as a liveness/startup probe failure, preemption, resource contention and others.
A call to the PreStop hook fails if the container is already in a terminated or completed state and the hook must complete before the TERM signal to stop the container can be sent. The Pod's termination grace period countdown begins before the PreStop hook is executed, so regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period. No parameters are passed to the handler. A more detailed description of the termination behavior can be found in Termination of Pods . Hook handler implementations Containers can access a hook by implementing and registering a handler for that hook. There are three types of hook handlers that can be implemented for Containers: Exec - Executes a specific command, such as pre-stop.sh , inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container. HTTP - Executes an HTTP request against a specific endpoint on the Container. Sleep - Pause
s the container for a specified duration. The "Sleep" action is available when the feature gate PodLifecycleSleepAction is enabled.
Hook handler execution
When a Container lifecycle management hook is called, the Kubernetes management system executes the handler according to the hook action; httpGet, tcpSocket and sleep are executed by the kubelet process, and exec is executed in the container.
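As a rough sketch of how handlers are attached, the container below registers a postStart exec handler and a preStop httpGet handler; the image, command, and endpoint are placeholders for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-hooks-demo      # example name
spec:
  containers:
  - name: app
    image: nginx:1.14.2
    lifecycle:
      postStart:
        exec:
          # runs inside the container, with no ordering guarantee relative to ENTRYPOINT
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:
          # called by the kubelet before the TERM signal is sent
          path: /shutdown
          port: 80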
Hook handler calls are synchronous within the context of the Pod containing the Container. This means that for a PostStart hook, the Container ENTRYPOINT and hook fire asynchronously. However, if the hook takes too long to run or hangs, the Container cannot reach a running state. PreStop hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a PreStop hook hangs during execution, the Pod's phase will be Terminating and remain there until the Pod is killed after its terminationGracePeriodSeconds expires. This grace period applies to the total time it takes for both the PreStop hook to execute and for the Container to stop normally. If, for example, terminationGracePeriodSeconds is 60, and the hook takes 55 seconds to complete, and the Container takes 10 seconds to stop normally after receiving the signal, then the Container will be killed before it can stop normally, since terminati
onGracePeriodSeconds is less than the total time (55+10) it takes for these two things to happen. If either a PostStart or PreStop hook fails, it kills the Container. Users should make their hook handlers as lightweight as possible. There are cases, however, when long running commands make sense, such as when saving state prior to stopping a Container. Hook delivery guarantees Hook delivery is intended to be at least once , which means that a hook may be called multiple times for any given event, such as for PostStart or PreStop . It is up to the hook implementation to handle this correctly. Generally, only single deliveries are made. If, for example, an HTTP hook receiver is down and is unable to take traffic, there is no attempt to resend. In some rare cases, however, double delivery may occur. For instance, if a kubelet restarts in the middle of sending a hook, the hook might be resent after the kubelet comes back up. Debugging Hook handlers The logs for a Hook handler are not e
xposed in Pod events. If a handler fails for some reason, it broadcasts an event. For PostStart , this is the FailedPostStartHook event, and for PreStop , this is the FailedPreStopHook event. To generate a failed FailedPostStartHook event yourself, modify the lifecycle-events.yaml file to change the postStart command to "badcommand" and apply it. Here is some example output of the resulting events you see from running kubectl describe pod lifecycle-demo : Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 7s default-scheduler Successfully assigned default/ lifecycle-demo to ip-XXX-XXX-XX-XX.us-east-2... Normal Pulled 6s kubelet Successfully pulled image "nginx" in 229.604315ms Normal Pulling 4s (x2 over 6s) kubelet Pulling image "nginx" Normal Created
4s (x2 over 5s) kubelet Created container lifecycle-demo- container Normal Started 4s (x2 over 5s) kubelet Started container lifecycle-demo- containe
Warning FailedPostStartHook 4s (x2 over 5s) kubelet Exec lifecycle hook ([badcommand]) for Container "lifecycle-demo-container" in Pod "lifecycle- demo_default(30229739-9651-4e5a-9a32-a8f1688862db)" failed - error: command 'badcommand' exited with 126: , message: "OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: \"badcommand\": executable file not found in $PATH: unknown\r\n" Normal Killing 4s (x2 over 5s) kubelet FailedPostStartHook Normal Pulled 4s kubelet Successfully pulled image "nginx" in 215.66395ms Warning BackOff 2s (x2 over 3s) kubelet Back-off restarting failed container What's next Learn more about the Container environment . Get hands-on experience attaching handlers to Container lifecycle events . Workloads Understand Pods, the smallest deployable compute object in Kubernetes, and the higher-level abst
ractions that help you to run them. A workload is an application running on Kubernetes. Whether your workload is a single component or several that work together, on Kubernetes you run it inside a set of pods. In Kubernetes, a Pod represents a set of running containers on your cluster. Kubernetes pods have a defined lifecycle . For example, once a pod is running in your cluster then a critical fault on the node where that pod is running means that all the pods on that node fail. Kubernetes treats that level of failure as final: you would need to create a new Pod to recover, even if the node later becomes healthy. However, to make life considerably easier, you don't need to manage each Pod directly. Instead, you can use workload resources that manage a set of pods on your behalf. These resources configure controllers that make sure the right number of the right kind of pod are running, to match the state you specified. Kubernetes provides several built-in workload resources: Deploym
ent and ReplicaSet (replacing the legacy resource ReplicationController ). Deployment is a good fit for managing a stateless application workload on your cluster, where any Pod in the Deployment is interchangeable and can be replaced if needed. StatefulSet lets you run one or more related Pods that do track state somehow. For example, if your workload records data persistently, you can run a StatefulSet that matches each Pod with a PersistentVolume . Your code, running in the Pods for that StatefulSet, can replicate data to other Pods in the same StatefulSet to improve overall resilience. DaemonSet defines Pods that provide facilities that are local to nodes. Every time you add a node to your cluster that matches the specification in a DaemonSet, the control plane schedules a Pod for that DaemonSet onto the new node. Each pod in a DaemonSet performs a job similar to a system daemon on a classic Unix / POSIX server. A DaemonSet might be fundamental to the operation of your cluster,
such as a plugin to run
cluster networking , it might help you to manage the node, or it could provide optional behavior that enhances the container platform you are running. Job and CronJob provide different ways to define tasks that run to completion and then stop. You can use a Job to define a task that runs to completion, just once. You can use a CronJob to run the same Job multiple times according a schedule. In the wider Kubernetes ecosystem, you can find third-party workload resources that provide additional behaviors. Using a custom resource definition , you can add in a third-party workload resource if you want a specific behavior that's not part of Kubernetes' core. For example, if you wanted to run a group of Pods for your application but stop work unless all the Pods are available (perhaps for some high-throughput distributed task), then you can implement or install an extension that does provide that feature. What's next As well as reading about each API kind for workload management, you can r
ead how to do specific tasks: Run a stateless application using a Deployment Run a stateful application either as a single instance or as a replicated set Run automated tasks with a CronJob To learn about Kubernetes' mechanisms for separating code from configuration, visit Configuration . There are two supporting concepts that provide backgrounds about how Kubernetes manages pods for applications: Garbage collection tidies up objects from your cluster after their owning resource has been removed. The time-to-live after finished controller removes Jobs once a defined time has passed since they completed. Once your application is running, you might want to make it available on the internet as a Service or, for web application only, using an Ingress . Pods Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod (as in a pod of whales or pea pod) is a group of one or more containers , with shared storage and network resources, and a s
pecification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host. As well as application containers, a Pod can contain init containers that run during Pod startup. You can also inject ephemeral containers for debugging a running Pod.• • • • •
What is a Pod? Note: You need to install a container runtime into each node in the cluster so that Pods can run there. The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation - the same things that isolate a container . Within a Pod's context, the individual applications may have further sub-isolations applied. A Pod is similar to a set of containers with shared namespaces and shared filesystem volumes. Pods in a Kubernetes cluster are used in two main ways: Pods that run a single container . The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly. Pods that run multiple containers that need to work together . A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers
form a single cohesive unit. Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. You don't need to run multiple containers to provide replication (for resilience or capacity); if you need multiple replicas, see Workload management . Using Pods The following is an example of a Pod which consists of a container running the image nginx: 1.14.2 . pods/simple-pod.yaml apiVersion : v1 kind: Pod metadata : name : nginx spec: containers : - name : nginx image : nginx:1.14.2 ports : - containerPort : 80 To create the Pod shown above, run the following command: kubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.•
Workload resources for managing pods Usually you don't need to create Pods directly, even singleton Pods. Instead, create them using workload resources such as Deployment or Job. If your Pods need to track state, consider the StatefulSet resource. Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (to provide more overall resources by running more instances), you should use multiple Pods, one for each instance. In Kubernetes, this is typically referred to as replication . Replicated Pods are usually created and managed as a group by a workload resource and its controller . See Pods and controllers for more information on how Kubernetes uses workload resources, and their controllers, to implement application scaling and auto-healing. Pods natively provide two kinds of shared resources for their constituent containers: networking and storage . Working with Pods You'll rarely create individual Pods directly in Kuber
netes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a controller ), the new Pod is scheduled to run on a Node in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is evicted for lack of resources, or the node fails. Note: Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an environment for running container(s). A Pod persists until it is deleted. The name of a Pod must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostname. For best compatibility, the name should follow the more restrictive rules for a DNS label . Pod OS FEATURE STATE: Kubernetes v1.25 [stable] You should set the .spec.os.name field to either windows or linux to indicate the OS on which you want the pod to run. These two are the only operating sy
stems supported for now by Kubernetes. In the future, this list may be expanded. In Kubernetes v1.29, the value you set for this field has no effect on scheduling of the pods. Setting .spec.os.name helps to identify the pod OS authoritatively and is used for validation. The kubelet refuses to run a Pod where you have specified a Pod OS, if this isn't the same as the operating system for the node where that kubelet is running. The Pod security standards also use this field to avoid enforcing policies that aren't relevant to that operating system.
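For example, a minimal Pod that declares its operating system might look like this sketch (the pod and container names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: linux-pod
spec:
  os:
    name: linux          # must match the OS of the node the kubelet runs on
  containers:
  - name: app
    image: nginx:1.14.2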
Pods and controllers
You can use workload resources to create and manage multiple Pods for you. A controller for the resource handles replication and rollout and automatic healing in case of Pod failure. For example, if a Node fails, a controller notices that Pods on that Node have stopped working and creates a replacement Pod. The scheduler places the replacement Pod onto a healthy Node.
Here are some examples of workload resources that manage one or more Pods:
• Deployment
• StatefulSet
• DaemonSet
Pod templates
Controllers for workload resources create Pods from a pod template and manage those Pods on your behalf. PodTemplates are specifications for creating Pods, and are included in workload resources such as Deployments, Jobs, and DaemonSets. Each controller for a workload resource uses the PodTemplate inside the workload object to make actual Pods. The PodTemplate is part of the desired state of whatever workload resource you used to run your app. The sample below is a manifest for a simple Job with a template that starts one container. The container in that Pod prints a message then pauses.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: hello
        image: busybox:1.28
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure
    # The pod template ends here
Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create replacement Pods that use the updated template. For example, the StatefulSet controller ensures that the running Pods match the current pod template for each StatefulSet object. If you edit the StatefulSet to change its pod template, the StatefulSet starts to create new Pods based on the updated template. Eventually, all of the old Pods are replaced with new Pods, and the update is complete. Each workload resource implements its own rules for handling changes to the Pod template. If you want to read more about StatefulSet specifically, read Update strategy
in the StatefulSet Basics tutorial. On Nodes, the kubelet does not directly observe or manage any of the details around pod templates and updates; those details are abstracted away. That abstraction and separation of
concerns simplifies system semantics, and makes it feasible to extend the cluster's behavior without changing existing code. Pod update and replacement As mentioned in the previous section, when the Pod template for a workload resource is changed, the controller creates new Pods based on the updated template instead of updating or patching the existing Pods. Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch , and replace have some limitations: Most of the metadata about a Pod is immutable. For example, you cannot change the namespace , name , uid, or creationTimestamp fields; the generation field is unique. It only accepts updates that increment the field's current value. If the metadata.deletionTimestamp is set, no new entry can be added to the metadata.finalizers list. Pod updates may not change fields other than spec.containers[*].image , spec.initContainers[*
].image, spec.activeDeadlineSeconds or spec.tolerations. For spec.tolerations, you can only add new entries. When updating the spec.activeDeadlineSeconds field, two types of updates are allowed:
1. setting the unassigned field to a positive number;
2. updating the field from a positive number to a smaller, non-negative number.
Resource sharing and communication
Pods enable data sharing and communication among their constituent containers.
Storage in Pods
A Pod can specify a set of shared storage volumes. All containers in the Pod can access the shared volumes, allowing those containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be restarted. See Storage for more information on how Kubernetes implements shared storage and makes it available to Pods.
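As a concrete sketch of shared storage, the Pod below mounts a single emptyDir volume into two containers; the names, images, and mount paths are illustrative only:
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch volume that lives as long as the Pod
  containers:
  - name: writer
    image: busybox:1.28
    command: ['sh', '-c', 'echo hello > /data/hello.txt && sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data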
Pod networking
Each Pod is assigned a unique IP address for each address family. Every container in a Pod shares the network namespace, including the IP address and network ports. Inside a Pod (and only then), the containers that belong to the Pod can communicate with one another using localhost. When containers in a Pod communicate with entities outside the Pod, they must coordinate how they use the shared network resources (such as ports). Within a Pod, containers share an IP address and port space, and can find each other via localhost. The containers in a Pod can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory. Containers in different Pods have distinct IP addresses and can not communicate by OS-level IPC without special configuration. Containers
that want to interact with a container running in a different Pod can use IP networking to communicate. Containers within the Pod see the system hostname as being the same as the configured name for the Pod. There's more about this in the networking section. Privileged mode for containers Note: Your container runtime must support the concept of a privileged container for this setting to be relevant. Any container in a pod can run in privileged mode to use operating system administrative capabilities that would otherwise be inaccessible. This is available for both Windows and Linux. Linux privileged containers In Linux, any container in a Pod can enable privileged mode using the privileged (Linux) flag on the security context of the container spec. This is useful for containers that want to use operating system administrative capabilities such as manipulating the network stack or accessing hardware devices. Windows privileged containers FEATURE STATE: Kubernetes v1.26 [stable] In
Windows, you can create a Windows HostProcess pod by setting the windowsOptions.hostProcess flag on the security context of the pod spec. All containers in these pods must run as Windows HostProcess containers. HostProcess pods run directly on the host and can also be used to perform administrative tasks as is done with Linux privileged containers. Static Pods Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. Whereas most Pods are managed by the control plane (for example, a Deployment ), for static Pods, the kubelet directly supervises each static Pod (and restarts it if it fails). Static Pods are always bound to one Kubelet on a specific node. The main use for static Pods is to run a self-hosted control plane: in other words, using the kubelet to supervise the individual control plane components . The kubelet automatically tries to create a mirror Pod on the Kubernetes API server for each static Pod. This means
that the Pods running on a node are visible on the API server, but cannot be controlled from there. See the guide Create static Pods for more information. Note: The spec of a static Pod cannot refer to other API objects (e.g., ServiceAccount, ConfigMap, Secret, etc.).
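As a brief illustration, static Pods are typically picked up from a directory of manifests configured on the kubelet; a kubelet configuration sketch, assuming the conventional manifests directory, looks like this:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Directory the kubelet watches for static Pod manifests (path is an example)
staticPodPath: /etc/kubernetes/manifests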
Pods with multiple containers Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated. Pods in a Kubernetes cluster are used in two main ways: Pods that run a single container . The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly. Pods that run multiple containers that need to work together . A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service—for example, one conta
iner serving data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit. For example, you might have a container that acts as a web server for files in a shared volume, and a separate sidecar container that updates those files from a remote source, as in the following diagram: Pod creation diagram Some Pods have init containers as well as app containers . By default, init containers run and complete before the app containers are started. You can also have sidecar containers that provide auxiliary services to the main application Pod (for example: a service mesh). FEATURE STATE: Kubernetes v1.29 [beta] Enabled by default, the SidecarContainers feature gate allows you to specify restartPolicy: Always for init containers. Setting the Always restart policy ensures that the containers where you set it are treated as
sidecars that are kept running during the entire lifetime of the Pod. Containers that you explicitly define as sidecar containers start up before the main application Pod and remain running until the Pod is shut down.
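A minimal sketch of that pattern, assuming the SidecarContainers feature gate is enabled: the init container below sets restartPolicy: Always, so it is treated as a sidecar and keeps running alongside the app container (names, images, and paths are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar       # example name
spec:
  initContainers:
  - name: logshipper           # hypothetical sidecar
    image: busybox:1.28
    command: ['sh', '-c', 'tail -F /opt/logs.txt']
    restartPolicy: Always      # container-level restartPolicy marks this init container as a sidecar
    volumeMounts:
    - name: data
      mountPath: /opt
  containers:
  - name: myapp
    image: busybox:1.28
    command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
    volumeMounts:
    - name: data
      mountPath: /opt
  volumes:
  - name: data
    emptyDir: {}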
Container probes
A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke different actions:
• ExecAction (performed with the help of the container runtime)
• TCPSocketAction (checked directly by the kubelet)
• HTTPGetAction (checked directly by the kubelet)
You can read more about probes in the Pod Lifecycle documentation.
What's next
• Learn about the lifecycle of a Pod.
• Learn about RuntimeClass and how you can use it to configure different Pods with different container runtime configurations.
• Read about PodDisruptionBudget and how you can use it to manage application availability during disruptions.
• Pod is a top-level resource in the Kubernetes REST API. The Pod object definition describes the object in detail.
• The Distributed System Toolkit: Patterns for Composite Containers explains common layouts for Pods with more than one container.
• Read about Pod topology spread constraints.
• To understand the context for why Kubernetes wraps a common Pod API in other resources (such as StatefulSets or Deployments), you can read about the prior art, including: Aurora, Borg, Marathon, Omega, Tupperware.
Pod Lifecycle
This page describes the lifecycle of a Pod. Pods follow a defined lifecycle, starting in the Pending phase, moving through Running if at least one of its primary containers starts OK, and then thro
ugh either the Succeeded or Failed phases depending on whether any container in the Pod terminated in failure. Whilst a Pod is running, the kubelet is able to restart containers to handle some kind of faults. Within a Pod, Kubernetes tracks different container states and determines what action to take to make the Pod healthy again. In the Kubernetes API, Pods have both a specification and an actual status. The status for a Pod object consists of a set of Pod conditions . You can also inject custom readiness information into the condition data for a Pod, if that is useful to your application. Pods are only scheduled once in their lifetime. Once a Pod is scheduled (assigned) to a Node, the Pod runs on that Node until it stops or is terminated . Pod lifetime Like individual application containers, Pods are considered to be relatively ephemeral (rather than durable) entities. Pods are created, assigned a unique ID ( UID), and scheduled to nodes where they remain until termination (acc
ording to restart policy) or deletion. If a Node dies, the Pods scheduled to that node are scheduled for deletion after a timeout period. Pods do not, by themselves, self-heal. If a Pod is scheduled to a node that then fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a controller, that handles the work of managing the relatively disposable Pod instances.
A given Pod (as defined by a UID) is never "rescheduled" to a different node; instead, that Pod can be replaced by a new, near-identical Pod, with even the same name if desired, but with a different UID. When something is said to have the same lifetime as a Pod, such as a volume , that means that the thing exists as long as that specific Pod (with that exact UID) exists. If that Pod is deleted for any reason, and even if an identical replacement is created, the related thing (a volume, in this example) is also destroyed and created anew. Pod diagram A multi-container Pod that contains a file puller and a web server that uses a persistent volume for shared storage between the containers. Pod phase A Pod's status field is a PodStatus object, which has a phase field. The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The phase is not intended to be a comprehensive rollup of observations of container or Pod state, nor is it intended to be a comprehe
nsive state machine. The number and meanings of Pod phase values are tightly guarded. Other than what is documented here, nothing should be assumed about Pods that have a given phase value.
Here are the possible values for phase:
• Pending: The Pod has been accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes time a Pod spends waiting to be scheduled as well as the time spent downloading container images over the network.
• Running: The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
• Succeeded: All containers in the Pod have terminated in success, and will not be restarted.
• Failed: All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
• Unknown: For some reason the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the Pod should be running.
Note: When a Pod is being deleted, it is shown as Terminating by some kubectl commands. This Terminating status is not one of the Pod phases. A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.
Since Kubernetes 1.27, the kubelet transitions deleted Pods, except for static Pods and force-deleted Pods without a finalizer, to a terminal phase (Failed or Succeeded depending on the exit statuses of the pod containers) before their deletion from the API server.
If a node dies or is disconnected from the rest of the cluster, Kubernetes applies a policy for setting the phase of all Pods on the lost node to Failed.
Container states As well as the phase of the Pod overall, Kubernetes tracks the state of each container inside a Pod. You can use container lifecycle hooks to trigger events to run at certain points in a container's lifecycle. Once the scheduler assigns a Pod to a Node, the kubelet starts creating containers for that Pod using a container runtime . There are three possible container states: Waiting , Running , and Terminated . To check the state of a Pod's containers, you can use kubectl describe pod <name-of-pod> . The output shows the state for each container within that Pod. Each state has a specific meaning: Waiting If a container is not in either the Running or Terminated state, it is Waiting . A container in the Waiting state is still running the operations it requires in order to complete start up: for example, pulling the container image from a container image registry, or applying Secret data. When you use kubectl to query a Pod with a container that is Waiting , you
also see a Reason field to summarize why the container is in that state. Running The Running status indicates that a container is executing without issues. If there was a postStart hook configured, it has already executed and finished. When you use kubectl to query a Pod with a container that is Running , you also see information about when the container entered the Running state. Terminated A container in the Terminated state began execution and then either ran to completion or failed for some reason. When you use kubectl to query a Pod with a container that is Terminated , you see a reason, an exit code, and the start and finish time for that container's period of execution. If a container has a preStop hook configured, this hook runs before the container enters the Terminated state. Container restart policy The spec of a Pod has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always. The restartPolicy for a Pod applies to ap
p containers in the Pod and to regular init containers . Sidecar containers ignore the Pod-level restartPolicy field: in Kubernetes, a sidecar is defined as an entry inside initContainers that has its container-level restartPolicy set to Always . For init containers that exit with an error, the kubelet restarts the init container if the Pod level restartPolicy is either OnFailure or Always . When the kubelet is handling container restarts according to the configured restart policy, that only applies to restarts that make replacement containers inside the same Pod and running o
n the same node. After containers in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s, 40s, ...) that is capped at five minutes. Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container. Sidecar containers and Pod lifecycle explains the behaviour of init containers when the restartPolicy field is specified on them.
Pod conditions
A Pod has a PodStatus, which has an array of PodConditions through which the Pod has or has not passed. Kubelet manages the following PodConditions:
• PodScheduled: the Pod has been scheduled to a node.
• PodReadyToStartContainers: (beta feature; enabled by default) the Pod sandbox has been successfully created and networking configured.
• ContainersReady: all containers in the Pod are ready.
• Initialized: all init containers have completed successfully.
• Ready: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services.
Each Pod condition has the following fields:
• type: Name of this Pod condition.
• status: Indicates whether that condition is applicable, with possible values "True", "False", or "Unknown".
• lastProbeTime: Timestamp of when the Pod condition was last probed.
• lastTransitionTime: Timestamp for when the Pod last transitioned from one status to another.
• reason: Machine-readable, UpperCamelCase text indicating the reason for the condition's last transition.
• message: Human-readable message indicating details about the last status transition.
Pod readiness
FEATURE STATE: Kubernetes v1.14 [stable]
Your application can inject extra feedback or signals into PodStatus: Pod readiness. To use this, set readinessGates in the Pod's spec to specify a list of additional conditions that the kubelet evaluates for Pod readiness. Readiness gates are determined by the current state of status.condition fields for the Pod. If Kubernetes cannot find such a condition in the status.conditions field of a Pod, the status of the condition is defaulted to "False". Here is an example:
kind: Pod
...
spec:
  readinessGates:
    - conditionType: "www.example.com/feature-1"
status:
  conditions:
    - type: Ready                              # a built in PodCondition
      status: "False"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
    - type: "www.example.com/feature-1"        # an extra PodCondition
      status: "False"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
  containerStatuses:
    - containerID: docker://abcd...
      ready: true
...
The Pod conditions you add must have names that meet the Kubernetes label key format.
Status for Pod readiness
The kubectl patch command does not support patching object status. To set these status.conditions for the Pod, applications and operators should use the PATCH action. You can use a Kubernetes client library to write code that sets custom Pod conditions for Pod readiness.
For a Pod that uses custom conditions, that Pod is evaluated to be ready only when both the following statements apply:
• All containers in the Pod are ready.
• All conditions specified in readinessGates are True.
When a Pod's containers are Ready but at least one cu
stom condition is missing or False , the kubelet sets the Pod's condition to ContainersReady . Pod network readiness FEATURE STATE: Kubernetes v1.29 [beta] Note: During its early development, this condition was named PodHasNetwork . After a Pod gets scheduled on a node, it needs to be admitted by the kubelet and to have any required storage volumes mounted. Once these phases are complete, the kubelet works with a container runtime (using Container runtime interface (CRI) ) to set up a runtime sandbox and configure networking for the Pod. If the PodReadyToStartContainersCondition feature gate is enabled (it is enabled by default for Kubernetes 1.29), the PodReadyToStartContainers condition will be added to the status.conditions field of a Pod. The PodReadyToStartContainers condition is set to False by the Kubelet when it detects a Pod does not have a runtime sandbox with networking configured. This occurs in the following scenarios: Early in the lifecycle of the Pod, when the k
• Early in the lifecycle of the Pod, when the kubelet has not yet begun to set up a sandbox for the Pod using the container runtime.
• Later in the lifecycle of the Pod, when the Pod sandbox has been destroyed due to either:
  ◦ the node rebooting, without the Pod getting evicted
  ◦ for container runtimes that use virtual machines for isolation, the Pod sandbox virtual machine rebooting, which then requires creating a new sandbox and fresh container network configuration.
The PodReadyToStartContainers condition is set to True by the kubelet after the successful completion of sandbox creation and network configuration for the Pod by the runtime plugin. The kubelet can start pulling container images and create containers after the PodReadyToStartContainers condition has been set to True.

For a Pod with init containers, the kubelet sets the Initialized condition to True after the init containers have successfully completed (which happens after successful sandbox creation and network configuration by the runtime plugin). For a Pod without init containers, the kubelet sets the Initialized condition to True before sandbox creation and network configuration starts.
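For illustration, once sandbox creation and network configuration have completed, the condition appears in the Pod's status roughly as sketched below; the timestamp is a placeholder:

status:
  conditions:
  - type: PodReadyToStartContainers
    status: "True"
    lastProbeTime: null
    lastTransitionTime: "2024-01-01T00:00:00Z"   # placeholder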
Pod scheduling readiness

FEATURE STATE: Kubernetes v1.26 [alpha]

See Pod Scheduling Readiness for more information.

Container probes

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.

Check mechanisms

There are four different ways to check a container using a probe. Each probe must define exactly one of these four mechanisms:

exec
    Executes a specified command inside the container. The diagnostic is considered successful if the command exits with a status code of 0.
grpc
    Performs a remote procedure call using gRPC. The target should implement gRPC health checks. The diagnostic is considered successful if the status of the response is SERVING.
httpGet
    Performs an HTTP GET request against the Pod's IP address on a specified port and path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.
tcpSocket
    Performs a TCP check against the Pod's IP address on a specified port. The diagnostic is considered successful if the port is open. If the remote system (the container) closes the connection immediately after it opens, this counts as healthy.
Caution: Unlike the other mechanisms, the exec probe involves the creation (forking) of multiple processes each time it is executed. As a result, in clusters with high pod density and low initialDelaySeconds and periodSeconds intervals, configuring probes with the exec mechanism might introduce overhead on the CPU usage of the node. In such scenarios, consider using the alternative probe mechanisms to avoid the overhead.
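To make the field layout concrete, here is a minimal sketch of a probe declared with the httpGet mechanism, with the other mechanisms shown as commented alternatives; the container name, image, endpoint path, and port are assumptions, and exactly one mechanism may be set per probe:

containers:
- name: my-app                          # hypothetical container name
  image: registry.example/my-app:1.0    # hypothetical image
  livenessProbe:                        # the same mechanisms apply to the readiness and startup probes described below
    httpGet:
      path: /healthz                    # assumed health endpoint
      port: 8080                        # assumed container port
    # Alternatives (use exactly one mechanism per probe):
    # exec:
    #   command: ["cat", "/tmp/healthy"]
    # tcpSocket:
    #   port: 8080
    # grpc:
    #   port: 8080
    periodSeconds: 10
    failureThreshold: 3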
Probe outcome

Each probe has one of three results:

Success
    The container passed the diagnostic.
Failure
    The container failed the diagnostic.
Unknown
    The diagnostic failed (no action should be taken, and the kubelet will make further checks).

Types of probe

The kubelet can optionally perform and react to three kinds of probes on running containers:

livenessProbe
    Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a liveness probe, the default state is Success.
readinessProbe
    Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success.
startupProbe
    Indicates whether the application within the container is started. All other probes are disabled if a startup probe is provided, until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a startup probe, the default state is Success.
For more information about how to set up a liveness, readiness, or startup probe, see Configure Liveness, Readiness and Startup Probes.

When should you use a liveness probe?

If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod's restartPolicy.

If you'd like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a restartPolicy of Always or OnFailure.

When should you use a readiness probe?

If you'd like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe.
In this case, the readiness probe might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without receiving any traffic and only start receiving traffic after the probe starts succeeding.

If you want your container to be able to take itself down for maintenance, you can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe.

If your app has a strict dependency on back-end services, you can implement both a liveness and a readiness probe. The liveness probe passes when the app itself is healthy, but the readiness probe additionally checks that each required back-end service is available. This helps you avoid directing traffic to Pods that can only respond with error messages.
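As a rough sketch of that pattern, the readiness probe below calls a dedicated endpoint that the application only serves successfully once its required back ends are reachable, while the liveness probe checks the app itself; the container name, image, paths, and port are assumptions:

containers:
- name: web                             # hypothetical container
  image: registry.example/web:1.0       # hypothetical image
  livenessProbe:
    httpGet:
      path: /healthz                    # checks only the app process (assumed endpoint)
      port: 8080
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /ready                      # additionally verifies required back ends (assumed endpoint)
      port: 8080
    periodSeconds: 5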
If your container needs to work on loading large data, configuration files, or migrations during startup, you can use a startup probe. However, if you want to detect the difference between an app that has failed and an app that is still processing its startup data, you might prefer a readiness probe.

Note: If you want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. The Pod remains in the unready state while it waits for the containers in the Pod to stop.

When should you use a startup probe?

Startup probes are useful for Pods that have containers that take a long time to come into service. Rather than set a long liveness interval, you can configure a separate configuration for probing the container as it starts up, allowing a time longer than the liveness interval would allow.
If your container usually starts in more than initialDelaySeconds + failureThreshold × periodSeconds, you should specify a startup probe that checks the same endpoint as the liveness probe. The default for periodSeconds is 10s. You should then set its failureThreshold high enough to allow the container to start, without changing the default values of the liveness probe. This helps to protect against deadlocks.
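For example, a container that can take up to about five minutes to become ready might keep the default liveness settings and add a startup probe against the same endpoint. The sketch below is illustrative only: the container name, image, endpoint, port, and thresholds are assumptions (30 × 10s allows up to 300 seconds for startup):

containers:
- name: slow-starter                    # hypothetical container
  image: registry.example/slow:1.0      # hypothetical image
  livenessProbe:
    httpGet:
      path: /healthz                    # assumed endpoint, shared with the startup probe
      port: 8080
    periodSeconds: 10                   # liveness defaults left unchanged
    failureThreshold: 3
  startupProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
    failureThreshold: 30                # 30 × 10s gives the container up to 300s to start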
Termination of Pods

Because Pods represent processes running on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (rather than being abruptly stopped with a KILL signal and having no chance to clean up).

The design aim is for you to be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When you request deletion of a Pod, the cluster records and tracks the intended grace period before the Pod is allowed to be forcefully killed. With that forceful shutdown tracking in place, the kubelet attempts graceful shutdown.

Typically, with this graceful termination of the Pod, the kubelet makes requests to the container runtime to attempt to stop the containers in the Pod by first sending a TERM (also known as SIGTERM) signal, with a grace period timeout, to the main process in each container. The requests to stop the containers are processed by the container runtime asynchronously. There is no guarantee to the order of processing for these requests. Many container runtimes respect the STOPSIGNAL value defined in the container image and, if different, send the container image configured STOPSIGNAL instead of TERM.

Once the grace period has expired, the KILL signal is sent to any remaining processes, and the Pod is then deleted from the API Server. If the kubelet or the container runtime's management service is restarted while waiting for processes to terminate, the cluster retries from the start, including the full original grace period.

An example flow:
1. You use the kubectl tool to manually delete a specific Pod, with the default grace period (30 seconds).
2. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period. If you use kubectl describe to check the Pod you're deleting, that Pod shows up as "Terminating".
3. On the node where the Pod is running: as soon as the kubelet sees that a Pod has been marked as terminating (a graceful shutdown duration has been set), the kubelet begins the local Pod shutdown process.
   1. If one of the Pod's containers has defined a preStop hook and the terminationGracePeriodSeconds in the Pod spec is not set to 0, the kubelet runs that hook inside of the container. The default terminationGracePeriodSeconds setting is 30 seconds. If the preStop hook is still running after the grace period expires, the kubelet requests a small, one-off grace period extension of 2 seconds.
      Note: If the preStop hook needs longer to complete than the default grace period allows, you must modify terminationGracePeriodSeconds to suit this. A minimal preStop sketch is shown after this numbered flow.
   2. The kubelet triggers the container runtime to send a TERM signal to process 1 inside each container.
      Note: The containers in the Pod receive the TERM signal at different times and in an arbitrary order. If the order of shutdowns matters, consider using a preStop hook to synchronize.
4. At the same time as the kubelet is starting graceful shutdown of the Pod, the control plane evaluates whether to remove that shutting-down Pod from EndpointSlice (and Endpoints) objects, where those objects represent a Service with a configured selector. ReplicaSets and other workload resources no longer treat the shutting-down Pod as a valid, in-service replica.
   Pods that shut down slowly should not continue to serve regular traffic and should start terminating and finish processing open connections. Some applications need to go beyond finishing open connections and need more graceful termination, for example, session draining and completion.
   Any endpoints that represent the terminating Pods are not immediately removed from EndpointSlices, and a status indicating terminating state is exposed from the EndpointSlice API (and the legacy Endpoints API). Terminating endpoints always have their ready status as false (for backward compatibility with versions before 1.26), so load balancers will not use them for regular traffic.
   If traffic draining on a terminating Pod is needed, the actual readiness can be checked as a condition serving. You can find more details on how to implement connections draining in the tutorial Pods And Endpoints Termination Flow.
   Note: If you don't have the EndpointSliceTerminatingCondition feature gate enabled in your cluster (the gate is on by default from Kubernetes 1.22, and locked to default in 1.26), then the Kubernetes control plane removes a Pod from any relevant EndpointSlices as soon as the Pod's termination grace period begins. The behavior above is described when the feature gate EndpointSliceTerminatingCondition is enabled.
   Note: Beginning with Kubernetes 1.29, if your Pod includes one or more sidecar containers (init containers with an Always restart policy), the kubelet will delay sending the TERM signal to these sidecar containers until the last main container has fully terminated.
   The sidecar containers will be terminated in the reverse order they are defined in the Pod spec. This ensures that sidecar containers continue serving the other containers in the Pod until they are no longer needed.
   Note that slow termination of a main container will also delay the termination of the sidecar containers. If the grace period expires before the termination process is complete, the Pod may enter emergency termination. In this case, all remaining containers in the Pod will be terminated simultaneously with a short grace period. Similarly, if the Pod has a preStop hook that exceeds the termination grace period, emergency termination may occur. In general, if you have used preStop hooks to control the termination order without sidecar containers, you can now remove them and allow the kubelet to manage sidecar termination automatically.
5. When the grace period expires, the kubelet triggers forcible shutdown. The container runtime sends SIGKILL to any processes still running in any container in the Pod. The kubelet also cleans up a hidden pause container if that container runtime uses one.
6. The kubelet transitions the Pod into a terminal phase (Failed or Succeeded depending on the end state of its containers). This step is guaranteed since version 1.27.
7. The kubelet triggers forcible removal of the Pod object from the API server, by setting grace period to 0 (immediate deletion).
8. The API server deletes the Pod's API object, which is then no longer visible from any client.
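As referenced in step 3 above, here is a minimal sketch of a preStop hook combined with an extended terminationGracePeriodSeconds; the container name, image, drain command, and the 60-second value are illustrative assumptions rather than recommendations:

spec:
  terminationGracePeriodSeconds: 60     # extended from the 30-second default (assumed value)
  containers:
  - name: web                           # hypothetical container
    image: registry.example/web:1.0     # hypothetical image
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "touch /tmp/draining && sleep 15"]   # hypothetical drain step, runs before TERM is sent

The preStop hook runs before the kubelet sends TERM to the container's main process, which is why it can be used to sequence or delay shutdown.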
Forced Pod termination

Caution: Forced deletions can be potentially disruptive for some workloads and their Pods.

By default, all deletes are graceful within 30 seconds. The kubectl delete command supports the --grace-period=<seconds> option which allows you to override the default and specify your own value.

Setting the grace period to 0 forcibly and immediately deletes the Pod from the API server. If the Pod was still running on a node, that forcible deletion triggers the kubelet to begin immediate cleanup.

Note: You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.

When a force deletion is performed, the API server does not wait for confirmation from the kubelet that the Pod has been terminated on the node it was running on. It removes the Pod in the API immediately so a new Pod can be created with the same name. On the node, Pods that are set to terminate immediately will still be given a small grace period before being force killed.

Caution: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.

If you need to force-delete Pods that are part of a StatefulSet, refer to the task documentation for deleting Pods from a StatefulSet.
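As a concrete illustration of those flags, force-deleting a hypothetical Pod named my-pod would look like this (use with care, for the reasons given in the cautions above):

kubectl delete pod my-pod --grace-period=0 --force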
Garbage collection of Pods

For failed Pods, the API objects remain in the cluster's API until a human or controller process explicitly removes them.

The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up terminated Pods (with a phase of Succeeded or Failed), when the number of Pods exceeds the configured threshold (determined by terminated-pod-gc-threshold in the kube-controller-manager). This avoids a resource leak as Pods are created and terminated over time.
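For example, an administrator who wants terminated Pods cleaned up sooner could lower the threshold by passing the corresponding flag to the kube-controller-manager; the value 500 below is purely illustrative and the other flags are omitted:

kube-controller-manager --terminated-pod-gc-threshold=500 ...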
Additionally, PodGC cleans up any Pods which satisfy any of the following conditions:

• are orphan Pods - bound to a node which no longer exists,
• are unscheduled terminating Pods,
• are terminating Pods, bound to a non-ready node tainted with node.kubernetes.io/out-of-service, when the NodeOutOfServiceVolumeDetach feature gate is enabled.

When the PodDisruptionConditions feature gate is enabled, along with cleaning up the Pods, PodGC will also mark them as failed if they are in a non-terminal phase. Also, PodGC adds a Pod disruption condition when cleaning up an orphan Pod. See Pod disruption conditions for more details.

What's next

• Get hands-on experience attaching handlers to container lifecycle events.
• Get hands-on experience configuring Liveness, Readiness and Startup Probes.
• Learn more about container lifecycle hooks.
• Learn more about sidecar containers.
• For detailed information about Pod and container status in the API, see the API reference documentation covering status for Pod.

Init Containers

This page provides an overview of init containers: specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.

You can specify init containers in the Pod specification alongside the containers array (which describes app containers).

In Kubernetes, a sidecar container is a container that starts before the main application container and continues to run.
This document is about init containers: containers that run to completion during Pod initialization.
Understanding init containers

A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.

Init containers are exactly like regular containers, except:

• Init containers always run to completion.
• Each init container must complete successfully before the next one starts.

If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds. However, if the Pod has a restartPolicy of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.

To specify an init container for a Pod, add the initContainers field into the Pod specification, as an array of container items (similar to the app containers field and its contents). See Container in the API reference for more details.

The status of the init containers is returned in the .status.initContainerStatuses field as an array of the container statuses (similar to the .status.containerStatuses field).
Differences from regular containers

Init containers support all the fields and features of app containers, including resource limits, volumes, and security settings. However, the resource requests and limits for an init container are handled differently, as documented in Resource sharing within containers.

Regular init containers (in other words: excluding sidecar containers) do not support the lifecycle, livenessProbe, readinessProbe, or startupProbe fields. Init containers must run to completion before the Pod can be ready; sidecar containers continue running during a Pod's lifetime, and do support some probes. See sidecar container for further details about sidecar containers.

If you specify multiple init containers for a Pod, the kubelet runs each init container sequentially. Each init container must succeed before the next can run. When all of the init containers have run to completion, the kubelet initializes the application containers for the Pod and runs them as usual.
Differences from sidecar containers

Init containers run and complete their tasks before the main application container starts. Unlike sidecar containers, init containers are not continuously running alongside the main containers.

• Init containers run to completion sequentially, and the main container does not start until all the init containers have successfully completed.
• Init containers do not support lifecycle, livenessProbe, readinessProbe, or startupProbe, whereas sidecar containers support all these probes to control their lifecycle.
• Init containers share the same resources (CPU, memory, network) with the main application containers but do not interact directly with them. They can, however, use shared volumes for data exchange.
Using init containers

Because init containers have separate images from app containers, they have some advantages for start-up related code:

• Init containers can contain utilities or custom code for setup that are not present in an app image. For example, there is no need to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup.
• The application image builder and deployer roles can work independently without the need to jointly build a single app image.
• Init containers can run with a different view of the filesystem than app containers in the same Pod. Consequently, they can be given access to Secrets that app containers cannot access.
• Because init containers run to completion before any app containers start, init containers offer a mechanism to block or delay app container startup until a set of preconditions are met. Once preconditions are met, all of the app containers in a Pod can start in parallel.
• Init containers can securely run utilities or custom code that would otherwise make an app container image less secure. By keeping unnecessary tools separate you can limit the attack surface of your app container image.
Examples

Here are some ideas for how to use init containers:

• Wait for a Service to be created, using a shell one-line command like:
  for i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1
• Register this Pod with a remote server from the downward API with a command like:
  curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
• Wait for some time before starting the app container with a command like sleep 60.
• Clone a Git repository into a Volume.
• Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app container. For example, place the POD_IP value in a configuration and generate the main app configuration file using Jinja.

Init containers in use

This example defines a simple Pod that has two init containers. The first waits for myservice, and the second waits for mydb. Once both init containers complete, the Pod runs the app container from its spec section.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]

You can start this Pod by running:

kubectl apply -f myapp.yaml

The output is similar to this:

pod/myapp-pod created

And check on its status with:

kubectl get -f myapp.yaml

The output is similar to this:

NAME        READY     STATUS     RESTARTS   AGE
myapp-pod   0/1       Init:0/2   0          6m

or for more details:

kubectl describe -f myapp.yaml
The output is similar to this:

Name:          myapp-pod
Namespace:     default
[...]
Labels:        app.kubernetes.io/name=MyApp
Status:        Pending
[...]
Init Containers:
  init-myservice:
    [...]
    State:         Running
    [...]
  init-mydb:
    [...]
    State:         Waiting
      Reason:       PodInitializing
      Ready:        False
[...]
Containers:
  myapp-container:
    [...]
    State:         Waiting
      Reason:       PodInitializing
    Ready:         False
[...]
Events:
  FirstSeen    LastSeen    Count    From                      SubObjectPath                          Type      Reason      Message
  ---------    --------    -----    ----                      -------------                          --------  ------      -------
  16s          16s         1        {default-scheduler }                                             Normal    Scheduled   Successfully assigned myapp-pod to 172.17.4.201
  16s          16s         1        {kubelet 172.17.4.201}    spec.initContainers{init-myservice}    Normal    Pulling     pulling image "busybox"
  13s          13s         1        {kubelet 172.17.4.201}    spec.initContainers{init-myservice}    Normal    Pulled      Successfully pulled image "busybox"
  13s          13s