This page describes the lifecycle of a Pod.
A Pod's `status` field is a PodStatus object, which has a `phase` field.
The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The phase is not intended to be a comprehensive rollup of observations of Container or Pod state, nor is it intended to be a comprehensive state machine.
The number and meanings of Pod phase values are tightly guarded.
Other than what is documented here, nothing should be assumed about Pods that have a given `phase` value.

Here are the possible values for `phase`:
| Value | Description |
|:------|:------------|
| `Pending` | The Pod has been accepted by the Kubernetes system, but one or more of the Container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while. |
| `Running` | The Pod has been bound to a node, and all of the Containers have been created. At least one Container is still running, or is in the process of starting or restarting. |
| `Succeeded` | All Containers in the Pod have terminated in success, and will not be restarted. |
| `Failed` | All Containers in the Pod have terminated, and at least one Container has terminated in failure. That is, the Container either exited with non-zero status or was terminated by the system. |
| `Unknown` | For some reason the state of the Pod could not be obtained, typically due to an error in communicating with the host of the Pod. |

A Pod has a PodStatus, which has an array of PodConditions through which the Pod has or has not passed. Each element of the PodCondition array has six possible fields:

- The `lastProbeTime` field provides a timestamp for when the Pod condition was last probed.
- The `lastTransitionTime` field provides a timestamp for when the Pod last transitioned from one status to another.
- The `message` field is a human-readable message indicating details about the transition.
- The `reason` field is a unique, one-word, CamelCase reason for the condition’s last transition.
- The `status` field is a string, with possible values “`True`”, “`False`”, and “`Unknown`”.
- The `type` field is a string with the following possible values:
  - `PodScheduled`: the Pod has been scheduled to a node;
  - `Ready`: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services;
  - `Initialized`: all init containers have started successfully;
  - `Unschedulable`: the scheduler cannot schedule the Pod right now, for example due to a lack of resources or other constraints;
  - `ContainersReady`: all containers in the Pod are ready.
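
For illustration, here is a sketch of one entry in a Pod's `status.conditions` combining these fields; the container name in the message is a hypothetical placeholder:

```yaml
status:
  conditions:
    - type: Ready
      status: "False"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
      reason: ContainersNotReady
      message: 'containers with unready status: [my-app]'  # my-app is a placeholder name
```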

A probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a handler implemented by the Container. There are three types of handlers:

- `ExecAction`: Executes a specified command inside the Container. The diagnostic is considered successful if the command exits with a status code of 0.
- `TCPSocketAction`: Performs a TCP check against the Container’s IP address on a specified port. The diagnostic is considered successful if the port is open.
- `HTTPGetAction`: Performs an HTTP GET request against the Container’s IP address on a specified port and path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.
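
As a minimal sketch (the Pod name, images, command, ports, and paths below are illustrative assumptions, not values prescribed by Kubernetes), the three handler types are declared like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-handlers             # hypothetical Pod, for illustration only
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      livenessProbe:
        exec:                      # ExecAction: run a command in the Container
          command: ["cat", "/tmp/healthy"]
    - name: cache
      image: example.com/cache:1.0 # placeholder image
      livenessProbe:
        tcpSocket:                 # TCPSocketAction: check that the port is open
          port: 6379
    - name: web
      image: example.com/web:1.0   # placeholder image
      livenessProbe:
        httpGet:                   # HTTPGetAction: expect a 2xx or 3xx response
          path: /healthz
          port: 8080
```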

Each probe has one of three results:

- Success: The Container passed the diagnostic.
- Failure: The Container failed the diagnostic.
- Unknown: The diagnostic failed, so no action should be taken.
The kubelet can optionally perform and react to two kinds of probes on running Containers:

- `livenessProbe`: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is `Success`.
- `readinessProbe`: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is `Failure`. If a Container does not provide a readiness probe, the default state is `Success`.
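
A minimal sketch of a container carrying both kinds of probes; the name, image, command, port, and timing values are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app                 # hypothetical name
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      livenessProbe:               # failure: kubelet kills the Container, restart policy applies
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 15
      readinessProbe:              # failure: Pod IP removed from matching Service endpoints
        tcpSocket:
          port: 8080
        initialDelaySeconds: 5     # readiness is Failure before this initial delay
```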

If the process in your Container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod’s `restartPolicy`.
If you’d like your Container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a `restartPolicy` of `Always` or `OnFailure`.
If you’d like to start sending traffic to a Pod only when a probe succeeds, specify a readiness probe. In this case, the readiness probe might be the same as the liveness probe, but the existence of the readiness probe in the spec means that the Pod will start without receiving any traffic and only start receiving traffic after the probe starts succeeding.
If your Container needs to work on loading large data, configuration files, or migrations during startup, specify a readiness probe.
If you want your Container to be able to take itself down for maintenance, you can specify a readiness probe that checks an endpoint specific to readiness that is different from the liveness probe.
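
For instance, a container might expose a readiness endpoint distinct from its liveness endpoint so it can report itself unready without being killed; the name, image, paths, and port below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: maintainable-app           # hypothetical name
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz           # failing this causes the kubelet to kill the Container
          port: 8080
      readinessProbe:
        httpGet:
          path: /ready             # failing this only removes the Pod from Service endpoints
          port: 8080
```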
Note that if you just want to be able to drain requests when the Pod is deleted, you do not necessarily need a readiness probe; on deletion, the Pod automatically puts itself into an unready state regardless of whether the readiness probe exists. The Pod remains in the unready state while it waits for the Containers in the Pod to stop.
For more information about how to set up a liveness or readiness probe, see Configure Liveness and Readiness Probes.

In order to add extensibility to Pod readiness by enabling the injection of extra feedback or signals into PodStatus, Kubernetes 1.11 introduced a feature named Pod Ready++. You can use the new field `readinessGates` in the PodSpec to specify additional conditions to be evaluated for Pod readiness. If Kubernetes cannot find such a condition in the `status.conditions` field of a Pod, the status of the condition defaults to “`False`”. Below is an example:

```yaml
kind: Pod
...
spec:
  readinessGates:
    - conditionType: "www.example.com/feature-1"
status:
  conditions:
    - type: Ready                        # this is a builtin PodCondition
      status: "True"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
    - type: "www.example.com/feature-1"  # an extra PodCondition
      status: "False"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
  containerStatuses:
    - containerID: docker://abcd...
      ready: true
...
```

The new Pod conditions must comply with Kubernetes label key format. Since the `kubectl patch` command still doesn’t support patching object status, the new Pod conditions have to be injected through the `PATCH` action using one of the KubeClient libraries.
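
As a sketch, the condition object such a `PATCH` merges into `status.conditions` could look like the following; the condition type reuses the example above, and the timestamp is illustrative:

```yaml
status:
  conditions:
    - type: "www.example.com/feature-1"
      status: "True"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
```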

With the introduction of new Pod conditions, a Pod is evaluated to be ready only when both the following statements are true:

- All containers in the Pod are ready.
- All conditions specified in `readinessGates` are “`True`”.

To facilitate this change to Pod readiness evaluation, a new Pod condition `ContainersReady` is introduced to capture the old Pod `Ready` condition.

As an alpha feature, the “Pod Ready++” feature has to be explicitly enabled by setting the `PodReadinessGates` feature gate to true.

A PodSpec has a `restartPolicy` field with possible values `Always`, `OnFailure`, and `Never`. The default value is `Always`. `restartPolicy` applies to all Containers in the Pod. `restartPolicy` only refers to restarts of the Containers by the kubelet on the same node. Exited Containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, and the delay is reset after ten minutes of successful execution. As discussed in the Pods document, once bound to a node, a Pod will never be rebound to another node.
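
A minimal sketch of setting the field (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot                        # hypothetical name
spec:
  restartPolicy: OnFailure              # applies to every Container in this Pod
  containers:
    - name: task
      image: example.com/task:1.0       # placeholder image
```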

In general, Pods do not disappear until someone destroys them. This might be a human or a controller. The only exception to this rule is that Pods with a `phase` of `Succeeded` or `Failed` for more than some duration (determined by `terminated-pod-gc-threshold` in the master) will expire and be automatically destroyed.

Three types of controllers are available:

- Use a Job for Pods that are expected to terminate, for example, batch computations. Jobs are appropriate only for Pods with `restartPolicy` equal to `OnFailure` or `Never` (see the sketch after this list).
- Use a ReplicationController, ReplicaSet, or Deployment for Pods that are not expected to terminate, for example, web servers. ReplicationControllers are appropriate only for Pods with a `restartPolicy` of `Always`.
- Use a DaemonSet for Pods that need to run one per machine, because they provide a machine-specific system service.
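
As referenced above, a minimal Job sketch (the name and image are placeholders) pairing a terminating workload with an allowed `restartPolicy`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-compute                   # hypothetical name
spec:
  template:                             # the PodTemplate the controller stamps Pods from
    spec:
      restartPolicy: OnFailure          # Jobs require OnFailure or Never
      containers:
        - name: worker
          image: example.com/worker:1.0 # placeholder image
```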
All three types of controllers contain a PodTemplate. It is recommended to create the appropriate controller and let it create Pods, rather than directly create Pods yourself. That is because Pods alone are not resilient to machine failures, but controllers are.

If a node dies or is disconnected from the rest of the cluster, Kubernetes applies a policy for setting the `phase` of all Pods on the lost node to `Failed`.
Liveness probes are executed by the kubelet, so all requests are made in the kubelet network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
    - args:
        - /server
      image: k8s.gcr.io/liveness
      livenessProbe:
        httpGet:
          # when "host" is not defined, "PodIP" will be used
          # host: my-host
          # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
          # scheme: HTTPS
          path: /healthz
          port: 8080
          httpHeaders:
            - name: X-Custom-Header
              value: Awesome
        initialDelaySeconds: 15
        timeoutSeconds: 1
      name: liveness
```

Here are some example states and events, together with how the kubelet reacts given each `restartPolicy`:

1. Pod is running and has one Container. Container exits with success.
   - Log completion event.
   - If `restartPolicy` is:
     - Always: Restart Container; Pod `phase` stays Running.
     - OnFailure: Pod `phase` becomes Succeeded.
     - Never: Pod `phase` becomes Succeeded.
1. Pod is running and has one Container. Container exits with failure.
   - Log failure event.
   - If `restartPolicy` is:
     - Always: Restart Container; Pod `phase` stays Running.
     - OnFailure: Restart Container; Pod `phase` stays Running.
     - Never: Pod `phase` becomes Failed.
1. Pod is running and has two Containers. Container 1 exits with failure.
   - Log failure event.
   - If `restartPolicy` is:
     - Always: Restart Container; Pod `phase` stays Running.
     - OnFailure: Restart Container; Pod `phase` stays Running.
     - Never: Do not restart Container; Pod `phase` stays Running.
1. Pod is running and has one Container. Container runs out of memory.
   - Container terminates in failure.
   - Log OOM event.
   - If `restartPolicy` is:
     - Always: Restart Container; Pod `phase` stays Running.
     - OnFailure: Restart Container; Pod `phase` stays Running.
     - Never: Log failure event; Pod `phase` becomes Failed.
1. Pod is running, and a disk dies.
   - Kill all Containers.
   - Log appropriate event.
   - Pod `phase` becomes Failed.
   - If running under a controller, the Pod is recreated elsewhere.
1. Pod is running, and its node is segmented out.
   - Node controller waits for timeout.
   - Node controller sets Pod `phase` to Failed.
   - If running under a controller, the Pod is recreated elsewhere.