05 Status
The status is mostly used to store the current state of the resource. Since the spec describes our desired state, the status should answer the question: how far is the resource from the desired state?
If that question is answered correctly, retrieving the resource from the API should feel very familiar:
- The spec was defined by the user
- The status is the system's response describing how that spec is being fulfilled (the sketch below makes this concrete)
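To make the question concrete, consider a small sketch (ours, not from the Kubernetes sources) built on the Deployment types from k8s.io/api/apps/v1: the user declares spec.replicas, the controller reports status.readyReplicas, and the difference between the two is exactly the distance from the desired state. The helper name replicasPending is hypothetical:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// replicasPending answers "how far is this Deployment from its desired
// state?" by comparing the user-declared spec with the system-reported status.
func replicasPending(d *appsv1.Deployment) int32 {
	desired := int32(1) // Kubernetes defaults spec.replicas to 1 when unset
	if d.Spec.Replicas != nil {
		desired = *d.Spec.Replicas
	}
	return desired - d.Status.ReadyReplicas
}

func main() {
	replicas := int32(3)
	d := &appsv1.Deployment{}
	d.Spec.Replicas = &replicas // desired state, written by the user
	d.Status.ReadyReplicas = 2  // observed state, written by the controller
	fmt.Println(replicasPending(d)) // prints 1: one replica is still missing
}
```

Kubernetes' own PodStatus shows the same pattern in full: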
```go
// PodStatus represents information about the status of a pod. Status may trail the actual
// state of a system, especially if the node that hosts the pod cannot contact the control
// plane.
type PodStatus struct {
    // The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle.
    // The conditions array, the reason and message fields, and the individual container status
    // arrays contain more detail about the pod's status.
    // There are five possible phase values:
    //
    // Pending: The pod has been accepted by the Kubernetes system, but one or more of the
    // container images has not been created. This includes time before being scheduled as
    // well as time spent downloading images over the network, which could take a while.
    // Running: The pod has been bound to a node, and all of the containers have been created.
    // At least one container is still running, or is in the process of starting or restarting.
    // Succeeded: All containers in the pod have terminated in success, and will not be restarted.
    // Failed: All containers in the pod have terminated, and at least one container has
    // terminated in failure. The container either exited with non-zero status or was terminated
    // by the system.
    // Unknown: For some reason the state of the pod could not be obtained, typically due to an
    // error in communicating with the host of the pod.
    //
    // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-phase
    // +optional
    Phase PodPhase `json:"phase,omitempty" protobuf:"bytes,1,opt,name=phase,casttype=PodPhase"`
    // Current service state of pod.
    // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions
    // +optional
    // +patchMergeKey=type
    // +patchStrategy=merge
    Conditions []PodCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,2,rep,name=conditions"`
    // A human readable message indicating details about why the pod is in this condition.
    // +optional
    Message string `json:"message,omitempty" protobuf:"bytes,3,opt,name=message"`
    // A brief CamelCase message indicating details about why the pod is in this state.
    // e.g. 'Evicted'
    // +optional
    Reason string `json:"reason,omitempty" protobuf:"bytes,4,opt,name=reason"`
    // [...]
    // RFC 3339 date and time at which the object was acknowledged by the Kubelet.
    // This is before the Kubelet pulled the container image(s) for the pod.
    // +optional
    StartTime *metav1.Time `json:"startTime,omitempty" protobuf:"bytes,7,opt,name=startTime"`
    // The list has one entry per init container in the manifest. The most recent successful
    // init container will have ready = true, the most recently started container will have
    // startTime set.
    // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status
    InitContainerStatuses []ContainerStatus `json:"initContainerStatuses,omitempty" protobuf:"bytes,10,rep,name=initContainerStatuses"`
    // The list has one entry per container in the manifest. Each entry is currently the output
    // of `docker inspect`.
    // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status
    // +optional
    ContainerStatuses []ContainerStatus `json:"containerStatuses,omitempty" protobuf:"bytes,8,rep,name=containerStatuses"`
    // The Quality of Service (QOS) classification assigned to the pod based on resource requirements
    // See PodQOSClass type for available QOS classes
    // More info: https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md
    // +optional
    QOSClass PodQOSClass `json:"qosClass,omitempty" protobuf:"bytes,9,rep,name=qosClass"`
    // Status for any ephemeral containers that have run in this pod.
    // This field is alpha-level and is only populated by servers that enable the EphemeralContainers feature.
    // +optional
    EphemeralContainerStatuses []ContainerStatus `json:"ephemeralContainerStatuses,omitempty" protobuf:"bytes,13,rep,name=ephemeralContainerStatuses"`
}

// PodCondition contains details for the current condition of this pod.
type PodCondition struct {
    // Type is the type of the condition.
    // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions
    Type PodConditionType `json:"type" protobuf:"bytes,1,opt,name=type,casttype=PodConditionType"`
    // Status is the status of the condition.
    // Can be True, False, Unknown.
    // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions
    Status ConditionStatus `json:"status" protobuf:"bytes,2,opt,name=status,casttype=ConditionStatus"`
    // Last time we probed the condition.
    // +optional
    LastProbeTime metav1.Time `json:"lastProbeTime,omitempty" protobuf:"bytes,3,opt,name=lastProbeTime"`
    // Last time the condition transitioned from one status to another.
    // +optional
    LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty" protobuf:"bytes,4,opt,name=lastTransitionTime"`
    // Unique, one-word, CamelCase reason for the condition's last transition.
    // +optional
    Reason string `json:"reason,omitempty" protobuf:"bytes,5,opt,name=reason"`
    // Human-readable message indicating details about last transition.
    // +optional
    Message string `json:"message,omitempty" protobuf:"bytes,6,opt,name=message"`
}

// ContainerStatus contains details for the current status of this container.
type ContainerStatus struct {
    // This must be a DNS_LABEL. Each container in a pod must have a unique name.
    // Cannot be updated.
    Name string `json:"name" protobuf:"bytes,1,opt,name=name"`
    // Details about the container's current condition.
    // +optional
    State ContainerState `json:"state,omitempty" protobuf:"bytes,2,opt,name=state"`
    // Details about the container's last termination condition.
    // +optional
    LastTerminationState ContainerState `json:"lastState,omitempty" protobuf:"bytes,3,opt,name=lastState"`
    // Specifies whether the container has passed its readiness probe.
    Ready bool `json:"ready" protobuf:"varint,4,opt,name=ready"`
    // The number of times the container has been restarted, currently based on
    // the number of dead containers that have not yet been removed.
    // Note that this is calculated from dead containers. But those containers are subject to
    // garbage collection. This value will get capped at 5 by GC.
    RestartCount int32 `json:"restartCount" protobuf:"varint,5,opt,name=restartCount"`
    // The image the container is running.
    // More info: https://kubernetes.io/docs/concepts/containers/images
    Image string `json:"image" protobuf:"bytes,6,opt,name=image"`
    // ImageID of the container's image.
    ImageID string `json:"imageID" protobuf:"bytes,7,opt,name=imageID"`
    // Container's ID in the format 'docker://<container_id>'.
    // +optional
    ContainerID string `json:"containerID,omitempty" protobuf:"bytes,8,opt,name=containerID"`
    // Specifies whether the container has passed its startup probe.
    // Initialized as false, becomes true after startupProbe is considered successful.
    // Resets to false when the container is restarted, or if kubelet loses state temporarily.
    // Is always true when no startupProbe is defined.
    // +optional
    Started *bool `json:"started,omitempty" protobuf:"varint,9,opt,name=started"`
}
```
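The conditions array is what most clients actually consume. As an illustration (a sketch of ours, not part of the Kubernetes sources), this is roughly how a client can decide whether a pod is ready by scanning PodStatus.Conditions; the helper name isPodReady is hypothetical, while corev1.PodReady and corev1.ConditionTrue are the real constants:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is present and set
// to "True", which mirrors how kubectl summarizes pod health.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	// No Ready condition reported yet, so we cannot claim readiness.
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Println(isPodReady(pod)) // prints true
}
```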
Following the general guidelines for status and its typical properties, a simple pod with one container produces the following status:
```yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-01-13T13:09:42Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-01-15T13:58:16Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-01-15T13:58:16Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-01-13T13:09:42Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://2d8f270efe9febde9221a458f72667ee310f5c2a1c0967f6d686ec5fbcff7828
    image: sha256:d0dabaae76fc104b91d138914b4484c9464f61297f09c65d7cc60d1961d53365
    imageID: docker-pullable://gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller@sha256:cc5e186131c9141f512786e3e55aca432e4dae841cad55fbb57d51b17b79371a
    lastState:
      terminated:
        containerID: docker://a4be05cd307448c1e261fea1a9766031184f36782875f7408d51c680c21e2563
        exitCode: 1
        finishedAt: "2020-01-15T13:58:14Z"
        reason: Error
        startedAt: "2020-01-15T13:58:04Z"
    name: tekton-pipelines-controller
    ready: true
    restartCount: 1
    state:
      running:
        startedAt: "2020-01-15T13:58:15Z"
  hostIP: 192.168.65.3
  phase: Running
  podIP: 10.1.2.105
  qosClass: BestEffort
  startTime: "2020-01-13T13:09:42Z"
```
- Make the current situation clear, and in case of failure include enough information to clarify the likely cause.
- Avoid complex state machines; prefer multiple independent "flags" to describe a richer scenario (see the sketch below).
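As a closing sketch (our own naming, not from this page), the same guideline applied to a hypothetical custom resource: instead of one enum that tries to encode every combination of events, the status carries independent conditions, each with a reason and a message for the failure case. WidgetStatus is invented for illustration; metav1.Condition is the stock condition type from k8s.io/apimachinery:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WidgetStatus is a hypothetical custom-resource status following the
// guidelines above: no single state machine, just independent flags plus
// human-readable details for failures.
type WidgetStatus struct {
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

func main() {
	status := WidgetStatus{
		Conditions: []metav1.Condition{
			{Type: "Provisioned", Status: metav1.ConditionTrue, Reason: "CreateSucceeded"},
			{Type: "Reachable", Status: metav1.ConditionFalse, Reason: "DNSLookupFailed",
				Message: "host widget.example.com: no such host"},
		},
	}
	for _, c := range status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
}
```

Two independent flags tell a richer story than a single Provisioning/Ready/Broken enum could: here the widget exists but is not reachable, and the Reason and Message point directly at the probable cause.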