
Kubernetes Workload Resources

Last updated on August 10, 2023

A workload is an application that can have one or more components running on Kubernetes. A Pod represents a set of running containers in the cluster.

Kubernetes allows for declarative configuration of workloads and their components. This lets the Control Plane manage the creation, deletion, and modification of the underlying Pods.
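
A minimal sketch of that declarative workflow, assuming a manifest saved in a file named deployment.yaml (the file name is just an example):

# Preview the changes the manifest would make to the cluster
$ kubectl diff -f deployment.yaml

# Create or update the workload declaratively
$ kubectl apply -f deployment.yaml

# Remove the workload when it is no longer needed
$ kubectl delete -f deployment.yaml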

The built-in APIs for managing workload resources are:

  • Deployments – The most commonly used workload for running an application in a Kubernetes Cluster. Deployments manage stateless workloads, where any Pod in the configuration can be replaced if needed. Under the hood, a Deployment creates and manages a ReplicaSet based on the Deployment's declarative configuration.
  • ReplicaSet – A ReplicaSet contains Pods that are identical to each other. Its purpose is to ensure that a specific number of Pods is always available at any given time. Deployments are recommended over managing ReplicaSets directly.
  • StatefulSets – This is used for managing workloads that are stateful. This workload configuration manages the deployment and scaling of a set of unique and persistent Pods. The uniqueness is done by providing each Pod with a sticky identity.
  • DaemonSet – This ensures that all Nodes (or a subset of Nodes) in the Cluster run a copy of a Pod. The concept follows how a system daemon is traditionally used on a Unix server. For example, a driver that allows access to a storage system can be implemented as a DaemonSet.
  • Jobs – These workloads are designed for one-off tasks. Jobs will continue to retry execution until a specified number of Pods successfully terminate. When this number is reached, the Job is marked as complete. The simplest Job uses a single Pod, but Jobs can also run multiple Pods in parallel.
  • CronJobs – These are like Jobs, but they run on a repeating schedule: each scheduled run creates a Job that runs to completion, and nothing runs until the next scheduled time.
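
To see how these workload APIs are exposed in your own cluster, you can list the resources under the apps and batch API groups (the exact output depends on your cluster version):

$ kubectl api-resources --api-group=apps

$ kubectl api-resources --api-group=batch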

Defining a Kubernetes Workload Resource

Deployments

Example of a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

 

Deployment is commonly abbreviated to deploy, including in kubectl. For the following examples, you can use deploy instead of deployment.

(If you want to test using the above example, use nginx-deployment for [deployment name].)

Command to check Deployments:

$ kubectl get deployments

$ kubectl get deploy

Command to get details of Deployment:

$ kubectl describe deployments

$ kubectl describe deployment [deployment name]

Command to see Deployment rollout status:

$ kubectl rollout status deployment/[deployment name]

Command to check history and revisions to the Deployment:

$ kubectl rollout history deployment/[deployment name]

To see the details of each revision:


$ kubectl rollout history deployment/[deployment name] --revision=[index]

$ kubectl rollout history deployment/nginx-deployment --revision=2

To undo the current rollout and rollback to the previous revision:

$ kubectl rollout undo deployment/[deployment name]

To undo the current rollout and rollback to a specific revision:

$ kubectl rollout undo deployment/[deployment name] --to-revision=[index]

$ kubectl rollout undo deployment/nginx-deployment --to-revision=2

To pause the current rollout:

$ kubectl rollout pause deployment/[deployment name]

To resume the paused rollout:

$ kubectl rollout resume deployment/[deployment name]

Command to scale a deployment:

$ kubectl scale deployment/[deployment name] --replicas=[value]

Command to autoscale if horizontal Pod autoscaling is enabled in the cluster:

$ kubectl autoscale deployment/[deployment name] --min=[value] --max=[value] --cpu-percent=[value]

$ kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80
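
Under the hood, the autoscale command creates a HorizontalPodAutoscaler object. A roughly equivalent manifest, sketched here with the autoscaling/v2 API and the same values as the command above, would look like this:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80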

Command to update the Deployment:

$ kubectl set image deployment.v1.apps/nginx-deployment [container name]=[new image]

or

$ kubectl set image deployment/nginx-deployment [container name]=[new image]

For example:

$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1

or

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

Command to edit the Deployment:

$ kubectl edit deployment/nginx-deployment

ReplicaSet

Example of a ReplicaSet:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: gremlin
    tier: frontend
spec:
  replicas: 10
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

ReplicaSet is commonly abbreviated to rs, including in kubectl. For the following examples, you can use rs instead of replicaset.

 

Command to get current ReplicaSets deployed:

$ kubectl get replicaset

$ kubectl get rs

Command to check the state of the ReplicaSet:

$ kubectl describe replicaset/[ReplicaSet name]

$ kubectl describe replicaset/frontend

 

Command to autoscale the ReplicaSet:

$ kubectl autoscale replicaset [ReplicaSet name] --max=[value] --min=[value] --cpu-percent=[value]

$ kubectl autoscale replicaset frontend --max=10 --min=3 --cpu-percent=50
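
Because a ReplicaSet selects its Pods by label, you can also list the Pods it manages by reusing the selector from the example above (tier=frontend in this case):

$ kubectl get pods -l tier=frontend

$ kubectl get pods -l tier=frontend -o wide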

StatefulSets

Example YAML configuration of a StatefulSet (together with the headless Service it requires):

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi

StatefulSet is commonly abbreviated to sts, including in kubectl.

Command to get current StatefulSet deployed:

$ kubectl get statefulset

$ kubectl get sts

Command to check the state of the StatefulSet:

$ kubectl describe statefulset/[StatefulSet name]

$ kubectl describe statefulset/web

Command to scale the StatefulSet:

$ kubectl scale statefulset [StatefulSet name] --replicas=[value]

$ kubectl scale statefulset web --replicas=5
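
Each replica of the example StatefulSet gets its own PersistentVolumeClaim from volumeClaimTemplates, and rollouts can be tracked the same way as for Deployments. A couple of follow-up commands (the app=nginx label and the web name come from the example above):

$ kubectl get pvc -l app=nginx

$ kubectl rollout status statefulset/web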

 

DaemonSet

Example of DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

DaemonSet is commonly abbreviated to ds, including in kubectl.

Command to get current DaemonSet deployed:

$ kubectl get daemonset

$ kubectl get ds


Command to check the state of the DaemonSet:

$ kubectl describe daemonset/[DaemonSet name]
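
DaemonSets also support the kubectl rollout subcommands, which is handy after changing the Pod template (the name fluentd-elasticsearch matches the example above):

$ kubectl rollout status daemonset/fluentd-elasticsearch

$ kubectl rollout history daemonset/fluentd-elasticsearch

$ kubectl rollout restart daemonset/fluentd-elasticsearch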

 

Jobs

Example of Job:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

Command to check Jobs:

$ kubectl get job

Command to check specific Job:

$ kubectl get job [Job name]

$ kubectl get job pi

Command to get details of specific Jobs:

$ kubectl describe job [Job name]

Command to view the logs of a Job:

$ kubectl logs jobs/[Job name]

Command to suspend an active Job:

$ kubectl patch job/[Job name] --type=strategic --patch '{"spec":{"suspend":true}}'

Command to resume a suspended Job:

$ kubectl patch job/[Job name] --type=strategic --patch '{"spec":{"suspend":false}}'
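
As mentioned earlier, Jobs can also run multiple Pods in parallel. A minimal sketch of how that looks, assuming a hypothetical Job that needs 8 successful completions with at most 2 Pods running at a time:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-parallel # hypothetical name for illustration
spec:
  completions: 8    # the Job succeeds once 8 Pods finish successfully
  parallelism: 2    # at most 2 Pods run at the same time
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never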

CronJob

Example of a CronJob:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

The .spec.schedule field uses the Cron syntax:

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │                                   7 is also Sunday on some systems)
# │ │ │ │ │                                   OR sun, mon, tue, wed, thu, fri, sat
# │ │ │ │ │
# * * * * *

For example, 0 0 13 * 5 states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight.

Other than the standard syntax, some macros like @monthly can also be used:

Entry                  | Description                                                 | Equivalent to
@yearly (or @annually) | Run once a year at midnight of 1 January                    | 0 0 1 1 *
@monthly               | Run once a month at midnight of the first day of the month | 0 0 1 * *
@weekly                | Run once a week at midnight on Sunday morning               | 0 0 * * 0
@daily (or @midnight)  | Run once a day at midnight                                  | 0 0 * * *
@hourly                | Run once an hour at the beginning of the hour               | 0 * * * *
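
For example, the hello CronJob above could use a macro in place of the five-field expression, and CronJobs can be inspected or triggered manually with kubectl (the Job name manual-hello below is just an assumed example):

  schedule: "@hourly" # equivalent to "0 * * * *"

$ kubectl get cronjob

$ kubectl get cronjob hello

$ kubectl create job manual-hello --from=cronjob/hello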

Automatic Cleanup for Finished Jobs

After a Job finishes, Kubernetes does not delete the Job workload immediately. Instead, the TTL-after-finished (time to live) controller takes over. This gives clients a window, via the Kubernetes API, to check whether the Job succeeded or failed before it is cleaned up.

This is done by modifying the .spec.ttlSecondsAfterFinished field of a Job.

Once a Job finishes with either the Complete or Failed status condition, the timer starts. When the TTL-after-finished seconds have elapsed, the Job becomes eligible for cascading removal. The Job still follows the normal object lifecycle, including waiting for any finalizers.

The TTL-after-finished controller is only supported for Jobs.

If the .spec.ttlSecondsAfterFinished field is modified after the timer has expired, Kubernetes does not guarantee that the Job will be retained longer, even if the API request to extend the TTL returns a successful response.

This feature is stable starting from Kubernetes v1.23.

Example YAML of using TTL-after-finished:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
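
To confirm the TTL is set on the live object, and to watch the Job disappear once the timer expires, something like the following can be used:

$ kubectl get job pi-with-ttl -o jsonpath='{.spec.ttlSecondsAfterFinished}'

$ kubectl get jobs --watch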

ReplicationController

The ReplicationController ensures that a user-specified number of pod replicas are always up. This ensures that those pods are always running and available.

The ReplicationController checks the number of running pods it is maintaining. If there are too few, it starts more pods; if there are too many, it deletes existing ones. This happens automatically whenever a pod fails, is deleted, or is terminated, or when extra pods are created.

A ReplicationController is recommended even if the workload requires only a single pod. The ReplicationController supervises multiple pods across multiple nodes.

The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one by one.

Example of a ReplicationController that runs 5 copies of the nginx web server:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

ReplicationController is commonly abbreviated to rc, including in kubectl.

 

kubectl command to check the status of the ReplicationController:

$ kubectl describe replicationcontrollers/nginx

$ kubectl describe rc/nginx
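
ReplicationControllers can also be resized with the same scale subcommand used for Deployments and ReplicaSets (the name nginx comes from the example above):

$ kubectl scale rc nginx --replicas=3

$ kubectl scale replicationcontroller nginx --replicas=3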
