Kubernetes Fundamentals

Fundamental terminology

Container – a lightweight, standalone, and executable package that can hold a workload.

Workload – an application. This can be a single component or multiple components that work together. A simple example is a ping job that runs every hour; a complex example is a web server that needs multiple containers working together.

Orchestration – In Kubernetes, this refers to the automatic creation, deletion, modification, and querying of containers.

Virtual Machine (VM) – Software that virtualizes or emulates a complete machine, including its own Operating System. It runs as a program on a physical host machine.

Docker – A suite of software development tools for creating, sharing, and running containers.

Resources – Refers to the available CPU, Memory, Storage, Network, and anything else that a Workload would need to run.

Server – An abstract machine that hosts and runs software. A server can run on a single physical machine or on multiple physical machines acting as one.

What is Kubernetes?

Kubernetes is a platform for orchestrating, managing, and operating containerized workloads. This simply means Kubernetes can automatically create and manage containers.

Kubernetes is portable, extensible, and open-source.

Google created Kubernetes for their internal use. It was designed for large scale, as its primary use was running Google’s production workloads. Google open-sourced Kubernetes to the public in 2014.

Origin of the Kubernetes name

Kubernetes comes from the Greek word for helmsman or pilot, which is reflected in the ship’s wheel in the logo.

Kubernetes Fundamentals Cheat Sheet

Kubernetes is also abbreviated as K8s, since there are 8 letters between the “K” and the “s”.

Deployment without Kubernetes

Without Kubernetes, native applications running on physical servers can use up large amounts of resources such as memory and disk space. There is no generic way of limiting an application’s access to these resources, since it runs directly on the bare-metal machine.

A possible solution is to run each application on a different server, but this is expensive in both hardware and maintenance.

Deployment with Virtualization

Without Kubernetes, Virtual Machines can be used to isolate applications from each other. Resources can be statically allocated to each VM when it is created.

However, a VM is a complete machine running its own Operating System. The host machine still spends resources running each VM, which in turn spends those resources running its own OS as well as the application inside it. As such, virtualization is inherently resource intensive.

Deployment with Containers via Kubernetes

Kubernetes is the platform that manages the deployment of containers.

Internally, Kubernetes uses containers to isolate applications from each other on one or more servers. Kubernetes is also responsible for configuring the resources allowed per container, so each container gets a quota of resources that it can use.
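
To make the per-container resource quota concrete, here is a minimal sketch of a Pod manifest; the names, image, and values are hypothetical, but the requests and limits fields are how Kubernetes reserves and caps CPU and memory for a container.

```yaml
# Minimal sketch of per-container resource quotas (names and values are hypothetical).
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                # hypothetical Pod name
spec:
  containers:
    - name: web                 # hypothetical container name
      image: nginx:1.25         # any container image
      resources:
        requests:               # minimum resources reserved for this container
          cpu: "250m"
          memory: "128Mi"
        limits:                 # maximum resources this container may use
          cpu: "500m"
          memory: "256Mi"
```

Applying a manifest like this (for example with kubectl apply -f pod.yaml) tells Kubernetes to place the container on a server with enough free resources and to cap its usage at the limits.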

What are Containers?

Containers are similar to VMs. However, unlike a VM, a container runs directly on the host’s OS. This removes the overhead of running a separate OS inside a VM, making containers inherently lighter and faster than VMs.

Like a VM, a container has its own resources and runs applications in isolation from other containers. Containers provide benefits such as:

  • Quicker creation and deployment compared to VMs.
  • The same environment stays consistent across development, testing, and production.
  • Rollbacks for container image builds and deployments.
  • Separation of concerns between application development and operations.
  • Support for a microservices architecture in software development.
  • Decoupling of applications from infrastructure, which allows for OS and Cloud portability.
      • OS portability. Containers can run on most modern operating systems, such as Ubuntu, RHEL, and others.
      • Cloud portability. Containers can run on any cloud provider that supports them.
  • Resource isolation between each running container.
  • Observability. Containers can be queried by end users or by applications via an API.

Kubernetes and Docker

Docker is actually a set of tools and products used to create, share, and run containers. In relation to Kubernetes, Docker is primarily used for creating container images.

Starting in Kubernetes v1.20, Docker’s container runtime (dockershim) was deprecated in favor of runtimes that implement Kubernetes’ Container Runtime Interface (CRI), such as containerd, and it was removed entirely in v1.24. Even without Docker’s runtime, Kubernetes can still run container images built with Docker.

In practical terms, you use Docker to build container images. Kubernetes will take those images and run them without any other changes.
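
As a sketch of this hand-off, assume an image named registry.example.com/my-app:1.0 was built and pushed with Docker (the name is hypothetical). A Deployment manifest only references the image by name, and Kubernetes pulls and runs it with whatever CRI-compatible runtime the cluster uses.

```yaml
# Sketch of running a Docker-built image on Kubernetes (all names are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                   # run two identical copies of the container
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # built with docker build, pushed with docker push
          ports:
            - containerPort: 8080
```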

Benefits of Using Kubernetes

Kubernetes was built with the following in mind:

  • Resilience. If a container goes offline, a new container with the same configuration is automatically deployed, ensuring little to no downtime.
  • Scaling. If a service or container needs more resources, Kubernetes can give the container more resources (vertical scaling) or create more instances of the container and balance the load across them (horizontal scaling); see the sketch after this list.
  • Failover. If a container fails, Kubernetes has mechanisms to redirect traffic to a different, ready container.
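
One common way to express horizontal scaling is a HorizontalPodAutoscaler. The sketch below assumes a Deployment named my-app (hypothetical) and asks Kubernetes to keep between 2 and 10 replicas based on average CPU utilization.

```yaml
# Sketch of horizontal scaling with an autoscaler (target name and thresholds are hypothetical).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # the workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU usage exceeds 70%
```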

In addition, Kubernetes has the following features that are useful for large-scale workloads:

  • Service discovery. Kubernetes provides a way for containers and end users to find the containers that provide a given service.
  • Storage orchestration. Kubernetes allows storage from different storage providers to be configured and attached to containers.
  • Automated rollouts. When a new version of a container is deployed, Kubernetes automatically rolls it out across the cluster.
  • Automated rollbacks. If an error occurs after deploying a new version of a container, Kubernetes can roll back to the previous version.
  • Automatic bin packing. Given the resources available on each node, Kubernetes can decide where to run each container to make efficient use of resources.
  • Self-healing. When a container or other managed component fails or drifts from its declared configuration, Kubernetes works to restore the expected state.
  • Secret and configuration management. Kubernetes provides an API for storing secrets (such as passwords and API keys) and application configuration, as sketched below.
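
As a small sketch of the secrets API, the manifest below stores a hypothetical API key. A container can then consume it as an environment variable or a mounted file instead of baking it into the image.

```yaml
# Sketch of a Secret (name, key, and value are hypothetical).
apiVersion: v1
kind: Secret
metadata:
  name: demo-credentials
type: Opaque
stringData:                     # written as plain text; Kubernetes stores it base64-encoded
  api-key: "replace-with-real-value"
```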
