
Kubernetes and Cloud Native Associate (KCNA) Sample Exam Questions

Last updated on December 8, 2023

Here are 10 Kubernetes and Cloud Native Associate (KCNA) practice exam questions to help you gauge your readiness for the actual exam.

Question 1

What is the smallest deployable unit of computing that you can create and manage in Kubernetes?

  1. Container
  2. kubelet
  3. Node
  4. Pod

Correct Answer: 4

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod is similar to a set of containers with shared namespaces and shared filesystem volumes.

A Pod (as in a pod of whales or pea pod) is a group of one or more containers with shared storage and network resources and a specification for how to run the containers. A Pod’s contents are always co-located and co-scheduled and run in a shared context.

A Pod models an application-specific “logical host” which means that it contains one or more application containers that are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.

As well as application containers, a Pod can contain init containers that run during Pod startup. You can also inject ephemeral containers for debugging if your cluster offers this.

The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation – the same things that isolate a container. Within a Pod’s context, the individual applications may have further sub-isolations applied.
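
To make this concrete, here is a minimal Pod manifest, the smallest thing you can apply to a cluster; the name and image below are placeholders for illustration, not from the question:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # hypothetical name, for illustration only
spec:
  containers:
  - name: nginx
    image: nginx:1.25    # any container image would do
    ports:
    - containerPort: 80

You would create it with kubectl apply -f pod.yaml; note that the unit being deployed is the Pod, not the container inside it.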

Hence, the correct answer is: Pod

Container is incorrect. Although a container is smaller than a Pod, you still cannot deploy a single container in Kubernetes. Remember that the question explicitly asks for the smallest deployable unit of computing that you can create and manage in Kubernetes. You can deploy a pod with one container, but not a lone container without any pod in a Kubernetes cluster.

kubelet is incorrect because this is simply a Kubernetes process, or an agent, that runs on each node in the cluster, which ensures that the containers described in the PodSpecs are running and healthy.

Node is incorrect because a Node is just a virtual machine or a physical server that runs/hosts the Pods of your Kubernetes cluster. Technically, it is bigger than a Pod and a Container; thus, a Node cannot be the smallest deployable unit of computing in Kubernetes.

References:
https://kubernetes.io/docs/concepts/workloads/pods/
https://kubernetes.io/docs/concepts/workloads/controllers/
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

Check out this Kubernetes Services Cheat Sheet:
https://tutorialsdojo.com/kubernetes-services/

Question 2

What does the acronym CNCF stand for?

  1. Cloud Native Computing Foundation
  2. Cloud Native Computing Federation
  3. Container Native Computing Foundation
  4. Container Native Computing Federation

Correct Answer: 1

The Cloud Native Computing Foundation (CNCF) is the open-source, vendor-neutral hub of cloud-native computing that hosts various projects such as Kubernetes, Helm, and Prometheus to make cloud-native universal and sustainable.

CNCF hosts critical components of the global technology infrastructure and brings together the world’s top developers, end users, and vendors. It is also a part of the nonprofit Linux Foundation and runs the largest open-source developer conferences. 


Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.

The Cloud Native Computing Foundation seeks to drive the adoption of this paradigm by fostering and sustaining an ecosystem of open-source, vendor-neutral projects. CNCF democratizes state-of-the-art patterns to make these innovations accessible to everyone.

Hence, the correct answer is: Cloud Native Computing Foundation

All other options are incorrect.

References:
https://www.cncf.io/about/who-we-are/
https://www.cncf.io/

Question 3

Which of the following statements is true regarding the Kubernetes networking model?

  1. Pods can communicate with all other pods on any other node without Network Address Translation (NAT).
  2. Agents running on a node such as system daemons and kubelet can communicate with all pods on any node of the cluster.
  3. Network Address Translation (NAT) is necessary for the pods to communicate with all other pods on any other node.
  4. Agents on a node such as system daemons and kubelet must use CoreDNS to communicate with all pods on that node.

Correct Answer: 1

Every Pod in a cluster gets its own unique cluster-wide IP address. This means you do not need to explicitly create links between Pods, and you almost never need to deal with mapping container ports to host ports.

This creates a clean, backward-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

– Pods can communicate with all other pods on any other node without NAT

– Agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node

For those platforms that support Pods running in the host network (e.g. Linux), when pods are attached to the host network of a node, they can still communicate with all pods on all nodes without NAT.

This model is not only less complex overall, but it is principally compatible with the desire for Kubernetes to enable low-friction porting of apps from VMs to containers. If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model.

Remember that pods can communicate with all other pods on any other node without NAT, and all the agents running in a Kubernetes node can communicate with all pods on that node.
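
As a quick sanity check of this model, you can list each Pod’s cluster-wide IP and reach a Pod on another node directly by that IP; the Pod name and IP below are hypothetical, and this assumes the source Pod’s image ships curl:

# Show the unique cluster-wide IP assigned to every Pod
kubectl get pods -o wide

# From inside one Pod, reach a Pod on a different node by its IP - no NAT involved
kubectl exec pod-a -- curl -s http://10.244.2.7:80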

Hence, the correct answer is: Pods can communicate with all other pods on any other node without Network Address Translation (NAT)

The option that says: Agents running on a node such as system daemons and kubelet can communicate with all pods on any node of the cluster is incorrect because agents can communicate with all pods running in the same node only and not on all nodes of the Kubernetes cluster.

The option that says: Network Address Translation (NAT) is necessary for the pods to communicate with all other pods on any other node is incorrect because NAT is not a requirement for pod-to-pod networking. Keep in mind that pods can communicate with all other pods on any other node without NAT.

The option that says: Agents on a node such as system daemons and kubelet must use CoreDNS to communicate with all pods on that node is incorrect since a DNS server is not required for the kubelet and system daemons to communicate with the pods on the same node; DNS provides name resolution, not connectivity. CoreDNS is a flexible, extensible DNS server that serves as the default cluster DNS in current Kubernetes versions, having replaced kube-dns.

References:
https://kubernetes.io/docs/concepts/services-networking/
https://kubernetes.io/docs/tasks/administer-cluster/coredns/

Check out this Kubernetes Services Cheat Sheet:
https://tutorialsdojo.com/kubernetes-services/

Question 4

In CI/CD, what does the concept of “Continuous Integration” mean?

  1. Frequent and automated code changes by integrating the changes from multiple developers.
  2. Periodic code changes via GitHub Actions from a single developer on a weekly basis.
  3. Frequent code changes that are done manually by integrating the changes from multiple developers.
  4. Automated integration of code changes from a single developer that is done on a daily basis

Correct Answer: 1

Continuous integration (CI) is the practice of frequently and automatically integrating code changes from multiple contributors or developers into the shared source code repository of a single software project. This integration includes the automatic process of testing every code change that your development team commits or merges and, afterward, kicking off a build automatically. Errors, bugs, and security issues can be identified and fixed more easily at a much earlier stage in the development process.

It’s a primary DevOps best practice, allowing developers to frequently merge code changes into a central repository, where the code is then automatically built and tested. Various automated tools are utilized to assert the new code’s correctness before initiating the integration.

Continuous Integration, deployment, and delivery are three phases of an automated software release pipeline, including a DevOps pipeline. These three different phases take software from idea to delivery to the end-user. The integration phase is the very first step in the process. Continuous Integration (CI) covers the process of multiple developers/contributors attempting to merge their code changes with the main code repository of a software project.
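
To illustrate the integration phase, here is a sketch of a minimal CI workflow; it is written as a hypothetical GitHub Actions file at .github/workflows/ci.yml, and the make targets are placeholders for your own build and test commands:

name: ci
on: [push, pull_request]     # run on every integration of code changes
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build      # placeholder build step
      - run: make test       # automated tests gate every merge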

Continuous Delivery (CD) is the next extension of Continuous Integration in the CI/CD process. The delivery phase is responsible for packaging an artifact together to be delivered to the end-users of your application. The CD phase runs automated building tools to generate this deployable artifact. This build phase is kept ‘green’ which means that the artifact should be ready to deploy to your users at any given point in time.

Continuous Deployment (CD) is the final phase of the CI/CD pipeline. The deployment phase is responsible for automatically launching and distributing the software artifact to end users. At deployment time, the artifact has successfully passed the integration and delivery phases seamlessly. This will happen through scripts or tools that automatically move the artifact to public servers or to another mechanism of distribution.

Hence, the correct answer is: Frequent and automated code changes by integrating the changes from multiple developers.

The option that says: Periodic code changes via GitHub Actions from a single developer on a weekly basis is incorrect because continuous integration is meant for code changes that are done frequently by multiple developers. The use of GitHub Actions as a continuous integration and continuous delivery (CI/CD) platform is valid, but Continuous Integration can also be implemented by other CI/CD platforms, not just through GitHub Actions alone.

The option that says: Frequent code changes that are done manually by integrating the changes from multiple developers is incorrect because you cannot implement Continuous Integration without automation. Integrating the code changes from multiple developers manually entails significant management overhead and defeats the very purpose of Continuous Integration.

The option that says: Automated integration of code changes from a single developer that is done on a daily basis is incorrect. Continuous Integration is meant for frequent and automated code changes by integrating the pull requests/changes from multiple developers, and not just for one lone developer. It’s not a strict requirement to do this on a daily basis, contrary to what this option suggests.

References:
https://www.redhat.com/en/topics/devops/what-is-ci-cd
https://www.atlassian.com/continuous-delivery/continuous-integration

Question 5

In Node Selection, the kube-scheduler selects a node for the pod in a 2-step operation, namely Filtering and ________.

  1. Bin Packing
  2. Scoring
  3. Binding
  4. CSI Volume Cloning

Correct Answer: 2

Kube-scheduler is the default scheduler for Kubernetes and runs as part of the control plane. Kube-scheduler is designed so that, if you want and need to, you can write your own scheduling component and use that instead.

Kube-scheduler selects an optimal node to run newly created or not yet scheduled (unscheduled) pods. Since containers in pods – and pods themselves – can have different requirements, the scheduler filters out any nodes that don’t meet a Pod’s specific scheduling needs. Alternatively, the API lets you specify a node for a Pod when you create it, but this is unusual and is only done in special cases.

Kube-scheduler selects a node for the pod in a 2-step operation:

– Filtering

– Scoring

The filtering step finds the set of Nodes where it’s feasible to schedule the Pod. For example, the PodFitsResources filter checks whether a candidate Node has enough available resources to meet a Pod’s specific resource requests. After this step, the node list contains any suitable Nodes; often, there will be more than one. If the list is empty, that Pod isn’t (yet) schedulable.
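
For instance, the resource requests declared in a Pod spec are exactly what the PodFitsResources filter evaluates; a sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: web                # hypothetical Pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "500m"        # nodes with less than 0.5 CPU free are filtered out
        memory: "256Mi"    # nodes with less than 256Mi free are filtered out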

In the scoring step, the scheduler ranks the remaining nodes to choose the most suitable Pod placement. The scheduler assigns a score to each Node that survived filtering, basing this score on the active scoring rules.

Finally, the kube-scheduler assigns the Pod to the Node with the highest ranking. If there is more than one node with equal scores, the kube-scheduler selects one of these at random.

In a cluster, Nodes that meet the scheduling requirements for a Pod are called feasible nodes. If none of the nodes are suitable, the pod remains unscheduled until the scheduler is able to place it.

The scheduler finds feasible Nodes for a Pod and then runs a set of functions to score the feasible Nodes and picks a Node with the highest score among the feasible ones to run the Pod. The scheduler then notifies the API server about this decision in a process called binding.

Factors that need to be taken into account for scheduling decisions include individual and collective resource requirements, hardware / software / policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and so on.

Hence, the correct answer is: Scoring

The option that says: Bin Packing is incorrect because this is a mechanism of “packing” or placing your applications onto your Kubernetes nodes. This is not part of the 2-step node placement of the kube-scheduler service.

The option that says: Binding is incorrect, as this is simply a process that ties one object to another. For example, a pod is bound to a node by a scheduler. There’s no Binding step in the kube-scheduler when it comes to selecting a node for the pod.

The option that says: CSI Volume Cloning is incorrect because this feature only adds support for specifying existing PVCs (PersistentVolumeClaim) for the purposes of indicating that a user would like to clone a Volume.

References:
https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/

Check out this Kubernetes Components Cheat Sheet:
https://tutorialsdojo.com/kubernetes-components/

Question 6

What are the 4 layers of Cloud Native Security?

  1. CI/CD, Clusters, Containers, Code
  2. Cloud, Clusters, Containers, Code
  3. Clusters, Containers, Code, Cloud Controller
  4. CloudEvents, Containers, Cloud, Code

Correct Answer: 2

In Cloud Native architectures, you can think about security in layers. The 4C’s of Cloud Native security are Cloud, Clusters, Containers, and Code. This layered approach augments the defense in depth computing approach to security, which is widely regarded as a best practice for securing software systems.

Each layer of the Cloud Native security model builds upon the next outermost layer. The Code layer benefits from strong base (Cloud, Cluster, Container) security layers. You cannot safeguard against poor security standards in the base layers by addressing security at the Code level.

In many ways, the Cloud (or co-located servers or the corporate datacenter) is the trusted computing base of a Kubernetes cluster. If the Cloud layer is vulnerable (or configured in a vulnerable way), then there is no guarantee that the components built on top of this base are secure. Each cloud provider makes security recommendations for running workloads securely in their environment.

There are two areas of concern for securing Kubernetes:

– Securing the cluster components that are configurable

– Securing the applications which run in the cluster

Depending on the attack surface of your application, you may want to focus on specific aspects of security. For example: If you are running a service (Service A) that is critical in a chain of other resources and a separate workload (Service B) that is vulnerable to a resource exhaustion attack, then the risk of compromising Service A is high if you do not limit the resources of Service B.
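
At the Container and Code layers, for example, you can harden individual workloads with a securityContext; this is a minimal sketch with commonly recommended settings, not a complete policy:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app       # hypothetical Pod
spec:
  containers:
  - name: app
    image: myapp:1.0       # placeholder image
    securityContext:
      runAsNonRoot: true                # refuse to start if the image runs as root
      allowPrivilegeEscalation: false   # block setuid-style privilege gains
      readOnlyRootFilesystem: true      # make the container filesystem immutable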

Hence, the correct answer is: Cloud, Clusters, Containers, Code

The option that says: CI/CD, Clusters, Containers, Code is incorrect because CI/CD is simply a method to frequently deliver apps to customers by introducing automation into the stages of software development.

The option that says: Clusters, Containers, Code, Cloud Controller is incorrect. Cloud Controller is related to cloud-controller-manager, which is a Kubernetes control plane component that embeds cloud-specific control logic. 

The option that says: CloudEvents, Containers, Cloud, Code is incorrect because CloudEvents is just a specification for describing event data in common formats to provide interoperability across services, platforms, and systems.

References:
https://kubernetes.io/docs/concepts/security/overview/
https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/

Question 7

Which of the following provides containers with self-healing capabilities?

  1. Kustomize
  2. Envoy
  3. Helm Charts
  4. Kubernetes

Correct Answer: 4

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. The abbreviation K8s results from counting the eight letters between the “K” and the “s”. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example: Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides you with:

– Service discovery and load balancing – Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.

– Storage orchestration – Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.

– Automated rollouts and rollbacks – You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources to the new container.

– Automatic bin packing – You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.

– Self-healing – Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check (see the probe sketch after this list), and doesn’t advertise them to clients until they are ready to serve.

– Secret and configuration management – Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images and without exposing secrets in your stack configuration.
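
To make the self-healing behavior concrete, here is a sketch of a liveness probe; the endpoint path, port, and timings are illustrative. When the probe fails repeatedly, the kubelet kills the container and restarts it according to the Pod’s restart policy:

apiVersion: v1
kind: Pod
metadata:
  name: self-healing-demo  # hypothetical Pod
spec:
  containers:
  - name: app
    image: myapp:1.0       # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz     # user-defined health check endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10    # probe every 10 seconds; repeated failures trigger a restart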

 

Hence, the correct answer is: Kubernetes

Kustomize is incorrect because this simply provides a solution for customizing Kubernetes resource configuration free from templates and domain-specific language (DSL). 

Envoy is incorrect because this is only an open-source edge and service proxy designed for cloud-native applications.

Helm Charts is incorrect. This is primarily used to help users define, install, and upgrade complex Kubernetes applications and not specifically to provide self-healing capabilities to containers.

References:
https://kubernetes.io/docs/concepts/overview/
https://kubectl.docs.kubernetes.io/guides/introduction/kustomize/
https://helm.sh/

Check out this Kubernetes Fundamentals Cheat Sheet:
https://tutorialsdojo.com/kubernetes-fundamentals/

Question 8

Which of the following is eliminated in Serverless Computing?

  1. Server
  2. Server management
  3. Container orchestration
  4. Scaling

Correct Answer: 2

Serverless computing is a cloud computing execution model that allocates machine resources on an as-used basis. Under a serverless model, developers can build and run applications without having to manage any servers and pay only for the exact amount of resources used. Instead, the cloud service provider is responsible for provisioning, managing, and scaling the cloud infrastructure that runs the application code.

While the name can be misleading, serverless does not mean “no servers.” Instead, serverless apps abstract away the routine infrastructure work associated with application development. You have no visibility into the machines that run your applications, can’t configure them, and don’t have to manage or scale them. In other words, you pay for the service of a server, not the server itself.

Serverless Computing is also highly scalable and can quickly increase its computing capacity in a matter of milliseconds. It can process hundreds or even thousands of requests concurrently without any manual intervention or a scaling configuration in place.

A “serverless” solution does not literally run out of thin air with no physical server at all. Behind the scenes, CPUs, RAM, network interface cards, and other physical server hardware still process your data; the provider simply manages that hardware and hides it from you.

From the development perspective, it’s as if there are no servers at all—developers write the code, deploy it to production, and the cloud provider handles the rest.

Hence, the correct answer is: Server management

Server is incorrect because it is impossible to run your applications out of thin air without any physical server at all. The serverless platform simply abstracts away the routine infrastructure work and server maintenance associated with application development.

Scaling is incorrect because serverless computing often includes scaling on its workloads. A serverless solution can quickly increase its computing capacity in a matter of milliseconds and can process hundreds or even thousands of requests concurrently without any manual intervention.

Container orchestration is incorrect. This process simply revolves around the automation of the operational effort required to run containerized workloads and services. In fact, you can actually run serverless containers in the cloud using a container orchestration tool that you prefer. 

References:
https://cloud.google.com/discover/what-is-serverless-computing
https://aws.amazon.com/serverless/

Question 9

Which of the following is NOT a valid phase of a Pod in Kubernetes?

  1. Pending
  2. Running
  3. Terminating
  4. Failed

Correct Answer: 3

Pods follow a defined lifecycle, starting in the Pending phase, moving through Running if at least one of its primary containers starts OK, and then through either the Succeeded or Failed phases depending on whether any container in the Pod terminated in failure.

Whilst a Pod is running, the kubelet is able to restart containers to handle some kind of faults. Within a Pod, Kubernetes tracks different container states and determines what action to take to make the Pod healthy again.

In the Kubernetes API, Pods have both a specification and an actual status. The status for a Pod object consists of a set of Pod conditions. You can also inject custom readiness information into the condition data for a Pod if that is useful to your application.

Pods are only scheduled once in their lifetime. Once a Pod is scheduled (assigned) to a Node, the Pod runs on that Node until it stops or is terminated.

Like individual application containers, Pods are considered to be relatively ephemeral (rather than durable) entities. Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion. If a Node dies, the Pods scheduled to that node are scheduled for deletion after a timeout period.

A Pod’s status field is a PodStatus object, which has a phase field. The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The phase is not intended to be a comprehensive rollup of observations of the container or Pod state, nor is it intended to be a comprehensive state machine.

Here are the possible values for phase:

– Pending – The Pod has been accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes the time a Pod spends waiting to be scheduled as well as the time spent downloading container images over the network.

– Running – The Pod has been bound to a node, and all of the containers have been created. At least one container is still running or is in the process of starting or restarting.

– Succeeded – All containers in the Pod have terminated in success and will not be restarted.

– Failed – All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.

– Unknown – For some reason, the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the Pod should be running.

Take note that when a Pod is being deleted, it is shown as Terminating by some kubectl commands. This Terminating status is not one of the Pod phases. A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the --force flag to terminate a Pod by force.
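
You can read a Pod’s phase straight from its status; the Pod name below is hypothetical:

# Print the high-level phase: Pending, Running, Succeeded, Failed, or Unknown
kubectl get pod my-pod -o jsonpath='{.status.phase}'

# Skip the 30-second grace period and delete a Pod stuck in Terminating
kubectl delete pod my-pod --force --grace-period=0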

Hence, the correct answer is: Terminating, since it is not a valid phase of a Pod in Kubernetes.

The following options below are all valid phases of a Pod in Kubernetes: 

- Pending

- Running

- Failed

References:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

Check out this Kubernetes Services Cheat Sheet:
https://tutorialsdojo.com/kubernetes-services/

Question 10

Which of the following is not a service type in Kubernetes?

  1. ClusterIP
  2. NodePort
  3. Ingress
  4. ExternalName

Correct Answer: 3

The Service API of Kubernetes is an abstraction to help you expose groups of Pods over a network. Each Service object defines a logical set of endpoints (usually, these endpoints are Pods) along with a policy about how to make those pods accessible.

For example, consider a stateless image-processing backend that is running with 3 replicas. Those replicas are fungible: frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves.

The Service abstraction enables this decoupling. The set of Pods targeted by a Service is usually determined by a selector that you define. For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, one that’s accessible from outside of your cluster.

Kubernetes Service types allow you to specify what kind of Service you want.

The available type values and their behaviors are listed below, followed by a sample Service manifest:

ClusterIP – Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don’t explicitly specify a type for a Service. You can expose the Service to the public internet using an Ingress or a Gateway.

NodePort – Exposes the Service on each Node’s IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.

LoadBalancer – Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.

ExternalName – Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example). The mapping configures your cluster’s DNS server to return a CNAME record with that external hostname value. No proxying of any kind is set up.
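
Here is the sample manifest, sketching a NodePort Service; the names, labels, and ports are illustrative (omit the type field and you get the default, ClusterIP):

apiVersion: v1
kind: Service
metadata:
  name: web-svc            # hypothetical Service
spec:
  type: NodePort
  selector:
    app: web               # targets Pods labeled app=web
  ports:
  - port: 80               # the Service's cluster-internal port
    targetPort: 8080       # the container port traffic is forwarded to
    nodePort: 30080        # static port opened on every node (30000-32767)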

If your workload speaks HTTP, you might choose to use an Ingress to control how web traffic reaches that workload. Ingress is not a Service type, but it acts as the entry point for your cluster. An Ingress lets you consolidate your routing rules into a single resource, so that you can expose multiple components of your workload, running separately in your cluster, behind a single listener.

The Gateway API for Kubernetes provides extra capabilities beyond Ingress and Service. You can add Gateway to your cluster – it is a family of extension APIs, implemented using CustomResourceDefinitions – and then use these to configure access to network services that are running in your cluster.

Ingress is not a Service Type in Kubernetes. This is basically an API object that manages external access to the services in your cluster and is primarily used to control how web traffic reaches your workload. 

ClusterIP, NodePort, and ExternalName are all valid Kubernetes Service Types.

References:
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/

Check out this Kubernetes Services Cheat Sheet:
https://tutorialsdojo.com/kubernetes-services/

For more practice questions like these and to further prepare you for the actual Kubernetes and Cloud Native Associate (KCNA) exam, we recommend that you take our top-notch Kubernetes and Cloud Native Associate (KCNA) Practice Exams, which simulate the question types in the KCNA exam.
