Recap

Recapping from last week's lecture, we discussed the basics of Kubernetes, a container orchestration platform used for managing containerized applications. A Kubernetes cluster consists of a master (control-plane) node and one or more worker nodes.

Figure 11.1 Kubernetes cluster

When interacting with Kubernetes, you can use the kubectl command-line tool.
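For example, assuming kubectl is installed and configured against a running cluster (as it will be in our GKE lab), a few common commands look like this; the pod name below is illustrative:

```shell
# List the nodes in the cluster
kubectl get nodes

# List pods in the current namespace
kubectl get pods

# Show detailed information about a specific pod (name is hypothetical)
kubectl describe pod my-pod
```

These commands are read-only, so they are safe to run while exploring a cluster.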

<aside> <img src="/icons/map-pin_gray.svg" alt="/icons/map-pin_gray.svg" width="40px" /> In our lab, we will be using Google Cloud as our Kubernetes environment. Google Cloud Platform (GCP) offers Google Kubernetes Engine (GKE) as its managed Kubernetes service.

</aside>

We also discussed the roles of the API server, Replication Controller, and Scheduler in the master node.

Introduction to Kubernetes Services

Kubernetes Services provide an abstraction layer that directs client requests to a related set of pods. So why exactly might we need this? As we covered last week, modern software architecture treats individual workloads as ephemeral: pods are short-lived and expected to be replaced at any time.

<aside> <img src="/icons/map-pin_gray.svg" alt="/icons/map-pin_gray.svg" width="40px" /> This abstraction layer is defined in Kubernetes YAML configuration files, which we'll cover in more detail as the final main topic of these notes and the course overall.

</aside>

In Kubernetes, pods frequently die or are replaced, for example during scaling, rolling updates, node failures, or evictions. Each replacement pod receives a new IP address, so clients cannot rely on connecting to pods directly.

This is where Kubernetes Services come into play. A Service provides a stable IP address and port for clients to connect to, regardless of how many pods back the Service or what their individual IP addresses are.
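As a minimal sketch of what such a Service definition looks like in a YAML configuration file (the names `my-service` and `app: my-app` are illustrative, not from the lecture), the Service selects pods by label and forwards traffic on a stable port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # stable name clients can use (hypothetical)
spec:
  selector:
    app: my-app             # routes to any pod carrying this label
  ports:
    - protocol: TCP
      port: 80              # stable port exposed by the Service
      targetPort: 8080      # port the pods actually listen on
```

Because the selector matches labels rather than specific pods, the Service keeps working even as individual pods die and are replaced. We will return to YAML configuration files in more detail later in these notes.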