A Practical Introduction to Kubernetes

Ever get confused by all the “Kubernetes” noise? You hear about orchestration, containers, and clusters, and you think, “Oh boy, that’s complicated.” Maybe you’re an app developer wondering how your app runs in this “K8s” thing, or perhaps you work in Ops and need a quick refresher.
Either way, you’ve come to the right place!
Imagine manually taking care of hundreds, possibly even thousands, of application containers. Updating them? Scaling them up and down depending upon traffic? Replacing them when they fail? Nightmare, right?
That is exactly what Kubernetes is about.
Simply put, Kubernetes is an open-source container orchestrator. It automates the deployment, scaling, and management of containerized applications so your life can be easier (or not?).

Kubernetes Architecture

A Kubernetes Cluster is a group of machines (Nodes) that run containerized apps. These can be physical or virtual machines, such as AWS EC2 instances or on-premises servers.
There are two types of nodes:

Control Plane Nodes: The “brain” of the cluster that makes decisions, manages state, and runs components like:

  • kube-apiserver: handles all communication
  • etcd: stores cluster data
  • kube-scheduler: decides where to run Pods
  • kube-controller-manager: keeps cluster state as desired

Worker Nodes: These are the “muscles” of the cluster. This is where your actual application containers run. Each Worker Node runs a few essential components:

  • kubelet: An agent that talks to the Control Plane’s API server, receives instructions, and ensures containers are running and healthy
  • kube-proxy: Handles networking. It makes sure network communication to your Pods works correctly.
  • Container Runtime: The software responsible for actually running containers (e.g., containerd or CRI-O)

This separation allows Kubernetes to be resilient. If a Worker Node fails, the Control Plane can reschedule the applications onto other healthy nodes.

Core Kubernetes Objects

1. Pods

  • Pods are the smallest and most basic deployable unit in Kubernetes.
  • Think of it as: A wrapper around one or more containers. A Pod can run multiple containers, but the most common pattern you’ll find is one container per Pod.
  • Key characteristics:
    • Containers in the Pod share network (they can communicate via localhost) and they share storage volumes.
    • Each Pod gets its own unique IP address inside the cluster.
    • Pods are ephemeral! They are not long-lived. If a Node fails, its Pods die with it; if you scale down, some Pods are shut down.
  • Why Pods? They provide a higher level of abstraction than individual containers, simplifying management and resource sharing for tightly coupled processes.
  • Important: You typically don’t create Pods directly for your main applications, because they’re ephemeral and don’t handle failures or scaling on their own. That’s where Deployments come in. (A minimal Pod manifest is sketched below.)
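To make this concrete, here is a minimal Pod manifest. Treat it as a sketch: the name, labels, and image are placeholders, not anything this article prescribes.

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # hypothetical name
  labels:
    app: hello             # labels are how Services and ReplicaSets find Pods later
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image works here
      ports:
        - containerPort: 80

You could create it with kubectl apply -f pod.yaml, but as noted above, for real applications you would normally let a Deployment create Pods for you.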

2. ReplicaSets

  • A ReplicaSet ensures that a specified number of Pods is running at any given time. If a Pod dies, the ReplicaSet controller creates a new one to replace it.
  • Important: Like Pods, you usually don’t manage ReplicaSets directly. They are mainly used by Deployments.

3. Deployments (Your Go-To)

  • What they are: The standard way to run applications on Kubernetes.
  • What they do: They manage ReplicaSets, which in turn manage Pods. You tell the Deployment what container image to use and how many replicas you want, and it handles the rest.
  • Key benefits:
    • Declarative Updates: You define the desired state (e.g., “use image v2”), and the Deployment handles the update process safely.
    • Rolling Updates: Updates Pods incrementally with zero downtime by default. New Pods are created, and old ones are terminated gradually.
    • Rollbacks: Easily revert to a previous version if an update goes wrong.
    • Scaling: Easily scale the number of replicas up or down.
    • Self-healing: Ensures the desired number of Pods are always running, replacing failed ones automatically (via the ReplicaSet).
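Here is a sketch of a simple Deployment manifest; the names and image are again just placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3                  # desired number of Pods
  selector:
    matchLabels:
      app: hello               # must match the Pod template's labels
  template:                    # Pod template the Deployment (via a ReplicaSet) stamps out
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25    # changing this tag triggers a rolling update
          ports:
            - containerPort: 80

Editing replicas or the image and re-running kubectl apply -f deployment.yaml is all it takes to scale or roll out a new version.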

4. Namespaces

  • What they are: Consider them to be virtual folders or partitions inside your cluster.
  • Goal:
    • Organization: Group related resources
    • Prevent Name Collisions: A pod with the name my-app in the development namespace is not the same as one with the same name in the production namespace.
    • Access Control & Resource Quotas: Apply policies (like RBAC rules or resource limits) per namespace.
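Creating one is trivial; the name below is just an example:

apiVersion: v1
kind: Namespace
metadata:
  name: development

You then place objects in it either by setting metadata.namespace: development in their manifests, or by passing -n development to kubectl commands.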

Connecting Your Application

We have Pods that run our app, managed by Deployments. But remember, Pods are temporary, and their IP addresses change all the time. So, how do the other parts of our app find them? And what about users from outside? This is where Services & Ingress help us out.

1. Services

  • Problem: Pod IPs are not reliable. If your frontend Pods need to talk to backend Pods, they can’t just magically keep track of ever-changing Pod IPs.
  • Solution: A Service provides a stable IP address and DNS name for a set of related Pods.
  • How it works:
    • A Service uses Labels and Selectors. You label your Pods (e.g., app: backend), and the Service selects Pods with matching labels.
  • Common Service Types:
    • ClusterIP (Default): Exposes the Service on an internal IP within the cluster. Only reachable from inside the cluster. Perfect for backend services talking to each other.
    • NodePort: Exposes the Service on each Node’s IP (e.g., NodeIP:30080). Useful for development or temporary access.
    • LoadBalancer: Provisions an external load balancer (usually from a cloud provider like AWS) that points to the Service. The standard way to expose web applications to the internet in the cloud.
    • ExternalName: Maps the Service to an external DNS name (like my.database.example.com), acting as a CNAME alias within the cluster.
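For example, a ClusterIP Service for hypothetical backend Pods labelled app: backend could look like this:

apiVersion: v1
kind: Service
metadata:
  name: backend              # other Pods can reach it as backend:8080 via cluster DNS
spec:
  type: ClusterIP
  selector:
    app: backend             # traffic goes to Pods carrying this label
  ports:
    - port: 8080             # port the Service exposes
      targetPort: 8080       # port the container listens on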

2. Ingress

  • Problem: Using a LoadBalancer Service works, but what if you have many services to expose? Creating a separate load balancer for each one quickly becomes expensive and complex.
  • Solution: An Ingress acts as a smart router or reverse proxy for HTTP/HTTPS traffic coming into the cluster. It lets you define rules that route external traffic to different internal Services based on hostname or URL path.
  • Key benefits:
    • Expose multiple services under a single IP address (usually via one LoadBalancer).
    • Handle SSL/TLS termination.
    • Define path-based and host-based routing rules.
  • Important: An Ingress resource itself doesn’t do anything. You need an Ingress Controller (like Nginx Ingress) running in your cluster to actually implement the rules defined in your Ingress objects.
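As a sketch, here is an Ingress that routes by path. It assumes an NGINX Ingress Controller is installed and that Services named web and api already exist (the hostname and all names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # assumes the NGINX Ingress Controller
  rules:
    - host: example.com            # hypothetical hostname
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api          # hypothetical backend Service
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # hypothetical frontend Service
                port:
                  number: 80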

ConfigMaps & Secrets

Hardcoding API keys or database URLs directly into your container image is bad practice. Kubernetes provides two objects to manage configuration:

  • ConfigMaps: Used to store non-sensitive configuration data as key-value pairs. Think of database hostnames, API endpoints, feature flags, environment settings.
  • Secrets: Used for sensitive data like passwords and API tokens. Secret values are stored base64-encoded (which, by the way, is not encryption, just obfuscation!).

Both ConfigMaps and Secrets can be injected into your Pods as:

  • Environment Variables: Keys become environment variables available to the container.
  • Mounted Files: Keys become files within a specified directory inside the container.
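A minimal sketch, with made-up names and values, of a ConfigMap and a Pod that consumes it as environment variables:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # hypothetical name
data:
  DATABASE_HOST: db.internal     # example non-sensitive settings
  FEATURE_FLAG_X: "true"
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx:1.25          # placeholder image
      envFrom:
        - configMapRef:
            name: app-config     # every key becomes an environment variable

A Secret is declared and consumed the same way (kind: Secret, with values under data in base64 or under stringData in plain text), referenced via secretRef or a volume mount.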

Storing Data

Remember how Pods are not long-lived (ephemeral)? When a container restarts, any files written inside it are lost. When a Pod dies – same story. So what do we do with applications that need to store data persistently?

1. Volumes

  • What they are: A directory accessible to the containers within a Pod. The key is that a Volume’s lifecycle is tied to the Pod itself. If the Pod dies, the Volume goes with it.
  • Purpose:
    • Sharing data between containers in the same Pod.
    • Persisting data across container restarts within the same Pod.
  • Common Volume Types:
    • emptyDir: An empty directory created when the Pod is created. Useful for temporary scratch space or for sharing files between containers in a multi-container Pod. Data is lost when the Pod is deleted.
    • hostPath: Mounts a file or directory from the host Node directly into the Pod. Use with caution – it tightly couples your Pod to a specific Node and can pose security risks.
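For instance, here is a sketch of a two-container Pod sharing an emptyDir volume (names and commands are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-cache-pod
spec:
  volumes:
    - name: cache                # lives exactly as long as the Pod does
      emptyDir: {}
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /cache/log; sleep 5; done"]
      volumeMounts:
        - name: cache
          mountPath: /cache
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "touch /cache/log; tail -f /cache/log"]
      volumeMounts:
        - name: cache
          mountPath: /cache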

2. PersistentVolumes (PVs) & PersistentVolumeClaims (PVCs)

This is the standard way to handle persistent storage in k8s.

  • PersistentVolume (PV): A piece of storage in the cluster that has been provisioned. Think of it as a network-attached disk made available to the cluster. It’s a cluster resource, just like a Node.
  • PersistentVolumeClaim (PVC): A request for storage made by a user or application. It says, in effect, “I need 10GB of storage.” Pods consume storage by referencing a PVC.
  • The Binding Process: Kubernetes matches an incoming PVC request to a suitable, available PV based on criteria like storage size and access modes (e.g., ReadWriteOnce – mountable by one node, ReadWriteMany – mountable by many nodes).
  • Benefit: Decouples the Pod from the specific storage implementation. The Pod just asks for storage (PVC), and the cluster figures out how to provide it (PV). If the Pod dies, the PVC and the underlying PV (and its data) typically remain, allowing a new Pod to reconnect to the same data.
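Here is a sketch of a PVC and a Pod that mounts it; the size and names are examples, and dynamic provisioning via a default StorageClass is assumed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi              # "I need 10GB of storage"
---
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
    - name: app
      image: busybox:1.36        # placeholder image
      command: ["sh", "-c", "echo persisted >> /data/notes.txt; sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim    # binds the Pod to the claimed storage

If the Pod is deleted and recreated, it can reattach to the same claim and find its data still there.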

Getting Hands-On: Where to Go Next?

Reading is great, but the best way to learn Kubernetes (or anything) is just by doing it and doing it a bit more!

  • Try it Locally:
    • Minikube: Creates a single-node cluster on your laptop. Great for starting and learning.
    • Docker Desktop: Includes an easy-to-setup k8s cluster on Mac and Windows.
  • Use the Cloud: Major cloud providers (AWS EKS, Google GKE, Azure AKS) offer managed Kubernetes services, often with free tiers or credits to get started. Worth checking them out.

You’ll primarily interact with your cluster using the kubectl command-line tool. Here are a few essential commands to start with:

kubectl get <resource-type> (pods, deployments, services, nodes, ns) - List resources.
kubectl describe <resource-type> <resource-name> - Get detailed information about a specific resource.
kubectl apply -f <your-yaml-file.yaml> - Create or update resources from a YAML file.
kubectl delete <resource-type> <resource-name> or kubectl delete -f <your-yaml-file.yaml> - Remove resources.
kubectl logs <pod-name> - View the logs from a container in a Pod.
kubectl exec -it <pod-name> -- /bin/sh - Get an interactive shell inside a running container.

Don’t forget the Official Kubernetes Documentation – it’s incredibly comprehensive and well-written!

Conclusion

We’ve covered a lot of ground. We saw how Kubernetes takes complex operational tasks and breaks them into smaller, manageable parts.
We explored the basic architecture and essential building blocks like Pods, Deployments, Services, Ingress, ConfigMaps, Secrets, and Persistent Volumes.

K8s might seem daunting and overwhelming at first, but understanding these fundamentals is the best way to start learning it. It empowers you to build, explore, and discover exciting new things.

The real magic happens when you start doing it. Spin up a local cluster, deploy a simple application, try scaling it, expose it with a Service. Break things, fix them, and learn along the way. You’ll quickly appreciate the power and flexibility k8s offers.
