How to install Kubernetes: The Definitive Guide



There are so many ways to install a Kubernetes cluster that what should be a simple task can quickly become a daunting one, right? So let’s put some order into the chaos.

There are several different ways to install Kubernetes; the easiest is to let Docker help. Docker Desktop, for example, already comes with a built-in, one-node k8s cluster for you to test or start deploying your applications. Solutions like Kind (Kubernetes in Docker) or Minikube also do the heavy lifting for you with automated processes. But if you really want to learn k8s, the best way is to install it from scratch, using physical or virtual machines. If this is what you want, this guide is for you.

Follow me on this journey, step by step, on how to install Kubernetes manually without any script or package help. This guide is 100% practical, with all commands explained and screenshots of the results.
If you need to understand Kubernetes a bit better on the theory side before jumping into practice, start with this article: Why is Kubernetes so difficult? And how to learn it fast

Prerequisites

What hardware is required by Kubernetes?

It depends on the installation you want to do; distributions like K3s, for example, are way lighter than a kubeadm installation. The general consensus is to use at least 2 GB of RAM and 2 cores (or vCPUs) for the master node (control plane) and at least 1 GB and 1 core for the other nodes.

Do we need Docker to install Kubernetes?

No, you definitely do not need Docker to use Kubernetes; you need a container runtime, and there are several good options to choose from. That said, a Docker installation will help you in this process, especially for building your own app images to deploy into Kubernetes. So, although it’s not a necessary tool to have on your client machine, it’s well worth having. P.S.: You will not install Docker on any of the cluster machines we are building here.

What do I need to follow this tutorial?

Two machines with Ubuntu; they can be virtual or physical. Although you technically can install full Kubernetes on Raspberry Pi computers, I do not advise it; use K3s instead, which is significantly lighter and better suited to the Raspberry Pi’s capabilities.
If you want to know how to install Kubernetes in a Raspberry Pi cluster, please check this article How to install Kubernetes on Raspberry Pi: Complete Guide.

Any machine, even an old laptop or mini PC, can do the job.
You can also create your machines with desktop virtualization software like VirtualBox, or even with Hyper-V directly on Windows.

In my case, I’m using my Proxmox virtualization server to create the machines, but all the commands that I will show you here can be executed on any computer; it does not depend on Proxmox or any virtualization. BTW: The thumbnail of this article is my actual Proxmox server 😉

My Proxmox Virtual Machines

Install container runtime

The first thing to do to be able to run Kubernetes is to install a container runtime. Kubernetes, by itself, cannot create or run containers; that responsibility is delegated to a compatible runtime.
In our case, we will install containerd. Containerd is like a lightweight Docker, with only the essential container-management features.

sudo apt install containerd
Install Container Runtime

We need to do it on all the nodes.

Install Container Runtime Nodes

Check the status of containerd

Now let’s verify if containerd is running. It’s always good to check the status step by step for easy debugging and fixing.

systemctl status containerd
Container Status Check

Configure Containerd

Now we need to configure our runtime. To do that, we will create a new folder, ask containerd to generate a default configuration file in this folder, and then change a single setting in that file. Let’s do that…

Create the containerd folder.

sudo mkdir /etc/containerd

Generate the default configuration for containerd

containerd config default | sudo tee /etc/containerd/config.toml

Change the configuration (runc.options)

We need to change the parameter SystemdCgroup from false to true

sudo nano /etc/containerd/config.toml
Change Containerd config
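If you prefer not to hunt for the line in nano, the same change can be made non-interactively with sed. This is just a sketch, demonstrated on a sample fragment of the file; on a real node you would point it at /etc/containerd/config.toml (and restart containerd afterwards):

```shell
# On a real node (run against the config generated above):
#   sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
#   sudo systemctl restart containerd

# Demonstrated here on a sample fragment of the file:
cat > /tmp/config-demo.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/config-demo.toml
grep SystemdCgroup /tmp/config-demo.toml   # now shows: SystemdCgroup = true
```

Either way, the result is the same: the runc options section ends up with SystemdCgroup = true.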

Disable swap on Linux

Kubernetes doesn’t work well when swap is enabled on the node (by default, the kubelet will even refuse to start). So, if this is your case, you need to disable it.

To verify if the swap is enabled, you can run this command:

free -m
Check if the swap file is enabled
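If you like, /proc/swaps gives the same answer in a more script-friendly way: it lists one line per active swap area after a header, so a header-only output means swap is already off. A small sketch:

```shell
# Header-only output means no swap is active on this machine.
cat /proc/swaps
# 'swapon --show' (from util-linux) similarly prints nothing at all when swap is off.
```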

If the Swap line appears as shown in the image, you need to disable it. We can disable swap permanently by editing the fstab file.

sudo nano /etc/fstab

You only need to comment out the line responsible for the swap configuration, as shown in the image.
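If you want to script this step instead of editing by hand, a sed one-liner can comment out the swap entry. A hedged sketch, demonstrated on a sample fstab fragment; on a real node you would target /etc/fstab, and sudo swapoff -a turns swap off immediately, without waiting for a reboot:

```shell
# On a real node (run as root):
#   sudo swapoff -a                              # disable swap right now
#   sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # keep it disabled after reboot

# Demonstrated here on a sample fstab fragment:
cat > /tmp/fstab-demo <<'EOF'
UUID=abcd-1234 /    ext4 defaults 0 1
/swap.img      none swap sw       0 0
EOF
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab-demo
cat /tmp/fstab-demo   # the swap line is now commented out
```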

Enable Bridging for Kubernetes

For the Kubernetes installation, we must enable IP forwarding on the Linux system. Doing so makes it capable of forwarding packets that are meant for destinations other than itself. Linux uses the net.ipv4.ip_forward kernel parameter to toggle this setting on or off.

To change this variable, we can edit the file sysctl.conf

sudo nano /etc/sysctl.conf

And then uncomment the line

net.ipv4.ip_forward=1
Enable Bridging
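You can check the current value at any time through /proc, and sudo sysctl -p applies the edited file without a reboot. A quick sanity check:

```shell
cat /proc/sys/net/ipv4/ip_forward   # 1 = forwarding enabled, 0 = disabled
# After editing /etc/sysctl.conf, apply it immediately (on a real node):
#   sudo sysctl -p
```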

We also need to enable the br_netfilter. The br_netfilter module is required to allow for transparent masquerading and facilitate Virtual Extensible LAN (VxLAN) traffic for Kubernetes pods across the cluster nodes.

The technical explanation behind it is a bit complex, so for now, know that we need to enable it.

We can do that by creating a k8s.conf file and adding this line. Believe me, forgetting to do this step can cause several headaches in the future…

sudo nano /etc/modules-load.d/k8s.conf

Add this simple line and save:

br_netfilter

The pre-configuration of the Linux machines to install Kubernetes is done; now, reboot the machines, and we will proceed to the Kubernetes installation.

sudo reboot

Install Kubernetes

First, we need to add the Kubernetes repositories so we can download the correct packages to our system.
This installation is well described in the Kubernetes documentation, and if you have any trouble with the commands I’m showing here, you can check the official Kubernetes install documentation directly.

First, a quick update of the repositories and installation of the certificates package and curl app.

sudo apt-get update
sudo apt-get install -y ca-certificates curl

Then we will add the signing key for the Kubernetes apt repository. Note: the Google-hosted repository originally used here (packages.cloud.google.com / apt.kubernetes.io) has since been deprecated and frozen; new installations should use the community-owned pkgs.k8s.io repository instead. Replace v1.30 below with the minor version you want to install.

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the repository to the list, signing it with the key we just downloaded

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Another update, and we should be ready to install the Kubernetes packages

sudo apt-get update

Now let’s install the kubeadm, kubectl, and kubelet packages

sudo apt-get install -y kubeadm kubectl kubelet

It’s also a good idea to hold these packages at their installed version, so an automatic apt upgrade doesn’t unexpectedly upgrade part of your cluster:

sudo apt-mark hold kubeadm kubectl kubelet

Initialize the Cluster

With the packages installed, we can use kubeadm to initialize our cluster.

Important: So far, all the commands should be executed on all your nodes. The cluster initialization runs only on your master node (control-plane node).

The main points in this command, and the ones you will need to change in your configuration, are:

  1. The control-plane-endpoint: You need to set this parameter with the IP of your control plane (or master node). In my case, this IP is 10.0.0.200
  2. The name of the node: It’s the hostname you set for this machine; in my case, I set it to k8s-cp

Important: DO NOT change the pod-network-cidr; there’s no reason to do so, and if you change it, you will also need to reconfigure other parts of the system (this range is the default expected by the flannel network we will install later). So play it safe and keep the default value.

sudo kubeadm init --control-plane-endpoint=10.0.0.200 --node-name k8s-cp --pod-network-cidr=10.244.0.0/16

At the end of the process, you should see a message like this one… Copy the join command and save it for now; we will need this command and its codes later to join the nodes into the cluster.


Setting up the cluster configuration for kubectl

Kubectl is a command line interface (CLI) for running commands against Kubernetes clusters. It’s one of the primary tools used to interact with a Kubernetes cluster, allowing administrators and users to manage and control cluster and application behavior.

To start using this CLI here, we first need to copy the cluster’s configuration to the default location and then set the permissions on the configuration file so we can run kubectl as a normal user (not root).

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Nothing fancy here; we just created the kubectl (.kube) default configuration folder and copied our cluster configuration to the new folder.

We will use a similar command to configure our client machine later in this tutorial.

After that, you can run the kubectl commands and start to get information about our cluster. As you can see in the picture below, we have only the main node (control plane) running on our cluster right now.

Check Nodes

Let’s check the pods running:

kubectl get pods -A
Check Pods before apply overlay network

Look at that; we have two pods in Pending status: the coredns pods are not running.
That’s because we haven’t initialized the overlay network yet. To do that, we need to install another specialized package…

In our case, we will install the flannel network, one of the most popular packages for this function.

To make things easy, we will apply the yaml directly from flannel’s repository. If you are concerned about security and want to know what you are running before executing it, download the yaml file and read its content first.

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Install Flannel

Let’s recheck our pods.

kubectl get pods -A
everything running

Here we are, as you can see, now we have flannel running as pods, and our coredns pods have started.

Join the other nodes in the cluster

Now it’s time to connect our other nodes to the cluster. To do that, we run kubeadm with the join command we received when the cluster was initialized.

sudo kubeadm join 10.0.0.200:6443 --token t9nysp.ogzc14hc9qf9j2wd \
--discovery-token-ca-cert-hash \
sha256:3014288f8ec92c4fc9361fdc2edccc5355f85123123221ds55455632aw144552f

Note: kubeadm init also prints a variant of this command with a --control-plane flag; that variant is only for joining additional control-plane nodes. For worker nodes, use the command without it, as shown here.

Pay attention here: as you can see, we are passing a token as one of the parameters, right? Tokens are meant to be temporary credentials, so if you take too long to run this command, you will end up with the error in the picture below… But relax; the fix is easy…

Error when join token expired

Generating a new token for the join command

To generate a new join command, we need to ssh into the master node (control plane) and ask kubeadm for help again, running the following command:

kubeadm token create --print-join-command
Regenerate join command

Now we can try again in the nodes

sudo kubeadm join 10.0.0.200:6443 --token 1cyhke.8jl8vkaqetzoptyj --discovery-token-ca-cert-hash sha256:3014288f8ec92c4fc9361fdc2edccc35ab5998fg5548112a2f

Now it should work…

We can check it by looking at the cluster nodes… As we didn’t configure kubectl on the node machines, let’s return to the master (control plane) and execute the commands.

kubectl get nodes
All nodes Connected

Beautiful right? You should be proud of yourself; your cluster is up and running, ready for the next challenges….


Getting freedom from ssh terminals

Now that our cluster is working, we need to control everything remotely; you don’t want to connect directly to the cluster node via ssh all the time, right?

To do that, we only need to install kubectl on our client machine and copy the configuration from $HOME/.kube/config on the control-plane node.

I will use a fresh new version of Linux Mint to take the screenshots step by step, but it is not that different from Windows and Mac.

BTW, if you have Docker Desktop installed on your machine, you already have kubectl working; the only thing you need to do is replace the configuration (.kube/config) with the one from the cluster (control-plane node)

First, we need to download the latest version of kubectl for your OS. In our case, the command line will be the following:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
downloading kubectl

And then, install it into /usr/local/bin so it is on your PATH and can be run by any user without root (of course, this does not apply to Windows)

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

For kubectl to find and access a Kubernetes cluster, it needs a kubeconfig file, which is created automatically when you create a cluster.

The kubectl configuration is at ~/.kube/config by default.

So we can quickly remote copy this config from our control-plane node.

mkdir -p ~/.kube
scp fabio@10.0.0.200:~/.kube/config ~/.kube/config
remote copy configuration from node

You saw that before, a simple new folder and a copy (via ssh) from our master node to our local configuration folder.

Now, let’s test it…

kubectl get nodes
test remote configuration

How beautiful is that? Now we have full control of our Kubernetes cluster from the client computer… No need to ssh into the nodes anymore.


Deploying into Kubernetes Cluster

That’s great: our cluster is online and working, but besides the infrastructure, nothing is happening here. How about we try a simple deployment to see how it goes?

I will deploy the “hello world” of Kubernetes, a simple nginx container, and expose it to the external world via a Service of type NodePort.

We will need to apply only two yaml files, one for the deployment and another for the service; let’s do that.

First of all, we need to create a deployment file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo  
  labels:
    app: nginx-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx-demo-container
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80

The essential parts of this file are the kind (Deployment) and the template, where we specify the image we want to run, the port, and the container’s name.
You can specify more things here, like dependencies, resource limits, volumes, and so on, but this simple configuration is more than enough for our goal.

Also, please pay attention to the replicas value; here, we specify how many containers we want the system to start and maintain.

So, create a file with this content (for example, nginx-demo-depl.yaml) and apply it with the following command:

kubectl apply -f nginx-demo-depl.yaml
deploying nginx

To check if the pods are running, use the command:

kubectl get pods

Alternatively, you can use the -o wide option to also show which node each pod is running on:

kubectl get pods -o wide
pod running

Now let’s do another test and change the number of replicas. One way to do that (you could also edit the yaml and re-apply it) is the kubectl scale command:

kubectl scale deployment nginx-demo --replicas=2

changing replicas value

Recheck the pods, and you will now see two pods running on different nodes, selected by Kubernetes.

Pods after replicas value changed

Kubernetes Service Deploy

That’s great to see the pods running, but they are still inaccessible outside the cluster network. To make them accessible, we need to configure (at a minimum) a service to expose this container.

Let’s create another file, in my case called nginx-demo-service.yaml, and run it with the kubectl (you know the drill)

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-service
spec:
  type: NodePort
  selector:
    app: nginx-demo
  ports:
    - name: http
      port: 80
      nodePort: 30000
      targetPort: 80
      protocol: TCP

Important parts here: the kind of the file (Service), the type (NodePort), and the ports and protocol configuration.

We are exposing port 30000 on every node; requests to it are redirected to the pods’ port 80 on the internal network. (Here, port is the Service’s own cluster port, targetPort is the container’s port, and nodePort is the port opened on the nodes.)

Node Port

Now point your browser at any node’s IP on port 30000 (for example, http://10.0.0.200:30000), and the nginx default page should appear.

nginx showing in the browser

Conclusion

Although this article did not turn out so small, most of it is explanation; the practical, step-by-step commands are not that many to execute.

With practice, most of these commands will stick in your mind, and you will be able to spin up clusters and deploy to them in no time.

Of course, there’s much more to do with Kubernetes; this is one of the simplest examples I could give you. So continue your learning journey, and soon you will become an expert in Kubernetes!

Fabio Fernandes

Strategy-minded IT transformation leader with extensive experience in Information Technology across several industry sectors. Proven track record of leading digital transformation initiatives, aligning technology services with business goals. Microsoft, AWS, and Scrum Master Certified, with extensive global expertise.
