
Security before installation

It is best practice to run only Kubernetes on a dedicated server. Running side applications, or tuning applications that the cluster itself relies on, may impact Kubernetes and is neither recommended nor supported.

Kubernetes takes over the whole iptables firewall, so any changes you make to iptables can impact Kubernetes and can also be overwritten by Kubernetes.

At the end of this post, I will show what Kubernetes writes into iptables, which gives a sense of how sophisticated its rule set is.

You can now check the current firewall configuration on the ctrlplane machine:

sudo iptables -xvnL

It gives you the following output; the three main chains, INPUT, FORWARD and OUTPUT, are empty:

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
    pkts      bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
    pkts      bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
    pkts      bytes target     prot opt in     out     source               destination

This will change once we install Calico as the networking solution between Kubernetes pods; Calico and kube-proxy will populate these chains.

Calico is a networking and network security solution that enables Kubernetes workloads to communicate seamlessly and securely. Its main draw is that it lets us write manifests of kind GlobalNetworkPolicy and/or NetworkPolicy to control communication between pods, even though Kubernetes has long offered its own building blocks for exposing workloads with the service types ClusterIP, NodePort, and LoadBalancer.
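
As a small illustration, here is a sketch of a plain Kubernetes NetworkPolicy that Calico will enforce once it is installed later in this post; the labels app: demo-app and app: frontend are hypothetical and only there to show the shape of the manifest.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-demo-app
  namespace: default
spec:
  # select the pods this policy protects (hypothetical label)
  podSelector:
    matchLabels:
      app: demo-app
  policyTypes:
    - Ingress
  ingress:
    # only pods labelled app: frontend may reach them, on TCP 80
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80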

Disable Swap and Load Kernel Modules

It is highly recommended to disable swap on your Ubuntu instances so that the Kubernetes cluster works smoothly. Run the commands below on each instance to disable swap.

sudo swapoff -a
sudo sed -i 's/\/swap/# \/swap/1' /etc/fstab
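
You can verify that swap is really off (an optional sanity check):

swapon --show    # prints nothing when no swap is active
free -h          # the Swap line should read 0B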

Now, load the following kernel modules using the modprobe command.

sudo modprobe overlay
sudo modprobe br_netfilter

To ensure these modules are loaded on every boot, create a file with the following content.

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
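
You can confirm both modules are loaded (optional check):

lsmod | grep -E 'overlay|br_netfilter'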

Next, set the required kernel parameters, such as IP forwarding. Create a file and load the parameters with the sysctl command:

sudo tee /etc/sysctl.d/kubernetes.conf <<EOT
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOT

To load the above kernel parameters, run:

sudo sysctl --system
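
You can confirm the values took effect (optional check):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward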

Install and Configure Containerd

Containerd provides the container runtime for Kubernetes, so install containerd on all three instances.

First, install the containerd dependencies:

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

Next, add the containerd repository using the following commands.

sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/containerd.gpg
sudo add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Now, install containerd using the following apt command.

sudo apt update && sudo apt install containerd.io -y

Next, configure containerd to use the systemd cgroup driver (SystemdCgroup). Run the following commands.

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

Restart the containerd service so that the above changes take effect.

sudo systemctl restart containerd
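
To confirm the cgroup driver change stuck and that containerd is healthy (optional check):

grep SystemdCgroup /etc/containerd/config.toml    # should print: SystemdCgroup = true
sudo systemctl is-active containerd               # should print: active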

Add Kubernetes Package Repository

Kubernetes packages are not available in the default Ubuntu 24.04 repositories, so first add the Kubernetes package repository. Run these steps on each instance.

Note: at the time of writing this post, the latest version of Kubernetes was 1.32. You can change this version according to your requirements.

Download the public signing key for the Kubernetes package repository using curl:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/k8s.gpg

Next, add the Kubernetes repository by running the following command:

echo 'deb [signed-by=/etc/apt/keyrings/k8s.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/k8s.list

Install Kubernetes Components (Kubeadm, kubelet & kubectl)

To install Kubernetes components like Kubeadm, kubelet, and kubectl, run the following apt commands on all the instances.

sudo apt update
sudo apt install kubelet kubeadm kubectl -y
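
Optionally, hold the packages so a routine apt upgrade does not move the Kubernetes components underneath a running cluster; this is common practice rather than something specific to this setup:

sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version    # quick sanity check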

Initialize Kubernetes Cluster

With all the prerequisites met, I am now ready to initialize Kubernetes on Ubuntu 24.04.

Run the following kubeadm command on the master node only to initialize the Kubernetes cluster.
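
Note that the name ctrlplane used as the control-plane endpoint must resolve from every machine. If you have no DNS entry for it, a minimal sketch of /etc/hosts entries, using the illustrative addresses from this post (adjust to whatever addresses your nodes actually use to reach each other):

# /etc/hosts on every node (illustrative addresses)
192.168.0.31   ctrlplane
192.168.0.32   node1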

sudo kubeadm init --control-plane-endpoint=ctrlplane

This command pulls the required images for your Kubernetes cluster. Once it completes successfully, you will get output similar to the following:

[init] Using Kubernetes version: v1.32.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0210 10:47:02.654514    3690 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ctrlplane] and IPs [10.96.0.1 10.0.2.23]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ctrlplane] and IPs [10.0.2.23 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ctrlplane] and IPs [10.0.2.23 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 515.039405ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 7.006235073s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ctrlplane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ctrlplane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: sdhe3d.glgn3kfm7ooqd2pg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join ctrlplane:6443 --token sdhe3d.glgn3kfm7ooqd2pg \
    --discovery-token-ca-cert-hash sha256:5e87c3f261bf5aa9ae134de39030e49a0f6f933cec26fcc9cbf635f05aa3effe \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join ctrlplane:6443 --token sdhe3d.glgn3kfm7ooqd2pg \
    --discovery-token-ca-cert-hash sha256:5e87c3f261bf5aa9ae134de39030e49a0f6f933cec26fcc9cbf635f05aa3effe

The output above includes the commands for starting to interact with the Kubernetes cluster, as well as the command for joining worker nodes to it.

On the master node, run the following commands.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, copy the worker join command from the output above and run it on both worker nodes. In my case, the command is:

sudo kubeadm join ctrlplane:6443 --token sdhe3d.glgn3kfm7ooqd2pg \
        --discovery-token-ca-cert-hash sha256:5e87c3f261bf5aa9ae134de39030e49a0f6f933cec26fcc9cbf635f05aa3effe

Run this join command on every worker node you want to add; adding worker nodes is optional.
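
If you come back later and the bootstrap token has expired, you can print a fresh join command on the control-plane node; this is standard kubeadm behaviour, not specific to this setup:

sudo kubeadm token create --print-join-command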

Now head back to the master node and run the kubectl get nodes command to verify the status of the nodes.

kubectl get nodes

The output confirms that the worker nodes have joined the cluster, but their status is NotReady. To bring them to Ready, I need to install a network add-on such as Calico on this cluster.

Install Helm

Helm is a CLI that acts as the package manager for Kubernetes; it takes charts and gets them ready to deploy:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
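
A quick check that Helm is installed and on the PATH:

helm version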

Install Calico Network Add-on Plugin

Add the Calico Helm chart repository:

helm repo add projectcalico https://docs.tigera.io/calico/charts

Install the Tigera Calico operator and custom resource definitions using the Helm chart:

helm install calico projectcalico/tigera-operator --version v3.29.2 --namespace tigera-operator --create-namespace

Confirm that all of the Calico pods are running with the following command (the operator installs them into the calico-system namespace):

watch kubectl get pods -n calico-system

Wait until each pod has the STATUS of Running.
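
Once the Calico pods are running, the nodes should move from NotReady to Ready:

kubectl get nodes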

You can now see that iptables has been configured by Calico and kube-proxy:

sudo iptables -xvnL

The three main iptables chains INPUT, OUTPUT, and FORWARD have now been populated.

Install Ingress-Nginx

I recommend you read up on how Ingress works in the Kubernetes documentation; the easiest way to install ingress-nginx is with Helm:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
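
You can confirm the ingress controller came up before moving on:

kubectl get pods -n ingress-nginx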

Install Load Balancer

As explained in the ingress-nginx documentation under bare-metal considerations, I have to install MetalLB:

helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system --create-namespace

I am going to create a dedicated manifest to configure MetalLB:

vi ~/metallb-charts.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.31-192.168.0.32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default

I now need the IPv4 addresses of my ctrlplane and node1 machines.

ip addr show
# or
ip a

I verified that 192.168.0.31 and 192.168.0.32 are, respectively, the inet addresses on the enp0s8 interface of each machine; remember, these are the bridged adapters we set up in VirtualBox.

kubectl apply -f ~/metallb-charts.yaml

I can now see that the ingress-nginx service has an EXTERNAL-IP:

kubectl get services --all-namespaces
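
If you only want to look at the ingress controller service, the chart with this release name should have created a service called ingress-nginx-controller (adjust if your names differ):

kubectl get svc -n ingress-nginx ingress-nginx-controller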

Test Kubernetes Installation

I am going to create a simple app that serves a welcome HTML page.

helm create demo-app

This creates a demo-app folder; I am going to make a few changes to one particular file in it.

vi ~/demo-app/values.yaml
# Default values for demo-app.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# This will set the replicaset count more information can be found here: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
replicaCount: 1

# This sets the container image more information can be found here: https://kubernetes.io/docs/concepts/containers/images/
image:
  repository: nginx
  # This sets the pull policy for images.
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

# This is for the secrets for pulling an image from a private repository more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# This is to override the chart name.
nameOverride: ""
fullnameOverride: ""

# This section builds out the service account more information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Automatically mount a ServiceAccount's API credentials?
  automount: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

# This is for setting Kubernetes Annotations to a Pod.
# For more information check out: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# This is for setting Kubernetes Labels to a Pod.
# For more information check out: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

# This is for setting up a service more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/
service:
  # This sets the service type more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  type: ClusterIP
  # This sets the ports more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#field-spec-ports
  port: 80

# This block is for setting up the ingress for more information can be found here: https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

# This is to setup the liveness and readiness probes more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
livenessProbe:
  httpGet:
    path: /
    port: http
readinessProbe:
  httpGet:
    path: /
    port: http

# This section is for setting up autoscaling more information can be found here: https://kubernetes.io/docs/concepts/workloads/autoscaling/
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

# Additional volumes on the output Deployment definition.
volumes: []
# - name: foo
#   secret:
#     secretName: mysecret
#     optional: false

# Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - name: foo
#   mountPath: "/etc/foo"
#   readOnly: true

nodeSelector: {}

tolerations: []

affinity: {}

In vi, change replicaCount: 1 to replicaCount: 5, and also enable an ingress for http://demo-app.local:

ingress:
  enabled: true
  className: ""
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: demo-app.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

Save and quit vi. Then I am going to deploy this demo-app into my Kubernetes cluster with the Helm CLI, which renders the chart templates into Kubernetes manifests and applies them to the cluster, the same objects you would otherwise manage with the classic kubectl CLI.

helm upgrade --install demo-app ~/demo-app

upgrade --install means: upgrade the release if it already exists, otherwise install it.
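
As an aside, simple values can also be overridden on the command line instead of editing values.yaml; a sketch using the release name and chart path from this walkthrough (nested structures such as the ingress block are easier to keep in values.yaml):

helm upgrade --install demo-app ~/demo-app --set replicaCount=5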

Check that the pods have been created or updated correctly:

kubectl get pods --all-namespaces

Check that the services have been created or updated correctly:

kubectl get services --all-namespaces

Finally, check everything that is running in your cluster:

kubectl get all -A

Usually I use the watch CLI on Linux, which reruns the command every 2 seconds by default:

watch kubectl get all -A

On my host I can do this to test:

curl --header 'Host: demo-app.local' http://192.168.0.31

Alternatively, I can add an entry to my /etc/hosts:

echo -e "\n192.168.0.31   demo-app.local" | sudo tee -a /etc/hosts

Be careful: on a recent MacBook Pro like the one I am using, macOS security features interfere with hand-made /etc/hosts entries, and I had to grant Google Chrome the necessary permission before it would open http://demo-app.local.

Finally, security

I can now check what the machine has been doing with its firewall:

iptables -xvnL
Chain INPUT (policy ACCEPT 27557 packets, 9485722 bytes)
    pkts      bytes target     prot opt in     out     source               destination
17127  4449758 cali-INPUT  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* cali:Cz_u1IQiXIMmKD4c */
    203    12278 KUBE-PROXY-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
21353  6568139 KUBE-NODEPORTS  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check service ports */
    203    12278 KUBE-EXTERNAL-SERVICES  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */
27557  9485722 KUBE-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
    pkts      bytes target     prot opt in     out     source               destination
    151     7396 cali-FORWARD  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* cali:wUHhoiAYhphO9Mso */
    152     7120 KUBE-PROXY-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
    152     7120 KUBE-FORWARD  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    152     7120 KUBE-SERVICES  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
    152     7120 KUBE-EXTERNAL-SERVICES  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */
    148     6820 ACCEPT     0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* cali:S93hcgKJrXEqnTfs */ /* Policy explicitly accepted packet. */ mark match 0x10000/0x10000
    0        0 MARK       0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* cali:mp77cMpurHhyjLrM */ MARK or 0x10000

Chain OUTPUT (policy ACCEPT 27950 packets, 9673023 bytes)
    pkts      bytes target     prot opt in     out     source               destination
17238  4597613 cali-OUTPUT  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* cali:tVnHkvAo15HuiPy0 */
    227    13794 KUBE-PROXY-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
    227    13794 KUBE-SERVICES  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
27950  9673023 KUBE-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0

You can see that the three main iptables chains INPUT, OUTPUT, and FORWARD all keep the ACCEPT policy. You should not change these rules yourself, because they are managed by kube-proxy and Calico, which is perfectly fine as long as nothing other than Kubernetes runs on the machine. Any manual additions to iptables would in any case be overwritten, for example after a machine reboot.

However, I can at least make sure I only log in to the server over SSH with keys and never with a password. This is the least I can do.

Edit /etc/ssh/sshd_config and find PasswordAuthentication. Make sure it’s uncommented and set to no.

sudo vi /etc/ssh/sshd_config
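
The relevant lines should end up looking like this; PubkeyAuthentication is usually already the default:

# /etc/ssh/sshd_config (relevant lines)
PubkeyAuthentication yes
PasswordAuthentication no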

Save and exit the text editor. If you changed anything, restart sshd to apply it:

sudo systemctl restart ssh.service

SSH key-based authentication offers a more secure approach by eliminating the need to transmit passwords across the network: the private key stays on the client and is never sent over the wire, which mitigates the risk of password interception.

I had already logged in to this machine over SSH using PasswordAuthentication, but from now on I will no longer allow it.

Summary

We installed and configured a Kubernetes cluster with Calico for networking and deployed a demo app serving the nginx welcome page through ingress-nginx. Security looks the same whether you run a Kubernetes cluster on a self-installed machine or at a cloud provider: everything is managed through the kubectl CLI and the charts you feed it. The kubectl command line will definitely become your best friend.