K8s Cluster
Create a cluster on VMs using libvirt. See the article on the initial libvirt/QEMU setup.
Spawning VMs
This article starts from a relatively clean Debian VM that will eventually serve as one of 3 nodes. So start by cloning it 2 times into 3 total VMs.
Create the clones
virt-clone --original k8s-master-01 --name k8s-worker-01 --file /mnt/ssd1/vms/kvm/k8s-worker-01.qcow2
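Only one clone command is shown above, but two workers are needed. A small loop can generate the command for each worker; this sketch just prints the commands (and saves them to /tmp for review) so you can sanity-check names and paths before running anything:

```shell
# Print (don't run) the virt-clone command for each worker VM.
# Names and the qcow2 path follow the example above; adjust to your storage.
for i in 01 02; do
  echo "virt-clone --original k8s-master-01 --name k8s-worker-$i --file /mnt/ssd1/vms/kvm/k8s-worker-$i.qcow2"
done | tee /tmp/clone-commands.txt
```

When the printed commands look right, run them (or pipe the file to sh).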
Start the clones and make them appear unique (the MAC address is already handled by virt-clone)
- Change the Hostname
sudo hostnamectl set-hostname k8s-worker-01
sudo sed -i "s/k8s-master-01/k8s-worker-01/g" /etc/hosts
reboot
- Reset the Machine ID (Crucial for K8s)
sudo rm -f /etc/machine-id /var/lib/dbus/machine-id
sudo systemd-machine-id-setup
sudo dbus-uuidgen --ensure
- Disable swap on all nodes
sudo swapoff -a
sudo sed -i '/swap/s/^/#/' /etc/fstab
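The sed one-liner comments out every fstab line that mentions swap. A harmless dry run on a throwaway copy shows the effect (the real command above edits /etc/fstab in place):

```shell
# Build a sample fstab and show what the swap-commenting sed does to it
cat > /tmp/fstab.sample <<'EOF'
UUID=1111-aaaa /    ext4 errors=remount-ro 0 1
UUID=2222-bbbb none swap sw                0 0
EOF
sed '/swap/s/^/#/' /tmp/fstab.sample   # swap line is printed with a leading '#'
```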
- Enable nftables
nftables is the backend Kubernetes is moving toward: kube-proxy's nftables mode went GA in Kubernetes 1.33. Enable it so Kubernetes can add its rules there, or you may end up with Kubernetes driving iptables instead.
Leave the default /etc/nftables.conf as-is if you want no firewall restrictions to begin with.
sudo systemctl enable --now nftables
Example of /etc/nftables.conf with some rules
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif lo accept
        ip protocol icmp accept
        tcp dport 22 accept # SSH
        # K8s Control Plane (Run this on Master)
        tcp dport 6443 accept
        tcp dport { 2379, 2380 } accept
        # K8s Node Communication (Run on all nodes)
        tcp dport 10250 accept
        # CNI Overlay (Allow nodes to "tunnel" to each other)
        # If using Calico
        tcp dport 179 accept
        udp dport 4789 accept
        tcp dport 443 accept
        ip protocol 4 accept
        # NodePorts
        tcp dport 30000-32767 accept
    }
    chain forward {
        type filter hook forward priority 0; policy accept;
    }
}
- Make sure your nodes have STATIC IPs, or else you will probably have to tear the whole thing down later.
Reboot
Installing k8s
sudo apt install -y apt-transport-https ca-certificates curl gpg
containerd.io
See the Docker homepage for up-to-date instructions
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
sudo apt update
sudo apt install containerd.io
Configure containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
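kubeadm-built clusters run the kubelet with the systemd cgroup driver by default, so containerd must agree; the sed above flips the one flag that matters. A dry run on a sample of the generated config illustrates it:

```shell
# Flip SystemdCgroup on a sample snippet of containerd's default config
cat > /tmp/containerd.sample <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
sed 's/SystemdCgroup = false/SystemdCgroup = true/g' /tmp/containerd.sample
```

After restarting containerd for real, grep SystemdCgroup /etc/containerd/config.toml should show true.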
K8s
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
modprobe overlay
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Enable network forwarding
sudo tee /etc/sysctl.d/kubernetes.conf <<EOT
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOT
sudo sysctl --system
Verify:
sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
All three should be 1.
Initialize
(only master node)
sudo kubeadm init --apiserver-advertise-address=192.168.1.190 --pod-network-cidr=10.244.0.0/16
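--pod-network-cidr reserves the address space pods will use (it must not overlap your LAN), and the controller manager hands each node a /24 slice of it by default. Quick arithmetic on the /16 above:

```shell
# A /16 pod network split into per-node /24 blocks:
echo "max node blocks:    $(( 1 << (24 - 16) ))"   # 256
echo "addresses per node: $(( 1 << (32 - 24) ))"   # 256
```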
To reset
sudo kubeadm reset -f
sudo rm -rf /etc/cni/net.d
Expected output
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.190:6443 --token x \
--discovery-token-ca-cert-hash sha256:x
Save the join command from this output; you will use it to join worker nodes later. The token it contains expires after 24 hours, but you can print a fresh join command at any time on the master:
sudo kubeadm token create --print-join-command
Verify first time
kubectl get nodes
Expected response before set up network and other nodes:
NAME STATUS ROLES AGE VERSION
k8s-master-01 NotReady control-plane 3m33s v1.35.2
Status NotReady is fine until the network add-on is installed. Now is an OK time to join a worker node if it's ready; you can also add one any time later.
kubeadm join 192.168.1.190:6443 --token x \
--discovery-token-ca-cert-hash sha256:x
Network
A Container Network Interface (CNI) plugin must be added before the cluster can actually do anything useful besides existing.
(only master node)
Calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.30.0/manifests/calico.yaml
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=enp1s0
The second command pins Calico's IP autodetection to the VM's primary interface (enp1s0 here; adjust to your interface name).
Verify 2: Test app
kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl expose deployment nginx-test --port=80 --target-port=80 --type=NodePort
kubectl get svc nginx-test # get port and test in browser
Load balancer
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
Create metallb-config.yaml:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.50-192.168.1.60
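The IPAddressPool only defines which addresses MetalLB may hand out; in layer-2 mode it will not announce them until an L2Advertisement references the pool. A minimal companion resource (the name first-pool-l2 is just an example), appended to the same metallb-config.yaml after a "---" separator, then applied with kubectl apply -f metallb-config.yaml:

```yaml
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
```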