Setting up K3S/K8S from scratch
Requirements
Control plane node:
- 2 cores (or more)
- 2GB of RAM (or more)
- 16GB of disk space (or more)
Worker node:
- 1 core (or more)
- 1GB of RAM (or more)
- 16GB of disk space (or more)
Node installation
K3S control plane node
You need at least 3 of these for high availability (an odd number of server nodes is recommended so embedded etcd keeps quorum); for a micro cluster of 2-3 nodes you can run just one if you wish. Be sure to store your token in a safe place, as you will need it to connect additional nodes in the future.
First node:
curl -sfL https://get.k3s.io | K3S_TOKEN="<Rand0mlyG3n3rat3dT0ken>" sh -s - server --cluster-init --disable servicelb --disable traefik
Other nodes:
curl -sfL https://get.k3s.io | K3S_TOKEN="<Rand0mlyG3n3rat3dT0ken>" sh -s - server --server https://<ipofthefirstnode>:6443 --disable servicelb --disable traefik
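The token can be any sufficiently random string; one possible way to generate one (assuming openssl is available) is:

```shell
# Generate a 32-character hex string to use as K3S_TOKEN
openssl rand -hex 16
```

Keep the output somewhere safe, e.g. in a password manager.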
K3S worker node
curl -sfL https://get.k3s.io | K3S_TOKEN="<Rand0mlyG3n3rat3dT0ken>" sh -s - agent --server https://<ipofthemasternode>:6443
Worker nodes are not strictly required if you don't want them or are running a small cluster of only 3 nodes, since k3s server nodes also run workloads by default.
You can check the state of the cluster by SSHing into any of the control plane nodes and running
kubectl get nodes
or you can install https://k9scli.io for a fancy terminal UI (highly recommended).
Accessing the cluster from your machine
The Kubernetes config file can be found on any of the control plane nodes at:
/etc/rancher/k3s/k3s.yaml
You can copy that file over to your PC to
~/.kube/config
and edit the server URL from 127.0.0.1 to the IP address of one of the control plane nodes.
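For example (using a scratch file here; 192.168.1.10 stands in for one of your control plane nodes):

```shell
# In practice, fetch the real file first, e.g.:
#   scp root@192.168.1.10:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# The edit itself is a one-line sed, demonstrated on a scratch copy:
cfg=$(mktemp)
printf 'server: https://127.0.0.1:6443\n' > "$cfg"
sed -i 's/127.0.0.1/192.168.1.10/' "$cfg"
cat "$cfg"   # server: https://192.168.1.10:6443
```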
K8s
For a "real" Kubernetes deployment just use Kubespray, as it can already set up everything I describe below.
Network configuration
Installing MetalLB
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to expose services via IP address.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.4/config/manifests/metallb-native.yaml
Or you can simply use helm
MetalLB IP pool
Create a new YAML file with the following content and be sure to customize your IP range:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
Save and apply the IPAddressPool config:
kubectl apply -f file.yaml
IP pool advertising
Create a file:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
Save and apply the L2Advertisement config:
kubectl apply -f file.yaml
You need the L2Advertisement so other systems in your LAN can see the IP addresses claimed by MetalLB.
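With the pool and advertisement in place, any Service of type LoadBalancer gets an address from the pool. A minimal sketch (the name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # illustrative name
spec:
  type: LoadBalancer    # MetalLB assigns an IP from the pool
  selector:
    app: my-app
  ports:
  - port: 80            # exposed on the assigned IP
    targetPort: 8080    # port the pods listen on
```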
Certificate manager (cert-manager) with Let's Encrypt certificate issuer
cert-manager allows us to issue and maintain SSL certificates in our cluster:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
Or you can simply use helm
By default it will issue fake, self-signed certificates, but if your cluster is reachable directly from the internet you can issue Let's Encrypt certs. For that we need to create a file:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <MY_EMAIL_ADDRESS>
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          class: nginx
Customize and then apply it:
kubectl apply -f file.yml
To issue valid certs you will need to add these annotations and TLS settings to the ingress configs for your services:
...
metadata:
  annotations:
    acme.cert-manager.io/http01-edit-in-place: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    ingress.kubernetes.io/ssl-redirect: "False"
    kubernetes.io/ingress.class: nginx
...
spec:
  ...
  tls:
  - hosts:
    - <SOME-DOMAIN>
    secretName: some-secret-tls
...
Ingress nginx
Ingress NGINX is a special nginx container setup that allows you to expose HTTP(S) apps from your Kubernetes cluster to the outside world via a MetalLB IP address. First clone the ingress repo from git:
git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress
Then apply these files:
kubectl apply -f deployments/common/ns-and-sa.yaml
kubectl apply -f deployments/rbac/rbac.yaml
kubectl apply -f examples/shared-examples/default-server-secret/default-server-secret.yaml
kubectl apply -f deployments/common/nginx-config.yaml
kubectl apply -f deployments/common/ingress-class.yaml
kubectl apply -f config/crd/bases/k8s.nginx.org_virtualservers.yaml
kubectl apply -f config/crd/bases/k8s.nginx.org_virtualserverroutes.yaml
kubectl apply -f config/crd/bases/k8s.nginx.org_transportservers.yaml
kubectl apply -f config/crd/bases/k8s.nginx.org_policies.yaml
kubectl apply -f config/crd/bases/k8s.nginx.org_globalconfigurations.yaml
kubectl apply -f deployments/daemon-set/nginx-ingress.yaml
kubectl apply -f deployments/service/loadbalancer.yaml
Or you can simply use helm
You can check which IP address the service is exposed on with:
kubectl get services -n nginx-ingress
You can create a wildcard DNS record pointing to that address to make exposing services easier, for example:
*.my.cluster.com
Storage configuration
Longhorn
Longhorn is a system that manages persistent storage inside your Kubernetes cluster. On each storage node in your cluster you must install (on Debian/Ubuntu):
apt-get install open-iscsi nfs-common
By default Longhorn stores data on each node in a directory, so you can mount your storage drives at
/var/lib/longhorn/
before installing Longhorn itself:
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.1/deploy/longhorn.yaml
Or you can simply use helm
To make the Longhorn UI available through ingress, create a file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    acme.cert-manager.io/http01-edit-in-place: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    ingress.kubernetes.io/ssl-redirect: "False"
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: longhorn.my.cluster.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
  tls:
  - hosts:
    - longhorn.my.cluster.com
    secretName: some-name-tls
then apply it:
kubectl apply -f file.yml
Additional storage class
The default storage class comes with this configuration:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: "Retain"
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fromBackup: ""
  fsType: "ext4"
  dataLocality: "disabled"
  unmapMarkSnapChainRemoved: "ignored"
You may want to create a new storage class with a different config, for example, if you prefer performance over redundancy and want to save space.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-fast
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: "Retain"
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "1"
  staleReplicaTimeout: "30"
  fromBackup: ""
  fsType: "ext4"
  dataLocality: "best-effort"
  unmapMarkSnapChainRemoved: "ignored"
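Workloads then pick a class by name in their volume claims; a hypothetical claim using the longhorn-fast class (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data                   # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn-fast # single-replica, locality-preferring class
  resources:
    requests:
      storage: 10Gi
```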
Kubernetes Dashboard
If you want to be extra fancy you can deploy a web UI dashboard for your Kubernetes cluster. You need Helm for this one:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update
Fetch the default values file:
helm show values kubernetes-dashboard/kubernetes-dashboard > values.yaml
Edit it and enable ingress:
ingress:
  enabled: true
  hosts:
  - dash.my.cluster.com
  ingressClassName: nginx
And install it with the modified values file:
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard --values values.yaml
Create a service account for accessing the dashboard as admin
Create a service account file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myusername
  namespace: kubernetes-dashboard
then apply it:
kubectl apply -f file.yml
Then create a file to bind your user to the built-in cluster-admin role:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myusername
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: myusername
  namespace: kubernetes-dashboard
then apply it:
kubectl apply -f file.yml
And finally, create a token you can use to log in to the dashboard:
kubectl -n kubernetes-dashboard create token myusername