
Setting up a HA Kubernetes cluster using K3S


Kubernetes is one of the most sought-after technologies of our time and is used by most prominent companies. Still, getting started with production-ready multi-node setups can be difficult as well as expensive.

This is where K3S comes into play. K3S is a lightweight, fully compliant Kubernetes distribution designed to run on low-resource machines like IoT devices.

In this article, you will set up your own high availability K3S cluster and deploy basic Kubernetes workloads like the Kubernetes dashboard. The tutorial will show how to set up all the resources manually first, but there is also an automated option using the official Ansible script, which is covered later in the article.


Setting up Servers

Before we can get started setting up K3S, we first need to set up the required servers. The HA setup in this article uses at least three nodes: two masters and one worker. The only requirement for the nodes is that they have static IP addresses.

I personally use the following setup for getting my nodes ready:

Database

K3S supports multiple databases for HA (high availability) deployments, including MySQL and PostgreSQL, as well as the embedded etcd database. It is highly recommended to use a highly available database hosted in the cloud. Still, for this article's sake, we will use a MySQL instance inside a Docker container, either on one of the created VMs or preferably on a separate standalone VM. If you are using a different external database, the only thing you need to change is the K3S_DATASTORE_ENDPOINT string later on.
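For reference, the format of that connection string differs slightly per database. The lines below are only examples of the documented endpoint formats; the hostnames, ports and credentials are placeholders for your own setup.

# MySQL (used in this article)
export K3S_DATASTORE_ENDPOINT='mysql://k3s:k3spass@tcp(database_ip_or_hostname:3306)/db'

# PostgreSQL (alternative external datastore)
export K3S_DATASTORE_ENDPOINT='postgres://k3s:k3spass@database_ip_or_hostname:5432/db'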

First, you will need to create a docker-compose YAML file:

nano docker-compose.yaml

The docker-compose file defines the MySQL database and Adminer, a web-based tool that helps manage the database. MySQL is initialized with a non-root user as well as a pre-created database named db and will be running on port 3306 of your virtual machine.

version: "3.3"
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: "db"
      # So you don't have to use root, but you can if you like
      MYSQL_USER: "k3s"
      # You can use whatever password you like
      MYSQL_PASSWORD: "k3spass"
      # Password for root access
      MYSQL_ROOT_PASSWORD: "rootpass"
    ports:
      # <Port exposed> : < MySQL Port running inside container>
      - "3306:3306"
    expose:
      # Opens port 3306 on the container
      - "3306"
    # Where our data will be persisted
    volumes:
      - my-db:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
# Names our volume
volumes:
  my-db:

The database can then be started using the following command:

docker-compose up -d

Your MySQL database should now be available on port 3306 of your machine. Also, make sure that the port is reachable from your K3S nodes and not blocked by a firewall.
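If you want to double-check that before moving on, a quick connectivity test from one of the future K3S nodes can save debugging time later. This is just a sketch; the hostname is a placeholder for your database host, and the ufw command only applies if you use ufw as your firewall.

# Check that the MySQL port is reachable from a K3S node (placeholder hostname)
nc -zv database_ip_or_hostname 3306

# If the database host uses ufw, allow the port explicitly
sudo ufw allow 3306/tcp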

Optionally, you can also use the embedded etcd database that I mentioned earlier. Note: to run K3S in this configuration, you must have an odd number of server nodes.
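If you go the embedded etcd route, no external database or K3S_DATASTORE_ENDPOINT is needed. A rough sketch of that approach, based on the K3S documentation, looks like this:

# On the first server node: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# On the remaining server nodes: join using the first server's address and token
curl -sfL https://get.k3s.io | K3S_TOKEN=token_from_first_server \
  sh -s - server --server https://first_server_ip:6443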

Load balancer

Since the deployment has multiple Kubernetes master nodes, a load balancer is needed to distribute the traffic between the two nodes. The easiest way to do this is with the following Nginx configuration:

events {}

stream {
  upstream k3s_servers {
    server 192.168.88.70:6443;
    server 192.168.88.71:6443;
  }

  server {
    listen 6443;
    proxy_pass k3s_servers;
  }
}

Here we define an upstream containing our two master nodes, which are both running K3S on port 6443. All connections to the Nginx endpoint are then proxied to the k3s_servers group, and Nginx applies round-robin TCP load balancing (this is a stream block, not an HTTP one) to distribute the connections between the nodes.

The configuration can easily be run using the following docker-compose configuration:

version: "3"

services:
  nginx:
    image: "nginx:mainline"
    ports:
      - 6443:6443
    restart: on-failure
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf

There are also high availability (HA) load balancers like HAProxy, which add extra functionality such as health checks for the individual master nodes. If a node goes down, traffic is automatically routed to one of the remaining nodes. Here is the same configuration as above using HAProxy instead of Nginx.

frontend kubernetes-frontend
    bind *:6443
    mode tcp
    option tcplog
    timeout client 10s
    default_backend kubernetes-backend

backend kubernetes-backend
    timeout connect 10s
    timeout server 10s
    mode tcp
    option tcp-check
    balance roundrobin
    server k3s-master-1 192.168.88.70:6443 check fall 3 rise 2
    server k3s-master-2 192.168.88.71:6443 check fall 3 rise 2

The proxy also listens on port 6443 and forwards the traffic to the two master nodes that were set up above. The configuration can be run using the following docker-compose file:

version: "3"

services:
  haproxy:
    image: haproxy:alpine
    container_name: haproxy
    restart: always
    ports:
      - "6443:6443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
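Whichever variant you choose, the load balancer can be started the same way as the database:

docker-compose up -d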

Installing K3S on master nodes

Now that the database and load balancer have been set up successfully, it is time to install K3S on our master nodes. The first thing we need to do is store the database connection URL in an environment variable that is automatically picked up when installing K3S.

export K3S_DATASTORE_ENDPOINT='mysql://username:password@tcp(database_ip_or_hostname:port)/db'

After that, K3S can be installed using the following command, which takes the load balancer IP address as an argument.

curl -sfL https://get.k3s.io | sh -s - server --node-taint CriticalAddonsOnly=true:NoExecute --tls-san load_balancer_ip_or_hostname

The --node-taint parameter tells Kubernetes that only critical pods should run on our master nodes. Regular pods like custom applications should be scheduled on the worker nodes instead.
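If you want to confirm that the taint is in place, you can inspect the node after it has registered. The node name below is just an example; use the hostname of your actual master.

# Show the taints on a master node (example node name)
sudo k3s kubectl describe node k3s-master-1 | grep -i taints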

If you would like to run k3s kubectl commands on your master node without sudo, you can make the K3S config file readable by executing the following command:

sudo chmod 644 /etc/rancher/k3s/k3s.yaml

After finishing, you can repeat the process for your second master node. Once k3s is successfully installed on both nodes, we can get the token required to connect a worker node to the Kubernetes cluster.

sudo cat /var/lib/rancher/k3s/server/node-token
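Before moving on to the workers, you can also make a quick sanity check that the Kubernetes API is reachable through the load balancer. Depending on your cluster's anonymous-auth settings you should either see version information or an authorization error; a connection timeout or refusal would point to a load balancer problem.

# Placeholder address; replace with your load balancer's IP or hostname
curl -k https://load_balancer_ip_or_hostname:6443/version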

Installing K3S on workers

Now that you have copied the connection token from one of your master nodes, you can continue by installing K3S on your workers, providing the load balancer IP address and the connection token to the following command:

curl -sfL https://get.k3s.io | K3S_URL=https://load_balancer_ip_or_hostname:6443 K3S_TOKEN=mynodetoken sh -

This installs the k3s-agent on your node and connects it to the provided cluster. You can check the connection by running the k3s kubectl get nodes command on one of your master nodes. If the installation was successful, you should see the worker node in the output of the command.
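Optionally, you can also give the new node a worker role label so that it shows up as a worker in the node list. The node name below is a placeholder for your worker's hostname.

# Run on one of the master nodes; replace the node name with your worker's hostname
sudo k3s kubectl label node k3s-worker-1 node-role.kubernetes.io/worker=worker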

Copying k3s credentials to the development machine

After all the nodes are successfully set up, we can connect to the Kubernetes cluster by copying the Kubernetes config from one of the master nodes. The credentials can be printed using the following command.

sudo cat /etc/rancher/k3s/k3s.yaml

After that, copy the credentials to ~/.kube/config on your development machine and be sure to change the IP address from 127.0.0.1 to the IP address of your actual load balancer.
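A minimal sketch of that step, run from your development machine, could look like the following. It assumes SSH access to the master with passwordless sudo and GNU sed (Linux); adjust the user, addresses and paths to your setup.

# Copy the kubeconfig from a master node to the development machine
mkdir -p ~/.kube
ssh user@master_ip "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config

# Point the config at the load balancer instead of 127.0.0.1
sed -i 's/127.0.0.1/load_balancer_ip_or_hostname/' ~/.kube/config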

You should then be able to use the K3S cluster, which can be verified by printing all nodes of the Kubernetes cluster.

kubectl get nodes

Setting up K3S using Ansible

Another way to set up a K3S cluster is to use Ansible to provision all your nodes automatically. HA (high availability) K3S is currently not supported by the official Ansible script, but a community member is already working on the implementation. Therefore, I will show how to create a regular K3S cluster and update the article once the functionality is available.

First, you will need to clone the K3S-Ansible repository, so you have all the resources necessary to start setting up K3S.

git clone https://github.com/k3s-io/k3s-ansible.git

This command clones the K3S-Ansible repository from GitHub and creates a folder named k3s-ansible in your current directory. Now enter the directory using the following command:

cd k3s-ansible

Now you will need to edit the hosts.ini file of the sample inventory (inventory/sample/hosts.ini) and change the IP addresses to those of your actual servers.

[master]
192.168.88.90

[node]
192.168.88.91
192.168.88.92

[k3s_cluster:children]
master
node

If your servers use a username other than root, you will also need to change the group_vars of the Ansible playbook. Here is an example where I changed the username to server in the group_vars/all.yml file.

---
k3s_version: v1.17.5+k3s1
ansible_connection: ssh
ansible_user: server
systemd_dir: /etc/systemd/system
master_ip: "{{ hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0]) }}"
extra_server_args: ""
extra_agent_args: ""
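Before provisioning, you can optionally confirm that Ansible can reach every host in the inventory with the configured user:

ansible -i inventory/sample/hosts.ini k3s_cluster -m ping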

After making all the configurations, you can now start provisioning the cluster using the following command.

ansible-playbook site.yml -i inventory/sample/hosts.ini

Once the installation is done, you should be able to copy the Kubernetes config file from your master node and connect your development machine to the cluster.

scp server@master_ip:~/.kube/config ~/.kube/config

You should then be able to use the K3S cluster, which can be verified by printing all nodes of the Kubernetes cluster.

kubectl get nodes

Debugging errors

This section can be used if you are running into any errors with your K3S cluster and need to debug the problem further. If everything is running fine, you can skip this section.

You can check for errors in your K3S configuration using the following command:

k3s check-config

This should give you information about the health of your K3S configuration or a clue about the error you are having.

After checking the configuration, you can also check the current logs and status of your K3S deployment on your master node using the following commands:

# Get k3s server logs
systemctl status k3s

# Full logs
journalctl -u k3s

# Save logs into a file
journalctl -u k3s > logs.txt

You can also do the same on the K3S worker nodes.

# Get k3s worker logs
systemctl status k3s-agent

# Full logs
journalctl -u k3s-agent

# Save logs into a file
journalctl -u k3s-agent > logs.txt
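If a node appears stuck or unhealthy in the logs, restarting the K3S service is often a quick first step before digging deeper:

# Restart K3S on a master node
sudo systemctl restart k3s

# Restart the agent on a worker node
sudo systemctl restart k3s-agent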

After running these commands, you should have enough information about your error to start researching on the internet or open a GitHub issue/discussion if needed.


Conclusion

In this article, you set up an HA Kubernetes cluster on your own hardware using K3S and connected your development machine to it.

If you have found this article helpful, please consider recommending and sharing it with fellow developers and subscribing to my newsletter. If you have any questions or feedback, let me know using my contact form or contact me on Twitter.