You can manually set up k0s nodes by creating a multi-node cluster that is locally managed on each node. This involves several steps: first install each node separately, then connect the nodes together using access tokens.
Note: Before proceeding, make sure to review the System Requirements.
Though the Manual Install material is written for Debian/Ubuntu, you can use it for any Linux distribution that runs either a systemd or OpenRC init system.
You can speed up the use of the k0s command by enabling shell completion.
Run the k0s download script to download the latest stable version of k0s and make it executable from /usr/bin/k0s.
curl -sSLf https://get.k0s.sh | sudo sh
The download script accepts the following environment variables:
| Variable | Purpose |
|---|---|
| `K0S_VERSION=v1.23.1+k0s.0` | Select the version of k0s to be installed |
| `DEBUG=true` | Output commands and their arguments at execution |
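A `DEBUG=true` switch like this is commonly implemented with the shell's `set -x` tracing option. The sketch below is an illustration of that pattern only, not the actual `get.k0s.sh` source:

```shell
# Illustration: how a DEBUG=true flag is typically wired up in an install
# script (hypothetical demo script, not the real k0s download script).
cat > /tmp/demo-install.sh <<'EOF'
#!/bin/sh
# When DEBUG=true, echo each command before running it.
[ "$DEBUG" = "true" ] && set -x
echo "installing..."
EOF

# With DEBUG set, the trace of each command is printed to stderr.
DEBUG=true sh /tmp/demo-install.sh
```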
Note: If you require environment variables and use sudo, you can do:
curl -sSLf https://get.k0s.sh | sudo K0S_VERSION=v1.23.1+k0s.0 sh
Create a configuration file:
mkdir -p /etc/k0s
k0s default-config > /etc/k0s/k0s.yaml
Note: For information on settings modification, refer to the configuration documentation.
sudo k0s install controller -c /etc/k0s/k0s.yaml
sudo k0s start
The k0s process acts as a "supervisor" for all of the control plane components. Within moments the control plane will be up and running.
You need a token to join workers to the cluster. The token embeds information that enables mutual trust between the worker and controller(s) and allows the node to join the cluster as a worker.
To get a token, run the following command on one of the existing controller nodes:
k0s token create --role=worker
The resulting output is a long token string, which you can use to add a worker to the cluster.
For enhanced security, run the following command to set an expiration time for the token:
k0s token create --role=worker --expiry=100h > token-file
To join the worker, run k0s in the worker mode with the join token you created:
sudo k0s install worker --token-file /path/to/token/file
sudo k0s start
The join tokens are base64-encoded kubeconfigs for several reasons:
- Well-defined structure
- Capable of direct use as bootstrap auth configs for kubelet
- Embedding of CA info for mutual trust
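To make the kubeconfig structure concrete, the sketch below builds a mock kubeconfig and base64-encodes it, mimicking the shape of a join token. This is an illustration only: real tokens are generated by `k0s token create`, and depending on the k0s version may additionally be compressed before encoding. All values here are placeholders.

```shell
# Build a mock kubeconfig (placeholder server, CA data, and token).
cat > /tmp/mock-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.0.0.1:6443
    certificate-authority-data: LS0tLS1CRUdJTg==
  name: k0s
users:
- name: kubelet-bootstrap
  user:
    token: abcdef.0123456789abcdef
EOF

# Encode it, as a join token would be transported...
base64 < /tmp/mock-kubeconfig.yaml > /tmp/mock-token

# ...then decode to inspect the embedded API server address and CA data.
base64 -d < /tmp/mock-token | grep -E 'server:|certificate-authority-data:'
```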
The bearer token embedded in the kubeconfig is a bootstrap token. For controller join tokens and worker join tokens, k0s uses different usage attributes so that it can validate the token role on the controller side.
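Kubernetes bootstrap tokens follow the fixed format `<6-character id>.<16-character secret>`. The value below is an example for the format check only, not a real secret:

```shell
# Example bootstrap token (placeholder value, not a real secret).
token="abcdef.0123456789abcdef"

# Bootstrap tokens match [a-z0-9]{6}.[a-z0-9]{16}.
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "well-formed bootstrap token"
fi
```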
Note: Either etcd or an external data store (MySQL or Postgres) via kine must be in use to add new controller nodes to the cluster. Pay strict attention to the high availability configuration and make sure the configuration is identical for all controller nodes.
To create a join token for the new controller, run the following command on an existing controller:
k0s token create --role=controller --expiry=1h > token-file
On the new controller, run:
sudo k0s install controller --token-file /path/to/token/file -c /etc/k0s/k0s.yaml
Note that each controller in the cluster must have a k0s.yaml; otherwise, some cluster nodes will use default configuration values, which leads to inconsistent behavior. If your configuration file includes IP addresses (node address, SANs, etcd peerAddress), remember to update them accordingly for this specific controller node.
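As a sketch of updating those addresses, the example below edits a minimal k0s.yaml copied from the first controller. The field names follow the k0s configuration schema (`spec.api.address`, `spec.api.sans`, `spec.storage.etcd.peerAddress`); the IP addresses and the `/tmp` path are placeholders for this illustration:

```shell
# Mock k0s.yaml as copied from the first controller (10.0.0.1).
cat > /tmp/k0s.yaml <<'EOF'
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
spec:
  api:
    address: 10.0.0.1
    sans:
    - 10.0.0.1
  storage:
    etcd:
      peerAddress: 10.0.0.1
EOF

# Rewrite every node-specific address to this controller's own IP (10.0.0.2).
sed -i 's/10\.0\.0\.1/10.0.0.2/g' /tmp/k0s.yaml

# Verify no stale addresses remain.
grep -n '10.0.0.2' /tmp/k0s.yaml
```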
sudo k0s start
To get general information about your k0s instance's status:
$ sudo k0s status
Version: v1.23.1+k0s.0
Process ID: 2769
Parent Process ID: 1
Role: controller
Init System: linux-systemd
Service file: /etc/systemd/system/k0scontroller.service
Use the Kubernetes kubectl command-line tool, which is embedded in the k0s binary, to deploy your application or check your node status:
$ sudo k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
k0s Ready <none> 4m6s v1.23.1-k0s1
You can also access your cluster easily with Lens, simply by copying the kubeconfig and pasting it into Lens:
sudo cat /var/lib/k0s/pki/admin.conf
Note: To access the cluster from an external network, you must replace `localhost` in the kubeconfig with the host IP address of your controller.
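That replacement can be scripted with sed. The sketch below uses a mock kubeconfig at `/tmp/admin.conf` and a placeholder IP; on a real node you would operate on a copy of `/var/lib/k0s/pki/admin.conf` and use your controller's actual address:

```shell
# Mock kubeconfig standing in for a copy of /var/lib/k0s/pki/admin.conf.
cat > /tmp/admin.conf <<'EOF'
clusters:
- cluster:
    server: https://localhost:6443
  name: local
EOF

# Placeholder controller address; substitute your controller's real IP.
CONTROLLER_IP=192.0.2.10
sed -i "s/localhost/${CONTROLLER_IP}/" /tmp/admin.conf

grep 'server:' /tmp/admin.conf
```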
- Install using k0sctl: Deploy multi-node clusters using just one command
- Control plane configuration options: Networking and datastore configuration
- Worker node configuration options: Node labels and kubelet arguments
- Support for cloud providers: Load balancer or storage configuration
- Installing the Traefik Ingress Controller: Ingress deployment information