Kubernetes for IoT — The magic behind K3S
Lightweight — Simple — Easy — Secure — Production-Ready
Everything started with a desire to have a lightweight K8s distribution that can run in constrained environments for a whole plethora of use cases.
Rancher set out to create an easy-to-use, reliable solution with a minimal learning curve: a Kubernetes cluster that uses half the memory of vanilla Kubernetes and ships as a single binary of around a hundred megabytes. By removing all superfluous components, the final result was astonishing: a production-grade K8s cluster well tuned to constrained environments, secure from day one.
I’ll dig deeper into it in a moment. Stay tuned!
K3s — Infinite possibilities!
How do you want to run K3s? On Edge/IoT devices, on your laptop, on dedicated hardware, on a Raspberry Pi, in a VM? The options are unlimited!
K3s can be installed in places that you would have never dreamt would run K8s only a short while ago. K3s can even be deployed in HA mode.
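An HA deployment typically backs multiple K3s servers with a shared external datastore. A minimal sketch of that pattern (the MySQL endpoint, hostnames, and token below are placeholders for illustration; adjust for your environment):

```shell
# First server: point K3s at a shared external datastore instead of
# the embedded SQLite (placeholder credentials/host shown).
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://user:pass@tcp(db-host:3306)/k3s"

# Additional servers join the control plane by using the same
# datastore endpoint and the shared cluster token:
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://user:pass@tcp(db-host:3306)/k3s" \
  --token=<shared-secret>
```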
IoT & Edge challenges!
Embedded devices at the Edge, in IoT, and on ARM-based boards often impose tight resource constraints and limitations.
- Edge devices are generally not centrally managed, which adds complexity through a lack of control and makes updates challenging.
- IoT devices are often even more resource-constrained; K3s can therefore be an ideal solution if you want to use Kubernetes in an IoT scenario.
K3s — Blowing away the Kubernetes complexity.
Rancher invested in K3s by removing much of the default bloat found in vanilla Kubernetes, such as alpha components and in-tree storage drivers. K3s bundles into one binary: containerd, Flannel, Traefik, the local-path provisioner, host utilities, and a service load balancer. K3s also ships with an embedded SQLite database as a replacement for the heavier etcd, and more.
Below is a subset of the functionality that was removed from the source code, on the assumption that it is not needed in every deployment:
- The dependency on Docker, replaced with containerd,
- Around 3 million lines of code covering alpha and non-default features, plus provider-specific code for storage and networking,
- All third-party storage drivers, dropped in favor of the standardized CSI (Container Storage Interface).
All of the embedded components are swappable; if you prefer Calico, just disable Flannel and include the required network plugin.
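As a sketch of that swap, the install script accepts extra server flags through the `INSTALL_K3S_EXEC` environment variable; the flags below disable the bundled Flannel backend so a CNI of your choice can be applied afterwards (the Calico manifest URL is the upstream-published one; verify it against the current Calico docs before using it):

```shell
# Install K3s without Flannel, leaving CNI setup to you.
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy" sh -

# Then apply your preferred CNI's manifests, e.g. Calico's:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```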
All of this makes K3s a complete all-in-one toolset included in a production distribution of Kubernetes, which runs with the smallest possible footprint. The binary itself is under 100 MB, small enough to run even on a Raspberry Pi Zero, and while running, all the components use less than 512 MB of memory.
Community — behind the lines?
The project was released in February 2019 and reached its v1.0 GA at KubeCon in November of the same year. In the months following the release, K3s got a lot of attention from the community: 14k+ GitHub stars, 500k+ binary downloads, 700k+ image pulls, 800k+ nodes launched, and more.
K3s joined the CNCF as a sandbox project.
Single-Node Install with K3s
The fastest way to install K3s is a `Single-Node` deployment. This method provisions a single server that acts as both control plane and worker. This kind of installation is obviously not HA, but there are lots of places where a single-node K8s cluster has value, as long as whatever it’s managing isn’t mission-critical. As a recovery path, it’s often faster to bring up a replacement node and reapply the manifests than it is to manage an HA deployment.
Before diving into the setup, note the minimal requirements: a supported Linux host and a shell. Docker is not needed, since K3s bundles containerd.
K3s can be installed with ease by pulling down an install script and piping it to `sh`. Done!
curl -sfL https://get.k3s.io | sh -
Be careful what you run! Piping a script straight from a URL into your shell is a security risk; please verify the script’s content beforehand.
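A safer pattern is to download the installer to a file, review it, and only then execute it:

```shell
# Download the installer instead of piping it straight into sh.
curl -sfL https://get.k3s.io -o install-k3s.sh

# Review the script's contents before executing it.
less install-k3s.sh

# Run it once you're satisfied with what it does.
sh install-k3s.sh
```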
Well, the script fetches K3s, installs it, creates symlinks, deploys a systemd unit, and then starts it up. All in all, you’re up and running in under 10 seconds (excluding the download time).
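Since the installer registers K3s as a systemd service, the standard systemd tooling is the quickest way to confirm it came up cleanly:

```shell
# Check that the k3s unit the installer created is active.
systemctl status k3s

# Follow the service logs if anything looks off.
journalctl -u k3s -f
```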
Playing with K3s
You can type `k3s kubectl cluster-info`, or download the auth file and type `kubectl cluster-info` to get the same result.
k3s kubectl get nodes
The command’s output shows a single node cluster up & running.
When the cluster first comes up, it writes a kubeconfig file to /etc/rancher/k3s/k3s.yaml. This is the default location the embedded kubectl (invoked as `k3s kubectl`) reads from. If you copy it off the host and change the server’s address, you can use it with kubectl on any other machine.
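Concretely, that remote-access workflow might look like the following sketch (the IP address 192.168.1.50 is a hypothetical server address; substitute your own):

```shell
# Copy the kubeconfig off the K3s host (example IP shown).
scp root@192.168.1.50:/etc/rancher/k3s/k3s.yaml ~/k3s.yaml

# The file points at the loopback address by default; rewrite it
# to the server's reachable IP.
sed -i 's/127.0.0.1/192.168.1.50/' ~/k3s.yaml

# Point kubectl at the copied config and talk to the cluster.
export KUBECONFIG=~/k3s.yaml
kubectl cluster-info
```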
Uninstalling K3s is as fast as installing it: every installation places an uninstall script at /usr/local/bin/k3s-uninstall.sh. Running it removes all traces of K3s from the system, as if it never existed.
I encourage you to check out this free deep dive from The Linux Foundation. It provides a good set of practical use cases for Kubernetes at the Edge, based on the K3s project and the cloud-native edge ecosystem. 👇
Introduction to Kubernetes on Edge with k3s (LFS156x) - Linux Foundation - Training
Learn the use cases and applications of Kubernetes at the edge through practical examples, hands-on lab exercises, and a…
Please, do not hesitate to provide me with your valuable comments.