Kamaji flavor for the mini-lab
We are happy to announce that
capi-lab, our Cluster API
extension of mini-lab, now contains a Kamaji flavor. 🥳
After getting to know the Kamaji devs at FOSDEM, we had the idea to
explore Kamaji on top of metal-stack. In this blog post we are going to give you some insights into how to provision
Kamaji tenant clusters on metal-stack. You will also learn how to get started with Kamaji
and metal-stack yourself, based on our
capi-lab setup.
What is Kamaji?
Kamaji is a Control Plane Manager for Kubernetes, designed to simplify how you run and manage Kubernetes clusters. Instead of deploying control-planes on dedicated machines, Kamaji runs them as pods within a single management cluster, cutting down on operational overhead and costs. It supports multi-tenancy, high availability, and integrates seamlessly with Cluster API, making it ideal for private clouds, public clouds, bare metal, and edge computing.
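To give a sense of what this looks like in practice: a Kamaji control-plane is declared as a custom resource in the management cluster. The following is a minimal sketch using the kamaji.clastix.io/v1alpha1 TenantControlPlane API; exact field names and defaults may vary between Kamaji releases, so treat this as illustrative rather than authoritative:

```yaml
# Minimal sketch of a Kamaji tenant control-plane resource.
# The control-plane runs as pods in the management cluster.
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
spec:
  controlPlane:
    deployment:
      replicas: 2            # HA via multiple control-plane pods
    service:
      serviceType: LoadBalancer
  kubernetes:
    version: "v1.30.0"
```

Applying such a resource makes Kamaji spin up the API server, controller-manager and scheduler as pods, without any dedicated control-plane machines.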
Architecture / Setup
How can metal-stack and Kamaji cooperate?
Our focus is on showcasing how Kamaji can manage Kubernetes clusters on top of metal-stack via Cluster API, rather than running Kamaji itself on metal-stack.
Kamaji's support for Cluster API allows us to use our existing
cluster-api-provider-metal-stack to provision and
manage the tenant clusters that Kamaji oversees. This way we leverage the strengths of both platforms:
metal-stack for bare-metal provisioning and Kamaji for streamlined Kubernetes cluster management.
Below you can find an architectural overview of the Kamaji flavor setup inside the
capi-lab:
The existing capi-lab
kind cluster is extended to host the Kamaji management cluster. The
management cluster is responsible for running the Kamaji control-plane components and managing the lifecycle of
tenant clusters.
As you might have noticed, Kamaji only provides the control-planes for multiple tenant clusters and, therefore,
still needs a provider for the worker nodes, which kubeadm then bootstraps. This is why the metal-stack
control-plane is still needed. It provides a virtually deployed partition via containerlab
and two machines, similar to how the default
mini-lab sonic flavor does.
The final component is CAPMS, the Cluster API Provider for metal-stack, which also runs in the kind cluster and
is responsible for provisioning and managing the tenant clusters on metal-stack. Once Kamaji has requested the
necessary resources for a tenant cluster, CAPMS interacts with the metal-stack APIs to create and manage them.
In order to create a working Kamaji flavor showcase in our
capi-lab, we had to do some
tinkering and, in the end, rewire the existing components to make them harmonize.
Deploying Kamaji
The Kamaji control-plane components are deployed into the kind cluster with additional Ansible roles, reproducing the Kamaji Kind setup.
Kamaji as ControlPlaneProvider and CAPMS as InfrastructureProvider
Kamaji is deployed as a ControlPlaneProvider in the kind cluster, while CAPMS is
deployed as an InfrastructureProvider. We set up the providers via
clusterctl init --control-plane kamaji --infrastructure metal-stack and configure the necessary RBAC permissions for
Kamaji to access the tenant clusters via CAPMS. As metal-stack is not a supported infrastructure
provider in Kamaji yet, we had to patch the Kamaji provider components to allow for that. The RBAC permissions and
patches will be included in an upcoming Kamaji release, so that metal-stack can be used as an infrastructure
provider out of the box. clusterctl init also installs the core Cluster API provider, the Cluster API Bootstrap Provider
Kubeadm (CABPK), and the Cluster API kubeadm control-plane provider, which are all required for the tenant cluster
provisioning.
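Put together as commands, the provider setup looks roughly like this. This is a sketch: the namespace names below are the usual clusterctl defaults and may differ in your environment:

```shell
# Initialize the management cluster with Kamaji as control-plane
# provider and metal-stack as infrastructure provider.
clusterctl init \
  --control-plane kamaji \
  --infrastructure metal-stack

# Verify that the provider pods came up (namespaces are the
# clusterctl defaults and may differ in your setup).
kubectl get pods -n kamaji-system
kubectl get pods -n capms-system
```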
A cluster template for Kamaji tenant clusters
Before we can spawn tenant clusters with Kamaji, we need to define how they should be configured. This is done
with a cluster template: the example template needs
some adjustments to work with our setup, leading to a
custom template
in our capi-lab.
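At its core, such a template wires Kamaji and CAPMS together through the Cluster object's controlPlaneRef and infrastructureRef. A minimal sketch is shown below; the API versions and the MetalStackCluster kind are assumptions based on common Cluster API conventions, so consult the actual template in capi-lab for the authoritative version:

```yaml
# Sketch: the Cluster object links the Kamaji control-plane to the
# metal-stack infrastructure. Kinds/versions are assumptions.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: KamajiControlPlane        # provided by the Kamaji provider
    name: ${CLUSTER_NAME}
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: MetalStackCluster         # provided by CAPMS (assumed name)
    name: ${CLUSTER_NAME}
```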
Provisioning tenant clusters
With the management cluster running and the template in place, we can now generate tenant cluster manifests with
clusterctl generate cluster and apply them. Kamaji will then take care of provisioning the tenant clusters on
metal-stack via CAPMS. At least one firewall and one worker node are provisioned for each tenant cluster, and the
control-plane components are deployed as pods in the management cluster. The tenant cluster control-plane is
accessible via an IP address provided by metal-stack, and the worker nodes join via CABPK and kubeadm.
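The generate-and-apply step can be sketched as follows; the cluster name, template file name, and variable values are illustrative, not prescribed by the setup:

```shell
# Render the tenant cluster manifests from the custom template and
# apply them to the management cluster (values are examples).
clusterctl generate cluster tenant-00 \
  --from custom-template.yaml \
  --kubernetes-version v1.30.0 \
  --worker-machine-count 1 \
  | kubectl apply -f -
```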
Getting tenant clusters up and running
Once the tenant cluster provisioning is triggered, Kamaji will manage its lifecycle. For metal-stack deployments
to get fully ready, metal-ccm needs to be deployed into the tenant
cluster. Finally, a CNI plugin can be installed to get the tenant cluster fully operational.
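The final steps above might look like this in practice. This is a sketch: the metal-ccm manifest path and the choice of Calico as CNI are assumptions, so check the metal-ccm repository and your preferred CNI's docs for the actual installation method:

```shell
# Fetch the tenant cluster's kubeconfig from the management cluster.
clusterctl get kubeconfig tenant-00 > tenant-00.kubeconfig

# Deploy metal-ccm into the tenant cluster (manifest path is an
# assumption; see the metal-ccm repository for your release).
kubectl --kubeconfig tenant-00.kubeconfig apply -f metal-ccm.yaml

# Install a CNI plugin, e.g. Calico, so the nodes become Ready.
kubectl --kubeconfig tenant-00.kubeconfig apply -f \
  https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
```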
Getting started
Head over to the cluster-api-provider-metal-stack
repository and follow the setup instructions in DEVELOPMENT.md to try it out yourself.

