[!NOTE]
Learn how to set up a full Kubernetes cluster inside your local environment.

[!NOTE]
Supported K8s distributions with Kubewekend
| Kubewekend Cluster Distribution | Local | VM | VPS Remote |
|---|---|---|---|
| Kind (K8s in Docker) | ✅ | ✅ | ✅ |
| K3s Standalone | ✅ | ✅ | ✅ |
| K3s High Availability (HA) | 🚧 | ✅ | ✅ |
| RKE2 | 🚧 | 🚧 | 🚧 |
| Tool | Required | Purpose |
|---|---|---|
| VirtualBox | Yes* | VM provider |
| Vagrant | Yes* | VM provisioning |
| Ansible | Yes | Cluster orchestration |
| kubectl | Yes | Kubernetes CLI |
| Helm | Yes | Chart management |
| Docker | Optional | Required for Kind clusters |
| kind | Optional | Kind binary (also installed by playbook) |
* Vagrant + VirtualBox are required only for local VM workflows. For remote VPS targets, only Ansible + SSH are needed.
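If you want to sanity-check the prerequisites by hand (the `env check` command below automates this), the standard version flags work for every tool in the table:

```bash
# Manual equivalents of `./scripts/setup.sh env check`
vagrant --version
ansible --version
kubectl version --client
helm version --short
docker --version   # only needed for Kind clusters
```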
The Setup CLI (`scripts/setup.sh`)

[!TIP]
All cluster operations are unified under a single CLI. Run it from the project root; there is no need to invoke `ansible-playbook` or `vagrant` directly.
```bash
# Show all available commands
./scripts/setup.sh help

# Check prerequisites
./scripts/setup.sh env check

# Show subcommand help
./scripts/setup.sh <command> help
```
| Command | Purpose |
|---|---|
| `env` | Check tools, initialise `.env` |
| `vagrant` | VM lifecycle: up, halt, destroy, ssh |
| `inventory` | Generate/inspect Ansible inventory, set remote VPS |
| `kind` | Kind cluster: setup, destroy, utilities |
| `k3s` | K3s cluster: standalone, HA, destroy, utilities |
| `network` | VirtualBox NAT forwarding (hook-up / return) |
| `config` | View / edit `master.yaml` and `worker.yaml` |
| `status` | Project-wide status dashboard |
| `quickstart` | Guided end-to-end workflows |
See `scripts/README.md` for the full CLI reference.
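As an illustration of how the subcommands chain together, a typical first local run might look like the sketch below (machine name taken from the Vagrant examples later in this document):

```bash
./scripts/setup.sh env check                      # verify tools, initialise .env
./scripts/setup.sh vagrant up k8s-master-machine  # provision the VM
./scripts/setup.sh inventory generate             # build the Ansible inventory
./scripts/setup.sh kind setup                     # create the Kind cluster
./scripts/setup.sh status                         # project-wide dashboard
```

The `quickstart` subcommand offers guided end-to-end workflows if you prefer not to run each step yourself.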
[!NOTE]
Read more at Kubewekend Session 1: Build up your host with Vagrant
```bash
# Provision only the master node
./scripts/setup.sh vagrant up k8s-master-machine

# Provision master + one worker (K3s standalone / Kind)
./scripts/setup.sh vagrant up k8s-master-machine k8s-worker-machine-1

# Provision master + multiple workers (K3s HA)
./scripts/setup.sh vagrant up k8s-master-machine k8s-worker-machine-1 k8s-worker-machine-2
```
[!NOTE]
You can also use `vagrant` directly with `--provider=virtualbox`. The CLI wraps it for convenience and ensures you stay in the project root.
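For reference, the direct invocation the CLI wraps looks like this (run from the project root):

```bash
vagrant up --provider=virtualbox k8s-master-machine
```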
The `hosts` File

[!IMPORTANT]
The inventory file at `ansible/inventories/hosts` is split into two sections. Make sure the right section is populated before running a playbook.
| Section | Group | Used by |
|---|---|---|
| SECTION 1: Standalone | `standalone-masters`, `standalone-workers` | `kind-playbook.yaml`, `k3s-playbook.yaml` |
| SECTION 2: HA | `ha_master_init`, `ha_master_join`, `ha_worker` | `k3s-ha-playbook.yaml` |
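For illustration only, a hand-written SECTION 1 might look like the sketch below; the addresses and SSH user are placeholders, so prefer the generator that follows:

```ini
# Hypothetical SECTION 1 of ansible/inventories/hosts (placeholder IPs)
[standalone-masters]
k8s-master-machine ansible_host=192.168.56.10 ansible_user=vagrant

[standalone-workers]
k8s-worker-machine-1 ansible_host=192.168.56.11 ansible_user=vagrant
```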
Generate the inventory automatically from running Vagrant VMs:
```bash
./scripts/setup.sh inventory generate

# Or ping all known hosts to verify connectivity
./scripts/setup.sh inventory ping

# For a remote (non-Vagrant) VPS target
./scripts/setup.sh inventory set-remote
```
[!NOTE]
After the 12/2025 and 01/2026 upgrades, the Ansible playbooks have been rebuilt around multiple concepts, allowing you to configure many aspects of your Kind or K3s cluster so you can test and experiment with K8s features.
The tables below show what is currently implemented.
Kind (`kind-playbook.yaml`)
| Name of Task | Description | Tags | State |
|---|---|---|---|
| Install Common Kubewekend Tools | Install common libraries, kind, and dependencies for your host | `install_common` | ✅ |
| Setup Kind Cluster | Create the Kind cluster, mounting the kind-config template to the Ansible host | `setup_kind` | ✅ |
| Setup Kind Network (CNI) | Set up networking for the Kind cluster when `disableDefaultCNI: true` (Options: Calico, Flannel, Cilium) | `setup_kind` | ✅ |
| Setup Load Balancer for Kind cluster | Set up a load balancer for external LoadBalancer-type services (Options: metallb, cloud-provider-kind, cilium-ipam-lb) | `setup_kind` | ✅ |
| Setup Ingress Controller for Kind cluster | Set up an ingress controller (Options: NGINX, Traefik, Cilium, Kong) | `setup_kind` | ✅ |
| Setup GatewayAPI for Kind cluster | Set up Gateway API (Options: Kong, Cilium, Traefik) | `setup_kind` | ✅ |
| Setup Network Forwarding for port 80/443 from host to Kind cluster | Forward host ports 80/443 into the Kind cluster via socat | `setup_kind` | ✅ |
| Remove Kind cluster | Remove the Kind cluster and related components | `remove_kind` | ✅ |
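If you want to bypass the CLI and drive Ansible yourself, a tag-scoped run might look like this sketch (the playbook path is an assumption based on the file name above; the CLI wrapper remains the supported route):

```bash
# Path to kind-playbook.yaml is assumed -- check the repo layout first
ansible-playbook -i ansible/inventories/hosts ansible/kind-playbook.yaml \
  --tags install_common,setup_kind
```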
K3s (`k3s-playbook.yaml` · `k3s-ha-playbook.yaml`)
| Name of Task | Description | Tags | State |
|---|---|---|---|
| Install Common K3s Node Packages | Install common libraries and dependencies on the target node | `install_common` | ✅ |
| Setup K3s Standalone (master or worker) | Deploy the K3s server (master) or agent (worker), one node at a time via `--host` | `setup_k3s` | ✅ |
| Setup K3s High Availability (HA) Cluster | Bootstrap the etcd init node, join additional control-plane nodes, and attach agents | `setup_k3s` | ✅ |
| Configure CNI (Flannel / Calico / Cilium) | Apply CNI manifests post-install based on `k3sCluster.cni.type` | `setup_k3s` | ✅ |
| Setup Load Balancer (ServiceLB / MetalLB) | Deploy the load balancer and configure the IP pool from `k3sCluster.loadBalancer` | `setup_k3s` | ✅ |
| Setup Ingress + Dashboard + Support API Gateway (Only Traefik) | Deploy the ingress controller and optional dashboard from `k3sCluster.ingress` | `setup_k3s` | ✅ |
| Remove K3s node | Uninstall K3s from a target node (server or agent) | `remove_k3s` | ✅ |
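The same tag mechanism applies here; for example, a sketch of a scoped tear-down of a single worker (playbook path assumed; `--limit` is standard Ansible):

```bash
ansible-playbook -i ansible/inventories/hosts ansible/k3s-playbook.yaml \
  --tags remove_k3s --limit k8s-worker-machine-1
```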
Utilities (`k8s-utilities-playbook.yaml`)
| Name of Task | Description | Tags | State |
|---|---|---|---|
| Ingress test deployment inside the cluster | Deploy a sample workload to validate Ingress routing | `ingress_test` | ✅ |
| API Gateway test deployment inside the cluster | Deploy a sample workload to validate API Gateway routing | `apigateway_test` | ✅ |
| Setup cert-manager for the cluster | Install cert-manager for TLS certificate management | `certmanager` | ✅ |
| Setup Dashboard for the cluster | Install a Kubernetes dashboard (type configured in `utilities`: kubernetes-dashboard, headlamp, rancher) | `dashboard` | ✅ |
| Setup Storage for the cluster | Install Longhorn distributed block storage with optional iSCSI and NFS support | `storage` | ✅ |
| Setup Secret Management for the cluster | Install Vault or OpenBao with optional auto-unseal, Vault Operator, and unseal key persistence | `secret_management` | ✅ |
| Setup K8s Extensions for the cluster | Install Reflector, Reloader, and External Secrets Operator | `k8s_extensions` | ✅ |
| Setup GitOps for the cluster | Install ArgoCD (with Image Updater + Extensions) or Flux (with Weave GitOps UI) and the optional Kargo promotion engine | `gitops` | ✅ |
| Setup Security for the cluster | Install a policy engine (Kyverno or OPA Gatekeeper) and an identity provider (Dex) for OIDC/OAuth2 authentication | `security` | ✅ |
| Setup Internal Developer Portal (IDP) for the cluster | Install the Backstage developer portal for application catalogue and self-service workflows | `idp` | ✅ |
| Setup Monitoring for the cluster | Install the full LGTM observability stack: kube-prometheus-stack (Prometheus + Grafana), Alloy APM collector, Loki, Tempo, and Pyroscope | `monitoring` | ✅ |
| Setup Service Mesh for the cluster | Install the Istio service mesh for advanced traffic management, mTLS, and observability | `service_mesh` | ✅ |
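Each tag in the table maps onto the `utils` subcommand, so several utilities can be stacked in one run:

```bash
./scripts/setup.sh k3s utils certmanager monitoring gitops
```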
[!IMPORTANT]
Before running any playbook, review and adjust the cluster configuration file at `ansible/inventories/host_vars/master.yaml`. Use the CLI to inspect it without opening a text editor:

```bash
# Interactive table summary
./scripts/setup.sh config show

# Full raw YAML
./scripts/setup.sh config show --raw

# Open in $EDITOR
./scripts/setup.sh config edit
```
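To confirm a single setting without opening the editor, for example the CNI type that drives the playbook tasks above, you can filter the raw output; the `grep` here is just a convenience sketch:

```bash
# k3sCluster.cni.type selects Flannel / Calico / Cilium in the K3s playbooks
./scripts/setup.sh config show --raw | grep -n -A 3 'cni'
```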
Kind cluster (all-in-one):
```bash
# Install common tools + create cluster
./scripts/setup.sh kind setup

# Add utilities after the cluster is up
./scripts/setup.sh kind utils certmanager ingress_test dashboard

# Tear down
./scripts/setup.sh kind destroy
```
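Once setup completes, a quick health check with plain kubectl (Kind names its context `kind-<cluster-name>`, which depends on your config):

```bash
kubectl get nodes -o wide
kubectl get pods -A   # CNI, load balancer, and ingress pods should be Running
```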
K3s standalone (separate master / worker calls):
```bash
# Master first
./scripts/setup.sh k3s setup --host k8s-master-machine

# Then each worker
./scripts/setup.sh k3s setup --host k8s-worker-machine-1

# Add utilities
./scripts/setup.sh k3s utils certmanager gitops

# Tear down
./scripts/setup.sh k3s destroy
```
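To verify the standalone cluster from inside the master VM (K3s bundles its own kubectl):

```bash
# SSH into the master, then use the kubectl bundled with K3s
./scripts/setup.sh vagrant ssh k8s-master-machine
sudo k3s kubectl get nodes
```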
K3s HA cluster (the HA section in `hosts` must be populated):

```bash
# Enable HA in master.yaml first
./scripts/setup.sh config edit

# Bootstrap all HA nodes at once
./scripts/setup.sh k3s ha-setup
```
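After the bootstrap, every control-plane node should have registered; a sketch of the check from the init node:

```bash
# Expect all masters with control-plane/etcd roles and workers as agents
sudo k3s kubectl get nodes -o wide
```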
[!TIP]
Prefer the `--dry-run` flag to preview the exact `ansible-playbook` command that will be executed before committing:

```bash
./scripts/setup.sh k3s setup --host k8s-master-machine --dry-run
```
[!NOTE]
The `examples/lgtm-testing/` directory contains a full-stack demo application designed to showcase and stress-test the LGTM observability stack (Loki · Grafana · Tempo · Prometheus) combined with Pyroscope for continuous profiling. It is the recommended starting point for validating your monitoring setup after running the `monitoring` utility tag.
Architecture
| Component | Role |
|---|---|
| Frontend (Nginx + HTML dashboard) | Trigger test scenarios via UI |
| Backend (FastAPI + OpenTelemetry SDK) | Generates traces, structured logs, and custom metrics |
| PostgreSQL | Persistence layer; produces DB span attributes |
| Pyroscope agent | Continuous CPU/memory flamegraph profiling |
| Alloy | DaemonSet collector; receives OTLP gRPC from the app and routes it to Tempo / Loki / Prometheus |
| Grafana | Unified dashboard; correlate Traces → Logs → Profiles |
Option 1: Docker Compose (local, no cluster required)
```bash
cd examples/lgtm-testing
docker compose up -d --build

open http://localhost:3000       # frontend dashboard
open http://localhost:8000/docs  # FastAPI Swagger UI
```
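A quick smoke test once the containers are up (the endpoint comes from the scenario table below):

```bash
docker compose ps
curl -s http://localhost:8000/api/todos/
```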
Option 2: Deploy into a running Kubernetes cluster
```bash
# Build and push images (or use a local registry for kind)
docker build -t lgtm-testing-backend:latest ./examples/lgtm-testing/backend
docker build -t lgtm-testing-frontend:latest ./examples/lgtm-testing/frontend

# Apply manifests
kubectl apply -f examples/lgtm-testing/k8s/namespace.yaml
kubectl apply -f examples/lgtm-testing/k8s/postgres.yaml
kubectl apply -f examples/lgtm-testing/k8s/backend.yaml
kubectl apply -f examples/lgtm-testing/k8s/frontend.yaml

# Wait for readiness
kubectl -n lgtm-testing wait --for=condition=ready pod \
  -l app.kubernetes.io/part-of=lgtm-testing --timeout=120s
```
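If the target is a Kind cluster, the freshly built images can be side-loaded instead of pushed to a registry (assuming the default cluster name; pass `--name` otherwise):

```bash
kind load docker-image lgtm-testing-backend:latest lgtm-testing-frontend:latest
```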
Test scenarios included
| Scenario | Endpoint | What to verify in Grafana |
|---|---|---|
| Normal CRUD (traces) | `GET/POST /api/todos/` | Tempo: clean span waterfall; Loki: logs with `trace_id` |
| Auth failures | `POST /api/auth/login` with bad creds | Tempo: red error spans; Prometheus: `auth_attempts_total` |
| N+1 slow report | `GET /api/bottleneck/slow-report` | Tempo: many small DB spans; Loki: `duration_ms` warnings |
| CPU-intensive profiling | `GET /api/bottleneck/cpu-intensive` | Pyroscope: `hashlib.sha256` + `_fibonacci` hotspots in flamegraph |
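For example, triggering the N+1 scenario from the shell before looking for the matching spans in Tempo:

```bash
curl -s http://localhost:8000/api/bottleneck/slow-report
```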
Seed test data
```bash
curl -X POST http://localhost:8000/api/seed/
```
[!TIP]
Before deploying on Kubernetes, make sure the `monitoring` utility is already set up so Grafana, Loki, Tempo, and Pyroscope are available to receive telemetry data:

```bash
./scripts/setup.sh kind utils monitoring
# or
./scripts/setup.sh k3s utils monitoring
```
See `examples/lgtm-testing/README.md` for the full architecture diagram, custom metric reference, and Grafana exploration guide.
To install Helm charts from Kubewekend, add the chart repository:

```bash
helm repo add kubewekend https://kubewekend.xeusnguyen.xyz
```
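After adding the repository, refresh the index and list the published charts:

```bash
helm repo update
helm search repo kubewekend
```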
Vagrantfile

[!IMPORTANT]
The repository defines several `Vagrantfile`s covering two types of K8s setups: base and Ceph. To use a specific Vagrantfile, point to it via an environment variable. Explore more at: StackOverflow - Specify Vagrantfile path explicitly, if not plugin
```bash
# Run as usual for the base version (Default: Vagrantfile)
vagrant up name-of-your-machine

# Run a specific Vagrantfile for the CEPH version (Example: Vagrantfile.ceph)
VAGRANT_VAGRANTFILE=Vagrantfile.ceph vagrant up name-of-your-machine
```
Vagrant to configure the VM with provider

[!NOTE]
This lab takes up the topic of playing and practicing with `vagrant`, software that helps you provision virtual machines on your host. It is the first step toward setting up a `kubernetes` cluster inside your machine, which you will play with in the next sessions.
Read full article about session at Kubewekend Session 1: Build up your host with Vagrant
kind

[!NOTE]
This lab practices using Ansible to configure and set up a `kind` cluster inside the machine built in the previous session.
Read full article about session at Kubewekend Session 2: Setup Kind cluster with Ansible
kind cluster

[!NOTE]
This session covers the basic architecture of Kubernetes, its fundamental components, and how they are structured inside clusters.
Read full article about session at Kubewekend Session 3: Basically about Kubernetes architecture
cilium and ebpf - The powerful kernel service of kubewekend cluster

[!NOTE]
This session explores eBPF and its flagship implementations, Cilium and Hubble, which become the main CNI of the Kubewekend cluster, and discusses their observability capabilities.
Read full article about session at Kubewekend Session 4: Learn about ebpf with hubble and cilium
Kubewekend Cluster

[!NOTE]
This session is a pleasant one: we discuss how to create an HA cluster with `kubewekend`, learn more about the components inside `kubernetes`, and figure out `network`, `security`, `configuration`, `container runtime`, and `system` concerns along the way.
Read full article about session at Kubewekend Session 5: Build HA Cluster
[!NOTE]
This session covers the topic of storage inside a `Kubernetes` cluster: how it works with the `CSI` architecture and why we need a `CSI Driver` to handle this. Furthermore, I practice with `Ceph`, one of the popular open-source storage solutions, for the `Kubewekend` cluster.
Read full article about session at Kubewekend 6: CSI and Ceph with Kubewekend
[!NOTE]
This session explores core networking concepts in Kubernetes, guiding you through the setup of new deployments and demonstrating how to expose services for external access using Ingress and the Gateway API. We also delve into External LoadBalancer concepts and the operational nuances of managing them via Cilium NodeIPAM. By the end of this session, you will understand how to bridge the gap between cluster-internal services and external clients using modern, eBPF-powered networking strategies.
Read full article about session at Kubewekend Session 7: Setup new deployment and route traffic to kubewekend cluster
[!NOTE]
This session provides the opportunity to deploy the LGTM stack, the comprehensive observability suite from the Grafana ecosystem. You will gain hands-on experience in correlating logs, metrics, traces, and profiling data to achieve deep-level observability within a Kubernetes environment.
Read full article about session at Kubewekend Session 8: Setting Up the Cluster Monitoring Stack with LGTM and Grafana Alloy
[!NOTE]
This lab takes you on a journey to learn about a new CSI for Kubernetes, `Longhorn`, and introduces a new method for transferring large files over the network via the NFS protocol. I also provide more information about `iSCSI`, `nfs-ganesha`, and the `rdma` technique.
Read full article about session at Kubewekend Session Extra 1: Longhorn and the story about NFS in Kubernetes
[!NOTE]
This article aims to provide you with insights into alternatives for self-hosting a full Kubernetes cluster. Both K3s and RKE2 are strong contenders worth considering to guide your decision. Focusing on the self-hosted approach with RKE2, I want to share more about my experiences working with it over the past four months.
Read full article about session at Kubewekend Session Extra 2: Rebuild Cluster with RKE2 or K3S
[!NOTE]
This article is my story about wrestling with networking in Kubernetes. I'll cover the frustrating problems that arise when your pods can't communicate with services, CoreDNS fails to resolve domains, and the tough issues involving CNI and the ChecksumTX of network interfaces in Kubernetes.
Read full article about session at Kubewekend Session Extra 3: RKE2 and The Nightmare with Network and CoreDNS
[!NOTE]
This article shares my experience setting up a sandbox environment with Kind to adapt new Kubernetes environments within CI/CD pipelines. I'll provide several ideas for running both CPU and GPU applications, demonstrating their behavior specifically within GitLab CI.
Read full article about session at Kubewekend Session Extra 4: Kind and Sandbox environment for GitLab CI