Authenticating to the cluster + managing kubeconfig
You need to talk to the cluster from your machine. That means kubectl plus a kubeconfig — a small YAML file that holds the cluster's API server URL, its CA certificate, and your credentials (a client cert or token). Without it, every kubectl command just fails with a connection error.
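For orientation, this is roughly what that file looks like. The field names are standard kubeconfig structure; the server URL, names, and cert data below are placeholders, not this cluster's real values:

```yaml
# Minimal kubeconfig shape (placeholder values, not real credentials)
apiVersion: v1
kind: Config
clusters:
- name: ecnv4
  cluster:
    server: https://203.0.113.10:6443    # API server URL (placeholder)
    certificate-authority-data: LS0t...  # base64-encoded CA cert (truncated)
users:
- name: ecnv4-admin
  user:
    client-certificate-data: LS0t...     # or a `token:` field instead
    client-key-data: LS0t...
contexts:
- name: ecnv4
  context:
    cluster: ecnv4
    user: ecnv4-admin
current-context: ecnv4
```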
This page covers how to get that file, merge it into your local setup, and switch between contexts when you manage more than one cluster.
Where the kubeconfig comes from
There are two paths:
1. Joining the existing cluster (the common case). Ask a teammate, or pull the latest kubeconfig from the internal Bitwarden vault — there's a dedicated entry for the ecnv4 admin kubeconfig. Save it somewhere under your home directory, e.g. ~/kubeconfigs/ecnv4.yml, with tight permissions (chmod 600). This is how every current operator gets their kubeconfig.
2. Fresh cluster bootstrap (very rare — historical). The cluster was originally brought up by the Ansible setup_rke2.yml playbook, which writes an admin kubeconfig to ansible/playbooks/tmp/rke.yml. We don't run this for day-to-day operations — new nodes join the existing cluster via Terraform cloud-init, not a fresh bootstrap.
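For path 1, the save-with-tight-permissions step looks like this (the path matches the suggestion above; on macOS the last command is `stat -f '%Lp'` instead):

```shell
# Create a home for kubeconfigs and lock the file down before pasting secrets in.
mkdir -p ~/kubeconfigs
touch ~/kubeconfigs/ecnv4.yml          # then paste the Bitwarden entry's contents in
chmod 600 ~/kubeconfigs/ecnv4.yml      # owner read/write only
stat -c '%a' ~/kubeconfigs/ecnv4.yml   # Linux: prints 600
```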
Ansible playbooks are not actively maintained
The bootstrap playbook still exists under ansible/playbooks/ but isn't kept in sync with day-to-day practice. Verify the playbook matches the current cluster state before running, and prefer Terraform-based or operator-based paths (Kured, cluster-autoscaler) where they apply. See Ansible playbooks (legacy) for context.
The kubeconfig contains an admin token
Treat it like any other credential: chmod 600, never commit it, never paste it into chat. Anyone with that file has full cluster admin.
Merging into your local kubeconfig
kubectl reads ~/.kube/config by default. If that file already exists (because you manage other clusters), merge the new one in rather than overwriting:
```shell
KUBECONFIG=~/.kube/config:~/kubeconfigs/ecnv4.yml \
  kubectl config view --flatten > ~/.kube/config.new \
  && mv ~/.kube/config.new ~/.kube/config \
  && chmod 600 ~/.kube/config
```

`--flatten` inlines the CA cert and client cert data so the resulting file is self-contained. If ~/.kube/config doesn't exist yet, just copy the new file into place:
```shell
mkdir -p ~/.kube && cp ~/kubeconfigs/ecnv4.yml ~/.kube/config && chmod 600 ~/.kube/config
```

Naming the context something you'll recognise
After merging, the context name from the source file lands in your config verbatim — for RKE2-generated configs this is usually default, which collides with anything else called default. Rename it to something memorable:
```shell
kubectl config rename-context default ecnv4
```

Pick whatever name makes sense to you — ecnv4, ecnv4-prod, hetzner-prod, mac-home. The rest of this manual does not hard-code a name; it assumes the cluster you care about is your current-context.
Switching contexts
List what you have, see which one is active (marked with *), and switch:
```shell
kubectl config get-contexts
kubectl config use-context ecnv4
kubectl config current-context
```

If you bounce between clusters a lot, install kubectx and kubens — they turn context/namespace switching into one-word commands:
| Platform | Install |
|---|---|
| macOS (Homebrew) | brew install kubectx |
| Debian/Ubuntu | sudo apt install kubectx |
| Arch/Manjaro | sudo pacman -S kubectx |
| Windows (scoop) | scoop install kubectx |
Usage: `kubectx ecnv4`, `kubens ecommercen-clients-wecare`.
Verifying it works
```shell
kubectl cluster-info
kubectl get nodes -o wide
```

You should see the API server URL responding and all nodes Ready. If you instead see a TLS error or `Unable to connect to the server`, double-check that you picked the right context and that your machine can reach the API server (VPN, firewall, Cloudflare Zero Trust if you route through it).
Why this manual doesn't specify --context ecnv4
Different team members name their contexts differently — mac-home, work-laptop, ecnv4-prod, default, whatever. The commands in these runbooks assume you've set the cluster you care about as your current-context, so a bare kubectl ... works.
If you juggle multiple clusters and don't want to rely on current-context, use one of:
```shell
kubectl --context <yours> get pods                    # per-command override
KUBECONFIG=~/kubeconfigs/ecnv4.yml kubectl get pods   # per-shell override
```

Local override for wrapper scripts
The untracked/scripts/argo.sh wrapper and a handful of helper scripts honour a KUBECTL_CONTEXT environment variable. If your context is named something other than the default, set it once in local.env (gitignored, at the repo root):
```shell
# local.env
export KUBECTL_CONTEXT=mac-home
```

You can also put personal Claude Code preferences in CLAUDE.local.md — also gitignored, also read by the tooling. See Prerequisites → Local Overrides.
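To illustrate the pattern, here is a sketch of how a wrapper can honour that variable. This is not the actual contents of argo.sh; the `kctl` name is made up, and the sketch assumes local.env has already been sourced (e.g. `. ./local.env`):

```shell
# Hypothetical helper: prefix kubectl with --context when KUBECTL_CONTEXT is set,
# otherwise fall through to the user's current-context.
kctl() {
  if [ -n "${KUBECTL_CONTEXT:-}" ]; then
    kubectl --context "$KUBECTL_CONTEXT" "$@"
  else
    kubectl "$@"
  fi
}
```

With KUBECTL_CONTEXT=mac-home exported, `kctl get pods` runs `kubectl --context mac-home get pods`; unset, it runs plain `kubectl get pods`.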
Further reading
- Upstream docs: Configure Access to Multiple Clusters
- Prerequisites — installing `kubectl` itself
- Daily checks — the first thing you'll run once your kubeconfig is working