Install K8ssandra on your local K8s
Tip: This topic is specific to K8ssandra 1.4.x. Consider exploring our most recent (and recommended) implementation: K8ssandra Operator. It includes a K8ssandraCluster custom resource and supports single- and multi-cluster Cassandra deployments in Kubernetes for High Availability (HA) capabilities. See the K8ssandra Operator documentation.
This topic gets you up and running with a single-node Apache Cassandra® cluster on Kubernetes (K8s).
If you want to install K8ssandra on a cloud provider’s Kubernetes environment, see:
- K8ssandra installs on Azure Kubernetes Service (AKS)
- K8ssandra installs on DigitalOcean Managed Kubernetes Service (DOKS)
- K8ssandra installs on Amazon Elastic Kubernetes Service (EKS)
- K8ssandra installs on Google Kubernetes Engine (GKE)
Tip
Also available in follow-up topics are post-install steps and role-based considerations for developers or Site Reliability Engineers (SREs).
Introduction
In this quickstart for installing K8ssandra in your local development environment, we’ll cover:
- Prerequisites: Required supporting software including resource recommendations.
- K8ssandra Helm repository configuration: Accessing the Helm charts that install K8ssandra.
- K8ssandra installation: Getting K8ssandra up and running locally using the Helm chart repo.
- Verifying K8ssandra functionality: Making sure K8ssandra is working as expected.
- Retrieve K8ssandra superuser credentials: Getting the K8ssandra superuser name and password so you can access common utilities as well as the Stargate API.
Prerequisites
In your local environment, the following tools are required to provision a K8ssandra cluster: a Kubernetes (K8s) cluster to target, the kubectl command-line tool, and Helm v3+ (covered below in Configure the K8ssandra Helm repository).
Because K8ssandra deploys on a K8s cluster, one must be available to target for installation. The K8s environment may be a local version running on your development machine, an on-premises self-hosted environment, or a managed cloud offering.
K8ssandra works with the following versions of Kubernetes either standalone or via a cloud provider:
- 1.17 (minimum supported version)
- 1.18
- 1.19
- 1.20
- 1.21.1 (current upper bound of K8ssandra testing)
To verify your K8s server version:
kubectl version
Output:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.3", GitCommit:"01849e73f3c86211f05533c2e807736e776fcf29", GitTreeState:"clean", BuildDate:"2021-02-18T12:10:55Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.16", GitCommit:"7a98bb2b7c9112935387825f2fce1b7d40b76236", GitTreeState:"clean", BuildDate:"2021-02-17T11:52:32Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Your K8s server version is the combination of the Major: and Minor: key/value pairs following Server Version: in the output; in the example above, that's 1.18.
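If you only need the version numbers, kubectl also accepts a condensed form; for example (output abbreviated, your versions will differ):
kubectl version --short
Output:
Client Version: v1.20.3
Server Version: v1.18.16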
If you don’t have a K8s cluster available, you can use OpenShift CodeReady Containers (which run within a VM) or a local distribution that runs within Docker, such as Minikube with the Docker driver, kind, or K3d.
The instructions in this section focus on the Docker container solutions, but the general steps should work for other environments as well.
Resource recommendations for local Kubernetes installations
We recommend a machine with no less than 16 GB of RAM and 8 virtual processor cores (4 physical cores), and you’ll want to adjust your Docker resource preferences accordingly. For this quickstart we’re allocating 4 virtual processors and 8 GB of RAM to the Docker environment.
Tip
See the documentation for your particular flavor of Docker for instructions on configuring resource limits.
The following Minikube example creates a K8s cluster running K8s version 1.18.16 with 4 virtual processor cores and 8 GB of RAM:
minikube start --cpus=4 --memory='8128m' --kubernetes-version=1.18.16
Output:
minikube v1.17.1 on Darwin 11.2.1
Automatically selected the docker driver. Other choices: hyperkit, ssh
Starting control plane node k8ssandra in cluster k8ssandra
Creating docker container (CPUs=4, Memory=8128MB) ...
Preparing Kubernetes v1.18.16 on Docker 20.10.2 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
Verifying Kubernetes components...
Enabled addons: storage-provisioner, default-storageclass
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
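If you’d rather use kind than Minikube, a roughly equivalent cluster can be created with the command below. This is a minimal sketch: kind takes no per-cluster CPU or memory flags (it uses whatever resources you have allocated to Docker), the Kubernetes version is determined by the node image, and the cluster name k8ssandra is just an example:
kind create cluster --name k8ssandra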
Verify your Kubernetes environment
To verify your Kubernetes environment:
- Verify that your K8s instance is up and running in the READY status:
kubectl get nodes
Output:
NAME        STATUS   ROLES    AGE   VERSION
k8ssandra   Ready    master   21m   v1.18.16
Validate the available Kubernetes StorageClasses
Your K8s instance must support a storage class with a VOLUMEBINDINGMODE of WaitForFirstConsumer.
To list the available K8s storage classes for your K8s instance:
kubectl get storageclasses
Output:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) k8s.io/minikube-hostpath Delete Immediate false 2m25s
If you don’t have a storage class with a VOLUMEBINDINGMODE of WaitForFirstConsumer (as in the Minikube example above), you can install the Rancher Local Path Provisioner:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
Output:
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
Rechecking the available storage classes, you should see that a new local-path storage class is available with the required VOLUMEBINDINGMODE of WaitForFirstConsumer:
kubectl get storageclasses
Output:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path rancher.io/local-path Delete WaitForFirstConsumer false 3s
standard (default) k8s.io/minikube-hostpath Delete Immediate false 39s
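The sample configuration later in this topic sets storageClass: local-path explicitly, so this step is optional. If you prefer to rely on the default storage class instead, you can mark local-path as the default using the standard Kubernetes annotation:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'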
Configure the K8ssandra Helm repository
K8ssandra is delivered via a collection of Helm charts for easy installation, so once you’ve got a suitable K8s environment configured, you’ll need to add the K8ssandra Helm chart repositories.
To add the K8ssandra Helm chart repos:
- Install Helm v3+ if you haven’t already.
- Add the main K8ssandra stable Helm chart repo:
helm repo add k8ssandra https://helm.k8ssandra.io/stable
- If you want to access K8ssandra services from outside of the Kubernetes cluster, also add the Traefik Ingress repo:
helm repo add traefik https://helm.traefik.io/traefik
- Finally, update your Helm repository listing:
helm repo update
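To confirm that the repositories were added and that the K8ssandra charts are visible, you can list the configured repositories and search them, for example:
helm repo list
helm search repo k8ssandra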
Tip
Alternatively, you can download the individual charts directly from the project’s releases page.
Install K8ssandra
The K8ssandra Helm charts make installation a snap. You can override chart configurations during installation as necessary if you’re an advanced user, or make changes after a default installation using helm upgrade at a later time.
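For example, after changing values in the k8ssandra.yaml file described below, an existing release could be updated with a command along these lines (the release name k8ssandra matches the install step later in this topic):
helm upgrade -f k8ssandra.yaml k8ssandra k8ssandra/k8ssandra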
K8ssandra 1.3.0 can install the following versions of Apache Cassandra:
- 3.11.7
- 3.11.8
- 3.11.9
- 3.11.10
- 4.0.0 (default)
Important
K8ssandra comes out of the box with a set of default values tailored to getting up and running quickly. Those defaults are intended to be a great starting point for smaller-scale local development but are not intended for production deployments.
To install a single-node K8ssandra cluster:
- Copy the following YAML to a file named k8ssandra.yaml:
cassandra:
  version: "3.11.10"
  cassandraLibDirVolume:
    storageClass: local-path
    size: 5Gi
  allowMultipleNodesPerWorker: true
  heap:
    size: 1G
    newGenSize: 1G
  resources:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  datacenters:
  - name: dc1
    size: 1
    racks:
    - name: default
kube-prometheus-stack:
  grafana:
    adminUser: admin
    adminPassword: admin123
stargate:
  enabled: true
  replicas: 1
  heapMB: 256
  cpuReqMillicores: 200
  cpuLimMillicores: 1000
That configuration file creates a K8ssandra cluster with a datacenter, dc1, containing a single Cassandra node (size: 1) running version 3.11.10 with the following specifications:
- 1 GB of heap
- 2 GB of RAM for the container
- 1 CPU core
- 5 GB of storage
- 1 Stargate node with:
  - 1 CPU core
  - 256 MB of heap
Important
The storageClass: parameter must be a storage class with a VOLUMEBINDINGMODE of WaitForFirstConsumer, as described in Validate the available Kubernetes StorageClasses.
- Use helm install to install K8ssandra, pointing to the example configuration file using the -f flag:
helm install -f k8ssandra.yaml k8ssandra k8ssandra/k8ssandra
Output:
NAME: k8ssandra
LAST DEPLOYED: Thu Feb 18 10:05:44 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
Tip
In the example above, the K8ssandra pods will have the cluster name k8ssandra prefixed or appended inline.
Note
When installing K8ssandra on newer versions of Kubernetes (v1.19+), some warnings may be visible on the command line related to deprecated API usage. This is currently a known issue and will not impact the provisioning of the cluster.
W0128 11:24:54.792095 27657 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
For more information, check out issue #267.
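If you want to confirm that Helm recorded the release, you can list releases or check the release status at any time, for example:
helm list
helm status k8ssandra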
Verify your K8ssandra installation
Depending upon your K8s configuration, initialization of your K8ssandra installation can take a few minutes. To check the status of your K8ssandra deployment, use the kubectl get pods command:
kubectl get pods
Output:
NAME READY STATUS RESTARTS AGE
k8ssandra-cass-operator-766849b497-klgwf 1/1 Running 0 7m33s
k8ssandra-dc1-default-sts-0 2/2 Running 0 7m5s
k8ssandra-dc1-stargate-5c46975f66-pxl84 1/1 Running 0 7m32s
k8ssandra-grafana-679b4bbd74-wj769 2/2 Running 0 7m32s
k8ssandra-kube-prometheus-operator-85695ffb-ft8f8 1/1 Running 0 7m32s
k8ssandra-reaper-655fc7dfc6-n9svw 1/1 Running 0 4m52s
k8ssandra-reaper-operator-79fd5b4655-748rv 1/1 Running 0 7m33s
k8ssandra-reaper-schema-dxvmm 0/1 Completed 0 5m3s
prometheus-k8ssandra-kube-prometheus-prometheus-0 2/2 Running 1 7m27s
The K8ssandra pods in the example above have the identifier k8ssandra either prefixed or inline, since that’s the name that was specified when the cluster was created using Helm. If you chose a different cluster name during installation, your pod names will be different.
The actual Cassandra node name from the listing above is k8ssandra-dc1-default-sts-0, which we’ll use throughout the following sections.
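If you used a different cluster name, one way to locate the Cassandra pods regardless of name is to filter by label, substituting your own cluster name for k8ssandra; the label shown here is assumed based on the defaults applied by cass-operator:
kubectl get pods -l cassandra.datastax.com/cluster=k8ssandra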
Verify the following:
- The K8ssandra pod running Cassandra, k8ssandra-dc1-default-sts-0 in the example above, should show 2/2 as Ready.
- The Stargate pod, k8ssandra-dc1-stargate-5c46975f66-pxl84 in the example above, should show 1/1 as Ready.
Important
- The Stargate pod will not show Ready until at least 4 minutes have elapsed.
- The pod k8ssandra-reaper-schema-xxxxx runs once as part of a job and does not persist.
Once all the pods are in the Running or Completed state, you can check the health of your K8ssandra cluster. There must be no Pending pods.
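If you’d rather watch the pods converge than poll manually, kubectl can stream status changes (press Ctrl+C to stop watching):
kubectl get pods -w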
To check the health of your K8ssandra cluster:
- Verify the name of the Cassandra datacenter:
kubectl get cassandradatacenters
Output:
NAME   AGE
dc1    51m
- Confirm that the Cassandra operator for the datacenter is Ready:
kubectl describe CassandraDataCenter dc1 | grep "Cassandra Operator Progress:"
Output:
Cassandra Operator Progress: Ready
- Verify the list of available services:
kubectl get services
Output:
NAME                                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                 AGE
cass-operator-metrics                  ClusterIP   10.80.3.92     <none>        8383/TCP,8686/TCP                                       47m
k8ssandra-dc1-all-pods-service         ClusterIP   None           <none>        9042/TCP,8080/TCP,9103/TCP                              47m
k8ssandra-dc1-service                  ClusterIP   None           <none>        9042/TCP,9142/TCP,8080/TCP,9103/TCP,9160/TCP            47m
k8ssandra-dc1-stargate-service         ClusterIP   10.80.13.197   <none>        8080/TCP,8081/TCP,8082/TCP,8084/TCP,8085/TCP,9042/TCP   47m
k8ssandra-grafana                      ClusterIP   10.80.7.168    <none>        80/TCP                                                  47m
k8ssandra-kube-prometheus-operator     ClusterIP   10.80.8.109    <none>        443/TCP                                                 47m
k8ssandra-kube-prometheus-prometheus   ClusterIP   10.80.2.44     <none>        9090/TCP                                                47m
k8ssandra-reaper-reaper-service        ClusterIP   10.80.5.77     <none>        8080/TCP                                                47m
k8ssandra-seed-service                 ClusterIP   None           <none>        <none>                                                  47m
kubernetes                             ClusterIP   10.80.0.1      <none>        443/TCP                                                 47m
prometheus-operated                    ClusterIP   None           <none>        9090/TCP                                                47m
Verify that services with the following name suffixes are present:
- -all-pods-service
- -dc1-service
- -stargate-service
- -seed-service
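The Stargate APIs are not reachable from outside the cluster unless you configured the Traefik ingress mentioned earlier. For quick local testing, one option is to port-forward the Stargate service from the listing above; the port numbers are taken from that listing, and 8080, 8081, and 8082 are assumed to be the GraphQL, auth, and REST endpoints respectively:
kubectl port-forward svc/k8ssandra-dc1-stargate-service 8080 8081 8082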
Retrieve K8ssandra superuser credentials
You’ll need the K8ssandra superuser name and password in order to access Cassandra utilities and do things like generate a Stargate access token.
To retrieve K8ssandra superuser credentials:
- Retrieve the K8ssandra superuser name:
kubectl get secret k8ssandra-superuser -o jsonpath="{.data.username}" | base64 --decode ; echo
Output:
k8ssandra-superuser
- Retrieve the K8ssandra superuser password:
kubectl get secret k8ssandra-superuser -o jsonpath="{.data.password}" | base64 --decode ; echo
Output:
PGo8kROUgAJOa8vhjQrE49Lgruw7s32HCPyVvcfVmmACW8oUhfoO9A
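With the credentials in hand you can, for example, open a cqlsh session directly against the Cassandra pod identified earlier. This is a minimal sketch: it assumes the Cassandra container within the pod is named cassandra, and you should substitute the password value you just retrieved:
kubectl exec -it k8ssandra-dc1-default-sts-0 -c cassandra -- cqlsh -u k8ssandra-superuser -p <superuser-password>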
Tip
Save the superuser name and password for use in the Quickstarts, if you decide to follow those steps.
Next steps
- If you’re a developer, and you’d like to get started coding using CQL or Stargate, see the Quickstart for developers.
- If you’re a Site Reliability Engineer, and you’d like to explore the K8ssandra administration environment including monitoring and maintenance utilities, see the Quickstart for Site Reliability Engineers.
For K8ssandra installation details that are specific to cloud providers, see:
- K8ssandra installs on Amazon Elastic Kubernetes Service (EKS)
- K8ssandra installs on DigitalOcean Kubernetes (DOKS)
- K8ssandra installs on Google Kubernetes Engine (GKE)
- K8ssandra installs on Microsoft Azure Kubernetes Service (AKS)