Documentation for Kubernetes v1.11 is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.


Example: Deploying Cassandra with Stateful Sets

This tutorial shows you how to run a cloud native Cassandra deployment on Kubernetes. In this example, a custom Cassandra SeedProvider enables Cassandra to discover new Cassandra nodes as they join the cluster.

Deploying stateful distributed applications, like Cassandra, within a clustered environment can be challenging. StatefulSets greatly simplify this process. Please read about StatefulSets for more information about the features used in this tutorial.

Cassandra Docker

The Pods use the gcr.io/google-samples/cassandra:v13 image from Google's container registry. The image is based on debian-base, includes OpenJDK 8, and provides a standard Cassandra installation from the Apache Debian repo. You can use environment variables to change values that are inserted into cassandra.yaml.

Environment variable       Default value
CASSANDRA_CLUSTER_NAME     'Test Cluster'
CASSANDRA_NUM_TOKENS       32
CASSANDRA_RPC_ADDRESS      0.0.0.0
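
These variables can be overridden in the container spec. A minimal sketch (hypothetical Pod fragment; only the env section matters here, and the values shown are illustrative):

```yaml
# Hypothetical container fragment: override cassandra.yaml values via env vars.
containers:
- name: cassandra
  image: gcr.io/google-samples/cassandra:v13
  env:
  - name: CASSANDRA_CLUSTER_NAME   # replaces the default 'Test Cluster'
    value: "K8Demo"
  - name: CASSANDRA_NUM_TOKENS     # replaces the default 32
    value: "64"
```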

Objectives

  * Create and validate a Cassandra headless Service.
  * Use a StatefulSet to create a Cassandra ring.
  * Validate the StatefulSet.
  * Modify the StatefulSet.
  * Delete the StatefulSet and its Pods.

Before you begin

To complete this tutorial, you should already have a basic familiarity with Pods, Services, and StatefulSets. In addition, you need a running Kubernetes cluster and the example manifest files (cassandra-service.yaml and cassandra-statefulset.yaml) downloaded locally.

Note: Please read the getting started guides if you do not already have a cluster.

Additional Minikube Setup Instructions

Caution: Minikube defaults to 1024MB of memory and 1 CPU, which results in insufficient resource errors during this tutorial.

To avoid these errors, run minikube with:

minikube start --memory 5120 --cpus=4

Creating a Cassandra Headless Service

A Kubernetes Service describes a set of Pods that perform the same task.

The following Service is used for DNS lookups between Cassandra Pods and clients within the Kubernetes Cluster.

  1. Launch a terminal window in the directory where you downloaded the manifest files.
  2. Create a Service to track all Cassandra StatefulSet Nodes from the cassandra-service.yaml file:

    kubectl create -f cassandra-service.yaml

cassandra/cassandra-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra
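
Because clusterIP is None, this is a headless Service: instead of a single virtual IP, DNS resolves the Service name to the individual Pod addresses, and each StatefulSet Pod also gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. A small sketch of the names a three-Pod ring gets (assuming the default namespace):

```shell
# Sketch: the stable DNS names StatefulSet Pods get behind a headless Service.
statefulset=cassandra
service=cassandra
namespace=default
replicas=3
for i in $(seq 0 $((replicas - 1))); do
  echo "${statefulset}-${i}.${service}.${namespace}.svc.cluster.local"
done
```

The first of these names, cassandra-0.cassandra.default.svc.cluster.local, is the value used for CASSANDRA_SEEDS in the StatefulSet manifest below.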

Validating (optional)

Get the Cassandra Service.

kubectl get svc cassandra

The response should be

NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   None         <none>        9042/TCP   45s

If you see anything else, the Service was not created successfully. Read Debug Services for common issues.

Using a StatefulSet to Create a Cassandra Ring

The StatefulSet manifest, included below, creates a Cassandra ring that consists of three Pods.

Note: This example uses the default provisioner for Minikube. Please update the following StatefulSet for the cloud you are working with.
  1. Update the StatefulSet if necessary.
  2. Create the Cassandra StatefulSet from the cassandra-statefulset.yaml file:

    kubectl create -f cassandra-statefulset.yaml

cassandra/cassandra-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
              - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command: 
              - /bin/sh
              - -c
              - nodetool drain
        env:
          - name: MAX_HEAP_SIZE
            value: 512M
          - name: HEAP_NEWSIZE
            value: 100M
          - name: CASSANDRA_SEEDS
            value: "cassandra-0.cassandra.default.svc.cluster.local"
          - name: CASSANDRA_CLUSTER_NAME
            value: "K8Demo"
          - name: CASSANDRA_DC
            value: "DC1-K8Demo"
          - name: CASSANDRA_RACK
            value: "Rack1-K8Demo"
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
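
The StorageClass above targets Minikube's hostpath provisioner. On a cloud provider you would swap in that provider's provisioner; for example, a sketch for GCE SSD persistent disks (assuming the in-tree kubernetes.io/gce-pd provisioner is available in your cluster):

```yaml
# Hypothetical StorageClass for GCE; replaces the minikube-hostpath class above.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```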

Validating The Cassandra StatefulSet

  1. Get the Cassandra StatefulSet:

    kubectl get statefulset cassandra

The response should be

   NAME        DESIRED   CURRENT   AGE
   cassandra   3         0         13s

The StatefulSet resource deploys Pods sequentially.

  2. Get the Pods to see the ordered creation status:

    kubectl get pods -l="app=cassandra"

The response should be

   NAME          READY     STATUS              RESTARTS   AGE
   cassandra-0   1/1       Running             0          1m
   cassandra-1   0/1       ContainerCreating   0          8s

Note: It can take up to ten minutes for all three Pods to deploy.

Once all Pods are deployed, the same command returns:

   NAME          READY     STATUS    RESTARTS   AGE
   cassandra-0   1/1       Running   0          10m
   cassandra-1   1/1       Running   0          9m
   cassandra-2   1/1       Running   0          8m
  3. Run the Cassandra utility nodetool to display the status of the ring:

    kubectl exec cassandra-0 -- nodetool status

The response is:

   Datacenter: DC1-K8Demo
   ======================
   Status=Up/Down
   |/ State=Normal/Leaving/Joining/Moving
   --  Address     Load       Tokens       Owns (effective)  Host ID                               Rack
   UN  172.17.0.5  83.57 KiB  32           74.0%             e2dd09e6-d9d3-477e-96c5-45094c08db0f  Rack1-K8Demo
   UN  172.17.0.4  101.04 KiB  32           58.8%             f89d6835-3a42-4419-92b3-0e62cae1479c  Rack1-K8Demo
   UN  172.17.0.6  84.74 KiB  32           67.1%             a6a1e8c2-3dc5-4417-b1a0-26507af2aaad  Rack1-K8Demo
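
Each UN line is a node that is Up and in the Normal state. As a quick sanity check you can count those lines; the sketch below operates on saved output (with a live cluster, you would pipe kubectl exec cassandra-0 -- nodetool status into grep instead):

```shell
# Sketch: count Up/Normal ("UN") nodes in saved nodetool status output.
status='UN  172.17.0.5  83.57 KiB   32  74.0%  e2dd09e6-d9d3-477e-96c5-45094c08db0f  Rack1-K8Demo
UN  172.17.0.4  101.04 KiB  32  58.8%  f89d6835-3a42-4419-92b3-0e62cae1479c  Rack1-K8Demo
UN  172.17.0.6  84.74 KiB   32  67.1%  a6a1e8c2-3dc5-4417-b1a0-26507af2aaad  Rack1-K8Demo'
up_normal=$(printf '%s\n' "$status" | grep -c '^UN')
echo "$up_normal"   # all three ring members are Up/Normal
```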

Modifying the Cassandra StatefulSet

Use kubectl edit to modify the size of a Cassandra StatefulSet.

  1. Run the following command:

    kubectl edit statefulset cassandra

This command opens an editor in your terminal. The line you need to change is the replicas field.

Note: The following sample is an excerpt of the StatefulSet file.

    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
    kind: StatefulSet
    metadata:
     creationTimestamp: 2016-08-13T18:40:58Z
     generation: 1
     labels:
       app: cassandra
     name: cassandra
     namespace: default
     resourceVersion: "323"
     selfLink: /apis/apps/v1/namespaces/default/statefulsets/cassandra
     uid: 7a219483-6185-11e6-a910-42010a8a0fc0
    spec:
     replicas: 3
  2. Change the number of replicas to 4, and then save the manifest.

The StatefulSet now contains 4 Pods.

  3. Get the Cassandra StatefulSet to verify:

    kubectl get statefulset cassandra

The response should be

   NAME        DESIRED   CURRENT   AGE
   cassandra   4         4         36m

Cleaning up

Deleting or scaling down a StatefulSet does not delete the volumes associated with the StatefulSet. This is a safety measure: your data is more valuable than an automatic purge of all related StatefulSet resources.
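
Whether deleting a PersistentVolumeClaim also deletes the underlying volume is governed by the StorageClass reclaim policy. A sketch of a class that keeps volumes around (the fast-retain name is hypothetical; the default policy for dynamically provisioned volumes is Delete):

```yaml
# Hypothetical StorageClass whose PersistentVolumes survive PVC deletion.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-retain
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain   # default for dynamic provisioning is Delete
```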

Warning: Depending on the storage class and reclaim policy, deleting the Persistent Volume Claims may cause the associated volumes to also be deleted. Never assume you’ll be able to access data if its volume claims are deleted.
  1. Run the following commands to delete everything in a StatefulSet:

    grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
      && kubectl delete statefulset -l app=cassandra \
      && echo "Sleeping $grace" \
      && sleep $grace \
      && kubectl delete pvc -l app=cassandra

  2. Run the following command to delete the Cassandra Service.

    kubectl delete service -l app=cassandra

What's next
