Elassandra Operator Google k8s Marketplace


This repository contains instructions and files necessary for running the Elassandra operator via Google's Hosted Kubernetes Marketplace.

NOTE: This operator is currently in Beta version.

Overview

As shown in the figure below, Elassandra nodes are deployed as a Kubernetes StatefulSet and expose two Kubernetes services, one for Apache Cassandra and one for Elasticsearch. The operator container uses a "Datacenter" Custom Resource Definition (CRD) to keep track of the Elassandra cluster state. (Note: the CRD must be created before the application is deployed.)

[Figure: Elassandra on Kubernetes]

Installation

Prerequisites

Set up command line tools

You'll need the following tools in your development environment:

  • gcloud
  • kubectl
  • docker
  • make

Configure gcloud as a Docker credential helper:

gcloud auth configure-docker

Create a Google Kubernetes Engine cluster

export CLUSTER=elassandra-operator-cluster
export ZONE=europe-west1-b

gcloud container clusters create "$CLUSTER" --zone "$ZONE"

Configure kubectl to connect to the new cluster:

gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE"
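
To confirm that kubectl now targets the new cluster, you can run a quick sanity check:

# verify the active context and list the cluster nodes
kubectl config current-context
kubectl get nodes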

Install the Elassandra Datacenter custom resource

To allow the deployment of a Datacenter instance, the CRD must be created before the application is installed.

Note: you only need to run this command once.

kubectl apply -f https://raw.githubusercontent.com/strapdata/elassandra-operator-google-k8s-marketplace/master/crd/datacenter-crd.yaml
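
To check that the CRD has been registered with the API server (the exact resource and kind names come from the manifest above), you can list the cluster's custom resources:

# the Datacenter CRD should appear in both listings
kubectl get crd
kubectl api-resources | grep -i datacenter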

Quick install with Google Cloud Marketplace

Get up and running with a few clicks! Install the Elassandra Operator app on a Google Kubernetes Engine cluster using Google Cloud Marketplace, and follow the on-screen instructions.

Install using the build tools

Open in Cloud Shell

Setup the GKE environment

Refer to setup-k8s.sh for instructions. These steps are only required when standing up a new cluster to test the code in this repository.

Build the container images

The make task app/build builds two container images:

  • a deployer image that packages the GKE manifest
  • a tester image that packages the integration tests

export TAG=6.2.3.22
make app/build
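
To sanity-check the build, you can list the freshly tagged images (the image names depend on the registry settings in the Makefile, so grepping on the tag is just a convenient filter):

# list locally built images carrying the release tag
docker images | grep "$TAG"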

Install the application

The make task app/install simulates a Google Marketplace environment and deploys the Elassandra Operator application.

make app/install

Once deployed, the application will appear in the Google Cloud Console.
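
You can also confirm the deployment from the command line by querying the Application resource created by the deployer:

kubectl get application --namespace "${NAMESPACE}"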

To stop and delete the application, use the make task app/uninstall. You also need to delete the additional resources deployed by the operator:

make app/uninstall

# show StatefulSets
kubectl get sts --namespace="${NAMESPACE}" -l app=elassandra,app.kubernetes.io/managed-by=elassandra-operator

# If all StatefulSets can be safely deleted (depending on your cluster usage), you can run
# kubectl delete --namespace="${NAMESPACE}" $(kubectl get sts --namespace="${NAMESPACE}" -l app=elassandra,app.kubernetes.io/managed-by=elassandra-operator -o name | xargs)

# show PersistentVolumeClaims
kubectl get pvc --namespace="${NAMESPACE}" -l app=elassandra,app.kubernetes.io/managed-by=elassandra-operator

# If all PVCs can be safely deleted (depending on your cluster usage), you can run
# kubectl delete --namespace="${NAMESPACE}" $(kubectl get pvc --namespace="${NAMESPACE}" -l app=elassandra,app.kubernetes.io/managed-by=elassandra-operator -o name | xargs)

# show all services
kubectl get services --namespace="${NAMESPACE}" -l app=elassandra,app.kubernetes.io/managed-by=elassandra-operator

# If all services can be safely deleted (depending on your cluster usage), you can run
# kubectl delete --namespace="${NAMESPACE}" $(kubectl get services --namespace="${NAMESPACE}" -l app=elassandra,app.kubernetes.io/managed-by=elassandra-operator -o name | xargs)

Configure the application

The schema.yaml file contains the parameters available to the GKE end user.

To specify values for these parameters, you can either define environment variables or edit the Makefile:

APP_PARAMETERS ?= { \
  "APP_INSTANCE_NAME": "$(NAME)", \
  "NAMESPACE": "$(NAMESPACE)", \
  "APP_IMAGE": "$(APP_MAIN_IMAGE)" \
}

For instance, if you wish to increase the disk size:

APP_PARAMETERS ?= { \
  "APP_INSTANCE_NAME": "$(NAME)", \
  "NAMESPACE": "$(NAMESPACE)", \
  "APP_IMAGE": "$(APP_MAIN_IMAGE)", \
  "config_data_volume_storage_size": "512Gi" \
}
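
Since the Makefile assigns APP_PARAMETERS with ?=, you can also supply the value from the environment instead of editing the file. A minimal sketch, using the parameter names shown above:

# environment assignment takes precedence over the Makefile's ?= default
export APP_PARAMETERS='{ "APP_INSTANCE_NAME": "elassandra-operator", "NAMESPACE": "default", "config_data_volume_storage_size": "512Gi" }'
make app/install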

Running Tests

kubectl apply -f https://raw.githubusercontent.com/strapdata/elassandra-operator-google-k8s-marketplace/master/apptest/additional-deployer-role.yaml
make app/verify --additional_deployer_role=operator-deployer-extrarole

The app/verify target, like many others, is provided by Google's marketplace tools repo; see app.Makefile in that repo for full details.

Getting started with Elassandra & Elassandra Operator

Set up your GKE environment

Set up your environment as described in the GKE quickstart:

gcloud config set project <your-gcp-project>
gcloud config set compute/zone <your-zone>
gcloud container clusters get-credentials <your-gke-cluster-name>

Set environment variables according to your cluster

Set up the following environment variables in accordance with your deployment:

export NAMESPACE=default
# export CLUSTER_NAME as defined in the schema.yaml (in this example cluster1)
export CLUSTER_NAME=cluster1
export APP_INSTANCE_NAME=elassandra-operator
export ELASSANDRA_POD=$(kubectl get pods -n $NAMESPACE -l app=elassandra,app.kubernetes.io/managed-by=elassandra-operator -o jsonpath='{.items[0].metadata.name}')
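
Before going further, a quick check that the pod lookup succeeded:

# should print the name of the first Elassandra pod
echo "$ELASSANDRA_POD"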

Accessing Cassandra

Check your Cassandra cluster status by running the following commands:

kubectl exec -it "$ELASSANDRA_POD" --namespace "$NAMESPACE" -c elassandra -- bash
cassandra$> source /usr/shared/cassandra/aliases.sh
cassandra$> nodetool status

Connect to Cassandra using cqlsh:

# retrieve the cassandra user password
CASS_PASSWORD=$(kubectl get secrets elassandra-${CLUSTER_NAME} -o yaml | grep "cassandra.cassandra_password" | cut -f2 -d':' | tr -d ' ' | base64 -d)
kubectl exec -it "$ELASSANDRA_POD" --namespace "$NAMESPACE" -c elassandra -- cqlsh -u cassandra -p ${CASS_PASSWORD}

Accessing Elasticsearch

Check Elasticsearch cluster state and list of indices:

CASS_PASSWORD=$(kubectl get secrets elassandra-${CLUSTER_NAME} -o yaml | grep "cassandra.cassandra_password" | cut -f2 -d':' | tr -d ' ' | base64 -d)
kubectl exec -it "$ELASSANDRA_POD" --namespace "$NAMESPACE" -c elassandra -- curl -u"cassandra:${CASS_PASSWORD}" https://localhost:9200/_cluster/state?pretty
kubectl exec -it "$ELASSANDRA_POD" --namespace "$NAMESPACE" -c elassandra -- curl -u"cassandra:${CASS_PASSWORD}" https://localhost:9200/_cat/indices?v

Add a JSON document:

kubectl exec -it "$ELASSANDRA_POD" --namespace "$NAMESPACE" -c elassandra -- curl -XPUT -H "Content-Type: application/json" -u"cassandra:${CASS_PASSWORD}" https://localhost:9200/test/mytype/1 -d '{ "foo":"bar" }'

Accessing Elassandra using the headless service

A headless service creates a DNS record for each Elassandra pod. For instance:

$ELASSANDRA_POD.elassandra-${CLUSTER_NAME}-${DATACENTER_NAME}.default.svc.cluster.local

Where CLUSTER_NAME and DATACENTER_NAME must be replaced by the values set in schema.yaml for the config_cluster_name and config_datacenter variables.

Clients running inside the same k8s cluster can use these records to access the CQL, Elasticsearch HTTP, Elasticsearch transport, and JMX protocols.
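
For example, you can resolve one of these records from a short-lived pod inside the cluster (the dnsutils image and the cluster1/DC1 values below are illustrative, substitute your own):

kubectl run -it --rm dnsutils --image=tutum/dnsutils --restart=Never -- nslookup "${ELASSANDRA_POD}.elassandra-cluster1-DC1.default.svc.cluster.local"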

Accessing Elassandra with port forwarding

A local proxy can also be used to access the service.

Run the following command in a separate background terminal:

kubectl port-forward "$ELASSANDRA_POD" 9042:39042 9200:9200 --namespace "$NAMESPACE"

On your main terminal (requires the curl and cqlsh commands):

curl -u"cassandra:${CASS_PASSWORD}" "https://localhost:9200"
cqlsh -u cassandra -p ${CASS_PASSWORD} --cqlversion=3.4.4
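
If cqlsh is not installed on your workstation, one option is the Python distribution of the client from PyPI (shown here as a hint; any Cassandra client package providing cqlsh works):

pip install cqlsh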

Uninstall the Application

Using the Google Cloud Platform Console

  1. In the GCP Console, open Kubernetes Applications.

  2. From the list of applications, click Elassandra Operator.

  3. On the Application Details page, click Delete.

Once the application is removed, you may have to delete all resources created by the operator. See Delete the elassandra nodes, kibana and reaper and Delete the persistent volumes of your installation below.

Using the command line

Prepare the environment

Set up your installation name and Kubernetes namespace:

export APP_INSTANCE_NAME=elassandra-operator
export NAMESPACE=default

Delete the resources

NOTE: It is recommended to use a kubectl version that matches the version of your cluster; using matching versions helps avoid unforeseen issues.

To delete the Application resource created by the deployer:

kubectl delete application --namespace $NAMESPACE $APP_INSTANCE_NAME

Delete the elassandra nodes, kibana and reaper

The operator creates additional resources according to your Datacenter instance. You have to delete them yourself:

# delete the Kibana deployment
kubectl delete deploy \
  --namespace $NAMESPACE \
  --selector app.kubernetes.io/managed-by=elassandra-operator,app=kibana

# delete the Reaper deployment
kubectl delete deploy \
  --namespace $NAMESPACE \
  --selector app.kubernetes.io/managed-by=elassandra-operator,app=reaper

# delete the Elassandra nodes
kubectl delete sts \
  --namespace $NAMESPACE \
  --selector app.kubernetes.io/managed-by=elassandra-operator,app=elassandra

Delete the persistent volumes of your installation

By design, removing a StatefulSet in Kubernetes does not remove the PersistentVolumeClaims that were attached to its Pods. This prevents your installation from accidentally deleting stateful data.

To remove the PersistentVolumeClaims with their attached persistent disks, run the following kubectl commands:

for pv in $(kubectl get pvc --namespace $NAMESPACE \
  --selector app.kubernetes.io/name=$APP_INSTANCE_NAME \
  --output jsonpath='{.items[*].spec.volumeName}');
do
  kubectl delete pv/$pv --namespace $NAMESPACE
done

kubectl delete persistentvolumeclaims \
  --namespace $NAMESPACE \
  --selector app.kubernetes.io/managed-by=elassandra-operator
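
To verify that no claims remain after the cleanup:

kubectl get pvc --namespace $NAMESPACE --selector app.kubernetes.io/managed-by=elassandra-operator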

Delete the GKE cluster

Optionally, if you don't need the deployed application or the GKE cluster, you can delete the cluster using the following command:

gcloud container clusters delete "$CLUSTER" --zone "$ZONE"
