# Gist by @hansbala (last active June 16, 2020)
# This script is to be run on the instance that controls the Kubernetes cluster
# Create a new cluster (will take like 5-10 mins)
kops create cluster mycluster.k8s.local --zones us-east-2a --yes
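`kops create cluster` returns before the nodes are actually Ready, so it helps to poll `kops validate cluster` before deploying anything. A minimal sketch (the helper name, retry count, and interval are my own choices, not part of the original workflow):

```shell
# Poll `kops validate cluster` until it succeeds or we give up.
wait_for_cluster() {
  retries=${1:-30}   # ~15 minutes at 30s per attempt
  i=0
  while ! kops validate cluster mycluster.k8s.local >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      echo "cluster did not validate in time" >&2
      return 1
    fi
    sleep 30
  done
  echo "cluster is ready"
}
```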
## For scaling the cluster down and back up ##
# Set minSize and maxSize -> 0 in the nodes instance group when done with experimentation
kops edit ig nodes
# Do the same for the master instance group
kops edit ig master-us-east-2a
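`kops edit` opens an interactive editor, which is awkward in a script. The same change can be sketched non-interactively (the helper name is mine, and piping into `kops replace -f -` is an assumption about stdin support):

```shell
# Rewrite minSize/maxSize to 0 in an instance-group spec read from stdin.
zero_ig_sizes() {
  sed -e 's/^\([[:space:]]*minSize:\).*/\1 0/' \
      -e 's/^\([[:space:]]*maxSize:\).*/\1 0/'
}
# Usage (assumes `kops replace` accepts -f - for stdin):
# kops get ig nodes -o yaml | zero_ig_sizes | kops replace -f -
```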
# Change the AMI image for the cluster (kubelet-with-ebpf)
kops edit ig nodes
# In the editor, change the image field -> 277640108729/kubelet-with-ebpf
# Apply the update (and roll it if required). Do not interrupt a rolling update once it has started!
kops update cluster --yes
kops rolling-update cluster --yes
# Create the service account (viperservice)
kubectl create serviceaccount viperservice
# Give cluster-admin (super-user) access to ALL service accounts, cluster-wide
# NOTE: convenient for experimentation, but far too broad for production use
kubectl create clusterrolebinding serviceaccounts-cluster-admin \
--clusterrole=cluster-admin \
--group=system:serviceaccounts
# TODO: Update service account of the cluster
# Setup Istio (run from home directory)
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.6.2
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo
kubectl label namespace default istio-injection=enabled
# Deploy GoogleMicroservices on kubernetes cluster
git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
cd microservices-demo
kubectl apply -f release/kubernetes-manifests.yaml
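The demo's pods take a while to come up, so before load-testing it is worth checking that everything is Running. A small helper (my own; assumes the default `kubectl get pods` table layout with STATUS in the third column):

```shell
# Count pods whose STATUS column is not "Running".
pods_not_running() {
  awk 'NR > 1 && $3 != "Running" { n++ } END { print n + 0 }'
}
# Usage: kubectl get pods | pods_not_running   # 0 means everything is up
```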
# Get the kubernetes endpoint information
kubectl cluster-info
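The endpoint URL can be pulled out of that output mechanically. A sketch (the exact `cluster-info` wording varies across kubectl versions, and the output may contain ANSI color codes, so treat the pattern as an assumption):

```shell
# Print the first https URL found on stdin (e.g. the API server endpoint).
api_endpoint() {
  grep -o 'https://[^ ]*' | head -n 1
}
# Usage: KUBERNETES_ENDPOINT_IP=$(kubectl cluster-info | api_endpoint)
```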
# Get the serviceaccount API token (replace xxxxx with the suffix shown by `kubectl get secrets`)
# Note: `kubectl get secret` only shows metadata; `describe` prints the token itself
kubectl describe secret viperservice-token-xxxxx
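To extract just the token rather than eyeballing the secret, the `data.token` field can be pulled out and base64-decoded. A sketch (the helper name is mine):

```shell
# Print the decoded service-account token stored in a secret.
sa_token() {
  kubectl get secret "$1" -o jsonpath='{.data.token}' | base64 --decode
}
# Usage: sa_token viperservice-token-xxxxx
```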
# Plug in Kubernetes API key and Kubernetes endpoint IP into the controller
# Make sure the Pipeline instance is up and running
# Run the controller
# Then swarm the kubernetes cluster with locust, e.g. 100 users (you can run this from your own machine)
# Note: these are pre-1.0 locust flags; locust >= 1.0 renamed --no-web -c to --headless -u
# Replace $KUBERNETES_ENDPOINT_IP with the AWS instance address (https://github.com/Brown-NSG/ServiceMeshDiagnosis/blob/master/ViperProbe/documentation/notes.md)
locust --host="$KUBERNETES_ENDPOINT_IP" --no-web -c "${USERS:-100}" 2>&1
## For deletion of cluster
kops delete cluster --name mycluster.k8s.local --yes