Spinnaker installation options

In this article, we discuss the different ways to install OpsMx Spinnaker on various platforms:

  • Helm Chart
  • Minikube
  • K3s using Ubuntu EC2
  • Air Gapped/Offline Installation
  • Operator
Helm Chart

Introduction

The recommended way to deploy Spinnaker is in a Kubernetes environment. In this tutorial, we show how easy it is to set up a Spinnaker instance in Kubernetes using Helm charts. You can have Spinnaker up and running in less than 20 minutes. To be successful, the Kubernetes cluster needs to support:

  • 18 GB of memory and 6 CPU cores

  • persistent storage volumes

  • load balancers

Prerequisites

  • You have a Kubernetes cluster ready and kubectl can connect to that cluster

  • You have Helm v3 installed

Deploy Spinnaker

Note: If you are using OpenShift, you can replace kubectl with oc commands.

Note: In OpenShift, you may need to grant additional permissions to the service account. For example, if you are installing the Helm chart below using the name mychart, you should grant the following to the service account mychart-spinnaker-halyard:

oc adm policy add-scc-to-user anyuid -z mychart-spinnaker-halyard

We will install Spinnaker using the published stable OpsMx Helm chart. If you want to customize the installation, download the values.yaml file and edit it:

https://raw.githubusercontent.com/OpsMx/enterprise-spinnaker/master/charts/oes/values.yaml

Run the following helm commands to set up Spinnaker (the -f values.yaml flag is required only if you are customizing your installation):

helm repo add opsmx https://helmcharts.opsmx.com/
helm install <release-name> opsmx/oes --set installationMode=NONE --set imageCredentials.username=<dockerID> --set imageCredentials.password=<dockerPassword>
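For example, a customized installation that uses the downloaded and edited values.yaml adds the -f flag to the same command (a sketch; the release name and Docker credentials remain placeholders):

helm install <release-name> opsmx/oes -f values.yaml --set installationMode=NONE --set imageCredentials.username=<dockerID> --set imageCredentials.password=<dockerPassword>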

Give it some time. Once all the pods are running, your Spinnaker is deployed and ready to use.

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
spin-clouddriver-749d9c9589-wg49j 1/1 Running 0 11m
spin-deck-775cdfccff-shzcb 1/1 Running 0 11m
spin-echo-5fd76b8f79-fkdm9 1/1 Running 0 11m
spin-front50-597b4ff46d-72kxr 1/1 Running 0 11m
spin-gate-7dc88df658-xl8nm 1/1 Running 0 11m
spin-igor-67b9b4c66-zs28q 1/1 Running 0 11m
spin-orca-7d5958f8d6-c6w7x 1/1 Running 0 11m
spin-rosco-67847b657b-gx5vc 1/1 Running 0 11m
spinnaker-install-using-hal-mdt4q 0/1 Completed 0 14m
spinnaker-minio-5c994565d6-fjdkm 1/1 Running 0 14m
spinnaker-redis-master-0 1/1 Running 0 14m
spinnaker-spinnaker-halyard-0 1/1 Running 0 14m

Your Spinnaker is deployed, but how do you access it? You could always create port-forwards as mentioned in the output of the helm install command, but that only works well from your local laptop. If your Kubernetes cluster supports LoadBalancer services, it is easier to create a load balancer and access Spinnaker through it. This is the best option to make it available to everyone.
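For a quick local check you can use port-forwards instead; a minimal sketch, assuming the chart creates the default spin-deck and spin-gate services on ports 9000 and 8084:

# Forward the UI (deck) and API gateway (gate) to the local machine
kubectl port-forward service/spin-deck 9000:9000 &
kubectl port-forward service/spin-gate 8084:8084 &
# Then open http://localhost:9000 in a browser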

To expose Spinnaker through a load balancer, create a file spinsvcs.yml defining the LoadBalancer service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: spin
    stack: deck
  name: spin-deck-ui
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 9000
    protocol: TCP
  selector:
    cluster: "spin-deck"

Run the following command to create the service:

kubectl apply -f spinsvcs.yml

To get the external address to connect to Spinnaker, run:

$ kubectl get service spin-deck-ui
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
spin-deck-ui LoadBalancer 172.30.16.45 52.147.219.48 9000:31544/TCP 12m

Once you have the EXTERNAL-IP address, you can connect to Spinnaker using that value. For example, using the above output:

http://52.147.219.48:9000
Minikube

Minikube provides a way to install and run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

Prerequisites

Before installing Spinnaker on Minikube using Helm, ensure the following:

  1. The Minikube cluster is ready and kubectl can connect to that cluster

  2. Helm 3.2.4 is installed

Install Kubectl

We need the latest version of kubectl to manage the Kubernetes cluster.

To install kubectl, run the following commands:

sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

Install Minikube

To install Minikube, run the following commands:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb

Start the Minikube Cluster

Start Minikube. Keep in mind that Spinnaker is composed of multiple microservices, so you will need significant memory and CPU to run it locally.

  • To start the Minikube cluster, run the following command:

minikube start
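Minikube's defaults are usually too small for Spinnaker. If your machine has the headroom, you can allocate more resources explicitly; the values below are illustrative, not prescriptive:

minikube start --cpus=4 --memory=10240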

Once it has started, Minikube reports that the cluster is running and that kubectl is configured to use it.

  • If you need to stop the Minikube cluster, run the following command.

minikube stop

To check the Minikube cluster status, run the following command.

minikube status

Install Helm

Before installing Spinnaker, we need to set up Helm on the Kubernetes cluster.

Download the latest release of the Helm binary and move it to a directory on your PATH.

  • Download Helm from the URL below:

https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz

  • Uncompress the tar file and move the binary into place:

tar -xf helm-v3.2.4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

To check the helm version run the following command:

helm version

Deploy Spinnaker

With Minikube and Helm installed and running, we can now install Spinnaker. Since it is available as a Helm chart, this takes only a couple of steps.

Before we deploy Spinnaker, we need a configuration file in YAML format, which provides the initial set of configuration values.

  • To get this file from the GitHub Spinnaker Helm chart repository and add the stable repo, run the following commands:

wget https://raw.githubusercontent.com/helm/charts/master/stable/spinnaker/values.yaml
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
  • Change the Spinnaker version and Halyard image tag values in the values.yaml file as given below:

vi values.yaml
halyard:
  spinnakerVersion: 1.21.2
  image:
    repository: gcr.io/spinnaker-marketplace/halyard
    tag: 1.38.0
  • Install Spinnaker using the following command:

helm install spinnaker stable/spinnaker -f values.yaml
kubectl get pods

All the Spinnaker components should be in the Running state.

  • Once all the Spinnaker components are running, list the services using the following command:

kubectl get svc
  • Edit the spin-deck service to expose the Spinnaker UI using the following command:

kubectl edit svc spin-deck
  • Change the service type from ClusterIP to NodePort as given below (a non-interactive alternative using kubectl patch is shown after the YAML):

spec:
  clusterIP: 10.107.78.199
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31529
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: spin
    cluster: spin-deck
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
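If you prefer not to edit the service interactively, a kubectl patch along these lines achieves the same type change (a sketch, equivalent to the edit above):

kubectl patch svc spin-deck -p '{"spec": {"type": "NodePort"}}'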
  • To find the Minikube IP for accessing Spinnaker, run the following command:

$ minikube ip
192.168.99.100
  • Spinnaker can now be accessed from the browser at the following URL, built from the Minikube IP and the spin-deck NodePort (an alternative command is shown below):

http://192.168.99.100:31529
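Alternatively, Minikube can print the full URL of the NodePort service for you:

minikube service spin-deck --url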
K3s using Ubuntu EC2

What is K3s?

K3s is a lightweight Kubernetes distribution designed as a single binary of less than 40 MB that completely implements the Kubernetes API. Due to its low resource requirements, it is possible to run a cluster on machines with as little as 512 MB of RAM. Find more information in the K3s documentation.

Prerequisites:

  1. An EC2 instance with at least 6 cores and 20 GB of memory (for running OES)

  2. Helm 3 set up and initialized on your instance. If Helm is not set up, follow https://helm.sh/docs/intro/install/ to install it.

Steps:

  1. To install K3s, switch to the root user and execute the following command:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig=/home/ubuntu/.kube/config --write-kubeconfig-mode=644" sh -

By default, the Kubernetes config file is written for the root user. To write it for the ubuntu user, use the --write-kubeconfig option as shown. Find all the available installation options in the K3s documentation.
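To confirm that the cluster is up using the kubeconfig written above, a quick check (run as the ubuntu user) might be:

export KUBECONFIG=/home/ubuntu/.kube/config
kubectl get nodes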

2. Create a namespace and set it as the default using the following commands:

kubectl create namespace oes
kubectl config set-context --current --namespace=oes

3. Add the OpsMx Helm repo to your instance (as the ubuntu user) using the following command:

helm repo add opsmx https://helmcharts.opsmx.com/

4. Install the Helm chart into the namespace created above, using the repo added above. Use the following command:

helm install <RELEASE_NAME> opsmx/oes --set installSpinnaker=false --set installRedis=true --set imageCredentials.username=<DOCKER_ID> --set imageCredentials.password=<PASSWORD> --set k8sServiceType=NodePort --set autoConfiguration.initContainer.externalIpCheckDelay=10 --set installationMode=OES

Available installation modes are:

  • OES-AP (both OES and Autopilot)

  • OES (only OES)

  • AP (only Autopilot)

  • NONE (skip OES installation)

By default, OES-AP is installed if no installation mode is specified. For more information, please visit: https://github.com/OpsMx/enterprise-spinnaker/tree/master/charts/oes

Note: In the above command, RELEASE_NAME is the name that you want to give to this particular installation. If you do not wish to specify a release name, pass --generate-name while running the helm install command. Find more details in the Helm documentation.
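After running the install command, you can watch the pods come up in the oes namespace that was set as the default earlier:

kubectl get pods --watch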

5. Edit the ConfigMaps of oes-ui and oes-gate

The OES installation works out of the box without any manual intervention on cloud-managed Kubernetes clusters, where load balancers are assigned to the oes-ui and oes-gate services.

In this case, however, the private IP of the instance is assigned to the oes-ui and oes-gate services, so to reach oes-ui through the browser you need to edit the ConfigMaps of oes-ui and oes-gate as follows. In oes-ui-config, replace OES_GATE_IP with the IP of the node (typically the VM IP on a single-node Kubernetes cluster; here, the public IP of the EC2 instance).

In oes-gate-config, replace OES_UI_LOADBALANCER_IP with the public IP of the instance.
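For example, the ConfigMaps can be edited in place with kubectl edit (the names below come from the chart; adjust them if your release names them differently):

kubectl edit configmap oes-ui-config
kubectl edit configmap oes-gate-config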

6. After making the above changes to configmaps, restart oes-gate and oes-ui pods by running the following command:

kubectl rollout restart deploy oes-ui oes-gate

7. Once all pods are in a ready state, access oes-ui using URL as given below:

http://<public-ip of instance>:<OES-ui-Nodeport>

Fetch the OES-ui-Nodeport by running the following command:

kubectl get svc oes-ui -o jsonpath="{.spec.ports[].nodePort}"

8. Login to OES using default credentials:

Username: admin

Password: opsmxadmin123

9. To add worker nodes to the K3s cluster (if required), execute the following commands on the worker node:

export K3S_URL=https://<master-private-ip>:6443
export K3S_TOKEN=<paste-the-token-stored-in-master-node>
curl -sfL https://get.k3s.io | sh -

The token is located on the master node under /var/lib/rancher/k3s/server.
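For example, on the master node the token can be read from the node-token file in that directory (the default K3s location):

sudo cat /var/lib/rancher/k3s/server/node-token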

Air Gapped/Offline Installation

At times, an environment does not have Internet connectivity, or connectivity is temporarily unavailable. In such an environment there is no out-of-the-box procedure to install Spinnaker. Instead, we prepare the installation package on an Internet-connected machine and then use the resulting offline package on a machine without Internet access to install Spinnaker.

You can use an Ubuntu 16.04 / CentOS 7.5 machine to prepare the offline package and to install the package from. If you use another environment such as macOS or another Linux distribution, you might have to adjust a few command parameters if any error is reported during execution.

Prerequisites

Get the required shell scripts and chart from here:

https://drive.google.com/file/d/1XHZtqLJgPfMlQjolYA_bu2hoGx-OO0Nz/view?usp=sharing

Internet-connected machine

The following tools are required on the Internet-connected machine:

- Docker CLI

- yq CLI

- gsutil CLI

- Subversion CLI

- curl

You may use sudo --preserve-env ./installUtils.sh netspin-mc to install required CLIs.

Internet Disconnected / Air Gapped machine

Make sure the target Spinnaker server meets the following hardware requirements:

- A minimum of 16 GB of memory and a 4-core processor.

- PVs of 2 x 10 GB (for Halyard and MinIO) and an 8 GB disk for Redis to complete a successful Spinnaker installation.

It is expected that the air-gapped system has access to a private Docker registry and will pull the Spinnaker services' images from that privately hosted registry.

The following tools should be available on the Air Gapped target Spinnaker machine:

- Docker CLI

- Helm 3

- Curl

- A working Kubernetes cluster, version 1.14.0 or later but less than 1.16.0

You may use sudo --preserve-env ./installUtils.sh olspin-mc to install required CLIs.

How does it work?

Halyard, Spinnaker's installation tool, performs the Spinnaker installation on the target machine using the configuration available at $HOME/.hal/.boms/bom/1.18.5.yml (the Spinnaker version file). Download this file, plus its related files, and instruct Halyard to use them locally instead of downloading from the internet. Then use the Spinnaker Helm chart to perform the installation.

Installation Steps

  1. Download Spinnaker Offline files:

Download Docker images of Spinnaker components and Bill of Materials (BOM) files for the chosen version of Spinnaker. Package the downloaded Docker images and BOM along with the Helm chart.

2. Ship the Offline package to the target environment:

Copy the generated offline package to a server with access to the target Kubernetes cluster to deploy the Spinnaker application.

3. Upload the Spinnaker images to private Docker repository:

Push the Docker images of Spinnaker from the offline package into the internal private Docker repository of the company.

4. Perform Spinnaker installation:

Run Helm install to perform Spinnaker installation in the target Kubernetes environment.

Detailed Procedure

Step 1. Download Spinnaker Offline files

Note:

  • This procedure is performed in an Internet connected machine, under the directory - 'airgapped-spinnaker'.

  • The CLIs - docker, yq, gsutil, subversion, curl are required to be installed on the machine. Docker daemon should be running and the current-user should be a member of the ‘docker’ group. You may use sudo --preserve-env ./installUtils.sh netspin-mc to install required CLIs and to ensure current-user is added to the ‘docker’ group.

  • Images that support anonymous user context are required for running Spinnaker in a restricted Kubernetes/Openshift environment. The file dependencies-and-overrides-pull.yml is used to pull ubi8 images compatible with default OpenShift configurations.

  • Executing the shell script netspin-pkgoffline.sh downloads the below files and produces a single compressed file - airgapped-spinnaker.tar.gz.

    • The Bill Of Materials (BOM) files of chosen Spinnaker version.

    • Docker images as referenced in <version>.yml file.

    • Spinnaker Helm chart (Important: Use OpsMx customized chart).

    • Shell Scripts to perform offline installation of Spinnaker.

Steps

  1. Edit the file spin-offline.var and update the Spinnaker version of your choice and other variable settings.

  2. Edit the file dependencies-and-overrides-pull.yml to pull custom Spinnaker images and the images of dependency services like Minio, Redis and Halyard.

  3. Download the files and create package tar file by running the below commands:

source spin-offline.var
bash netspin-pkgoffline.sh
# The above netspin-pkgoffline.sh script calls the other
# scripts netspin-getbom.sh (downloads BOM files) and
# netspin-getimages.sh (downloads Docker images)

Step 2. Ship the offline package to target environment

Once the offline package is created, copy airgapped-spinnaker.tar.gz to a system with access to the target Kubernetes/OpenShift cluster where Spinnaker will be deployed.

Extract the tar file airgapped-spinnaker.tar.gz which contains installation scripts and other offline files.

Step 3. Upload the Spinnaker images to private Docker repository

Note:

  • This procedure is performed in a machine connected with the target Kubernetes/Openshift cluster, under the directory - 'airgapped-spinnaker'.

  • The helm 3 and docker CLIs must be installed on the machine. The Docker daemon should be running and the user should be a member of the ‘docker’ group.

Steps

  1. Edit the file spin-offline.var and ensure Spinnaker version and other variable settings are as desired. Make sure to mention the private docker registry URL.

  2. Execute the script - olspin-pushimages.sh. This will extract the images from spin-images.tar.gz and push them into the private registry.

bash olspin-pushimages.sh

Step 4. Perform Offline Spinnaker Installation

Note:

  • The CLIs - helm 3 and docker are required to be installed on the machine. Docker daemon should be running and the user should be a member of the ‘docker’ group.

  • Kubectl (or oc) installed and configured to talk to the Openshift/Kubernetes cluster.

  • For a restricted cluster, a service account is required, and its kubeconfig file should be generated and configured for use.

  • Docker images are sourced from images-overrides_values.yaml and this file would have been generated automatically by the script olspin-pushimages.sh in the previous step.

  • The target namespace for deploying Spinnaker is assumed to be ‘offline’. Change the value in spin-offline.var, sa-spinnaker.yaml, and the commands here to your choice. The installation creates resources only within the namespace; no cluster-level resources are created.

Steps

  1. Create a namespace 'offline' and set the current context to use it as the default namespace for subsequent commands, to keep the commands short, as shown below:

#Create namespace
kubectl create ns offline
# Set current-context to use the ‘offline’ namespace
kubectl config set-context --current --namespace=offline

Going forward, we don’t have to specify -n offline argument to the kubectl command.

Optionally, create the PVs required for data persistence (not needed on a managed cluster):

bash vagrantPreInstall.sh

2. Execute the preInstallSpin.sh script. This creates a ConfigMap for the custom spin-boms.tar.gz file. It also creates the ServiceAccount for Halyard along with its kubeconfig secret.
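For example, assuming the script is run from the extracted airgapped-spinnaker directory:

bash preInstallSpin.sh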

3. Edit the values.yaml file with custom inputs:

a. Private Docker registry and passwords.

b. Halyard service account and kubeconfig file.

c. Disable RBAC and SecurityContext.

d. Enable CustomBom and the configmap name.

4. Edit the installSpin.sh script with a valid Halyard image and other settings, then run it. This uses Helm 3 to install Spinnaker and takes about 20 minutes to complete.

bash installSpin.sh

5. During installation, you can open another terminal and check the Halyard job's pod (the one whose name contains using-hal, not the Halyard StatefulSet's pod) to track the progress of the Spinnaker installation.
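For example, you could follow that pod's logs; the exact name depends on your release (the Helm chart example earlier showed a job pod named spinnaker-install-using-hal-mdt4q):

kubectl logs -f <install-job-pod-name>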

6. Verify if all of the Spinnaker service pods are started successfully:

watch kubectl get pods

If all pods are running, you have successfully installed Spinnaker.

Appendix

ServiceAccount for installing Spinnaker

# Create a service-account in offline namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-spinnaker
  namespace: offline

---

# Create a role role-spinnaker in offline namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-spinnaker
  namespace: offline
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

---

# Create a rolebinding in offline namespace, to bind the service-account to role-spinnaker
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbind-spinnaker
  namespace: offline
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-spinnaker
subjects:
- namespace: offline # Where the service-account exists
  kind: ServiceAccount
  name: sa-spinnaker

Kubeconfig file of the Service Account

kubectl apply -f sa-spinnaker.yaml

# Update these to match your environment
SA_NAME=sa-spinnaker
CONTEXT=$(kubectl config current-context)
NAMESPACE=offline
NEW_CONTEXT=ctx-sa-spinnaker
KUBECONFIG_FILE="kubeconfig-sa-spinnaker.cfg"
SECRET_NAME=$(kubectl get serviceaccount ${SA_NAME} \
--context ${CONTEXT} \
--namespace ${NAMESPACE} \
-o jsonpath='{.secrets[0].name}')
TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} \
--context ${CONTEXT} \
--namespace ${NAMESPACE} \
-o jsonpath='{.data.token}')
TOKEN=$(echo ${TOKEN_DATA} | base64 -d)
#echo SECRET_NAME $SECRET_NAME
#echo TOKEN_DATA $TOKEN_DATA
#echo TOKEN $TOKEN
#exit

---

# Create dedicated kubeconfig
# Create a full copy
kubectl config view --raw > ${KUBECONFIG_FILE}.full.tmp

---

# Switch working context to correct context
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp config use-context ${CONTEXT}

---

# Minify
kubectl --kubeconfig ${KUBECONFIG_FILE}.full.tmp \
config view --flatten --minify > ${KUBECONFIG_FILE}.tmp

---

# Rename context
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
rename-context ${CONTEXT} ${NEW_CONTEXT}

---

# Create token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
set-credentials ${CONTEXT}-${NAMESPACE}-token-user \
--token ${TOKEN}

---

# Set context to use token user
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
set-context ${NEW_CONTEXT} --user ${CONTEXT}-${NAMESPACE}-token-user

---

# Set context to correct namespace
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
set-context ${NEW_CONTEXT} --namespace ${NAMESPACE}

---

# Flatten/minify kubeconfig
kubectl config --kubeconfig ${KUBECONFIG_FILE}.tmp \
view --flatten --minify > ${KUBECONFIG_FILE}

---

# Remove tmp
rm ${KUBECONFIG_FILE}.full.tmp
rm ${KUBECONFIG_FILE}.tmp
Operator

Prerequisites:

  • The Spinnaker Operator kit requires an OpenShift 4.5+ cluster that contains enough resources to run Spinnaker. Configure the oc CLI locally on your machine to point to your OpenShift cluster.

  • The OpenShift cluster must be registered in the Red Hat Marketplace. If it is not registered, please refer to the Red Hat Marketplace docs. This must be done prior to installing the operator.

  • OpenShift requirements such as the namespace on the cluster where Spinnaker needs to be installed.

How the Operator Works:

The Helm Operator is designed to manage stateless applications that require very little logic when rolled out. The main function of an Operator is to read a custom object that represents your application instance and make the desired state match what is running.

Spinnaker Operator Installation from Red Hat Marketplace:

  1. Log in to the Red Hat Marketplace.

  2. On the main menu, click Workspace > My Software > OpsMx Enterprise for Spinnaker > Install Operator.

  3. On the Update Channel section, select “alpha”.

  4. On the Approval Strategy section, select either Automatic or Manual. The approval strategy corresponds to how you want to process operator upgrades.

  5. On the Target Cluster section:

    • Click the checkbox next to the clusters where you want to install the Operator.

    • For each cluster selected, under Namespace Scope, on the Select Scope list, select an option.

  6. Click Install. It may take several minutes for the Spinnaker installation to complete.

  7. Once the installation is complete, the status changes from Installing to Up to date.

Verification of operator Installation:

  1. Once status changes to Up to date, click the vertical ellipses and select Cluster Console.

  2. Open the cluster where you installed the product

  3. Go to Operators > Installed Operators

  4. Select the Namespace or Project you installed on

  5. Verify status for product is Succeeded

  6. Click the product name to open details

  7. Click Install “Open-Enterprise-Spinnaker”.

  8. You should see all the pods related to Spinnaker coming up and running under the Workloads section.

  9. Once all the Spinnaker pods are up and running, create proper routes to deck and gate (an example is shown below) and try to access Spinnaker. You should be able to view the Spinnaker UI.
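For example, on OpenShift the routes can be created with oc expose (assuming the default spin-deck and spin-gate service names in your Spinnaker namespace):

oc expose service spin-deck
oc expose service spin-gate
oc get routes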