How to Set Up Minio Storage Service for Spinnaker on OpenShift
This document covers how to set up the Minio storage service for Spinnaker in an OpenShift environment at a primary site.
Spinnaker requires an external storage provider for persisting your application settings and configured pipelines. Because this data is sensitive and its loss would be disruptive, we recommend using a hosted storage solution with the required level of availability and reliability.
The most commonly used storage services for hosting this Spinnaker data are AWS S3 and GCP GCS (Google Cloud Storage). However, if you are looking for an on-premises hosted solution, we recommend Minio.
Minio is an S3-compatible object store that users can host on-premises as a Docker container. In this document, we will show how to run Minio as pods on OpenShift to provide persistent storage for Spinnaker, whether Spinnaker is running in Local-Debian or Distributed mode.
The Minio pods accept the username and password as secrets in base64 format, which can be customized per the user's choice.
Before we begin, below are the prerequisites for the installation.
- OpenShift 3.6 or above
- Familiarity with Minio
- Understanding of how to set up Minio as a storage service for Spinnaker
Setting Up Primary Minio Storage Service in OpenShift
In this setup, we configure the Minio credentials as a Secret in the namespace, configure a persistent store, and use a Deployment object to instantiate the Minio service on OpenShift 3.9.
We have also provided links to the files, along with screenshots.
Step 1: Configuring the credentials used by Minio as a Secret
The username (Access Key) and password (Secret Key) need to be protected, so we insert them as a Secret object. Secrets can be created using various methods; in this setup we will convert the username and password to base64 and then copy the values into the secret.yaml as below.
If the username is myminiouser, the base64 value can be generated using the following command:
# username=$(echo -n myminiouser | base64)
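Be careful with the encoding step: `echo` without `-n` appends a trailing newline that gets baked into the base64 value and will break authentication later. A minimal sketch using `printf '%s'`, which avoids the newline portably (the credentials shown are example values):

```shell
# Encode example Minio credentials without a trailing newline.
username=$(printf '%s' myminiouser | base64)
password=$(printf '%s' myminiopass | base64)
echo "$username"    # -> bXltaW5pb3VzZXI=

# Sanity check: decoding must return the original string exactly.
printf '%s' "$username" | base64 -d
```

Paste the printed values into the `data` section of the secret manifest.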
Sample minio-secret.yml file:
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
type: Opaque
data:
  username: < base64-converted username >
  password: < base64-converted password >
Create the secret in the OpenShift default namespace using the following command:
# oc create -f minio-secret.yml
Next, we need to create a Deployment object that uses the secret. A Deployment object was chosen to maintain pod availability and avoid any disruption in the production environment.
Step 2: Creating a Persistent Volume for the Deployment
In OpenShift, a pod's local storage is transient. For persistence, create a persistent volume and bind it through a Persistent Volume Claim (PVC) that Minio will use for storage. Create the PVC, named “minio-pv”, using the minio-pvc.yml manifest file:
# oc create -f minio-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv
  labels:
    app: minio-storage-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
    - ReadWriteMany
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 10Gi
Step 3: Creating a Minio Instance and Providing a Service Endpoint
Create the Minio instance using the minio-deploy.yml manifest file. The deployment manifest uses the minio/minio:latest container image from the default Minio repository.
# oc create -f minio-deploy.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
        - name: storage
          persistentVolumeClaim:
            # Name of the PVC created earlier
            claimName: minio-pv
      containers:
        - name: minio
          # Pulls the default Minio image from Docker Hub
          image: minio/minio
          args:
            - server
            - /storage
          env:
            # Minio access key and secret key
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: username
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: password
          ports:
            - containerPort: 9000
          # Mount the volume into the pod
          volumeMounts:
            - name: storage   # must match the volume name, above
              mountPath: "/storage"
This deployment manifest creates an instance of Minio, attaches the persistent volume, and applies the secrets specified in the namespace to configure the Minio service.
Once the instance is running, create a Service for the Deployment to allow access by clients. The service is created as a ClusterIP using the manifest file minio-svc.yml:
# oc create -f minio-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
Note: If you need an external IP or a route exposed to the outside network, change the service type to NodePort.
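After creating the Deployment and Service, a quick sanity check confirms that the pod is running and the service has a ClusterIP (these `oc` commands assume the default namespace used above):

```shell
# The Minio pod should show STATUS Running.
oc get pods -l app=minio

# The service should list a ClusterIP with port 9000.
oc get svc minio-service
```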
Step 4: Create Buckets on the Minio Server
Minio servers store data in buckets. A bucket is the equivalent of a drive or mount point in a filesystem and should not be treated as a folder. Minio does not limit the number of buckets per user; on Amazon S3, each account is limited to 100 buckets.
Log in to the Minio server that you created, using the Access Key (username) and Secret Key (password) that you provided in the secret object.
Once logged in, you can create a bucket using the add symbol ‘+’ at the bottom-right corner of the screen and name it; here we named the bucket bucket1.
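If you prefer the command line to the web UI, the same bucket can be created with Minio Client (mc) from any machine that can reach the service endpoint. This is a sketch; the alias name `myminio` and the credential placeholders are assumptions:

```shell
# Register the Minio server under an alias (older mc releases use
# "config host add"; newer ones use "mc alias set").
mc config host add myminio http://minio-service:9000 <access-key> <secret-key>

# Create the bucket and list it to confirm.
mc mb myminio/bucket1
mc ls myminio
```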
That’s it. We are done. You have created the primary Minio storage service with a PVC, along with the buckets that will store the data. We will cover replication in the next document.
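Before moving on, Spinnaker itself must be told to use this Minio endpoint as its storage service. A sketch of the Halyard commands, assuming the in-cluster service endpoint and the bucket created above; the access-key and secret-key placeholders are your Minio credentials, and the secret key is read from stdin:

```shell
# Point Spinnaker's S3-compatible storage at the Minio service.
echo "<secret-key>" | hal config storage s3 edit \
    --endpoint http://minio-service:9000 \
    --bucket bucket1 \
    --access-key-id <access-key> \
    --secret-access-key

# Select S3 as the storage type, then apply with "hal deploy apply".
hal config storage edit --type s3
```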
How to Set Up Replication for Minio Storage Service on OpenShift
In the previous document, we covered “How to Set Up Minio Storage Service for Spinnaker on OpenShift” for the primary site.
In this document, we will cover how to set up the Minio storage service as pods on two different sites, and how to use Minio Client (also running as a pod) to replicate data between the sites.
Minio servers store data in buckets. A bucket is the equivalent of a drive or mount point in a filesystem and should not be treated as a folder. Minio does not limit the number of buckets per user; on Amazon S3, each account is limited to 100 buckets.
Once we have Minio set up on both sites, we can create buckets and then replicate them from site 1 to site 2 using the Minio Client.
Minio Client (mc) provides a modern alternative to UNIX commands like ls, cat, cp, mirror, and diff. It supports filesystems and Amazon S3-compatible cloud storage services (AWS Signature v2 and v4).
Minio Client can be installed locally or run as a Docker container; see: https://docs.minio.io/docs/minio-client-complete-guide
However, we are going to install it as a pod, where the bucket names and Minio server endpoints are supplied as secrets and used for replication. We have embedded a script that replicates data between site 1 and site 2. The minio-client replicates from site 1 to site 2: site 1 acts as the primary and site 2 as the secondary.
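The replication script is baked into the opsmx11/minio-client:demo image, so its exact contents are not reproduced here. The sketch below shows how such a script can be built from mc, using the environment variable names that the deployment in Step 3 injects from the secret; treat it as an illustrative assumption, not the shipped script:

```shell
#!/bin/sh
# Sketch of a /data/replication.sh: mirror bucket1 on the primary (minio1)
# to bucket2 on the secondary (minio2). All variables come from the secret.
mc config host add minio1 "$minio1Endpoint" "$minio1Accesskey" "$minio1Secretaccess"
mc config host add minio2 "$minio2Endpoint" "$minio2Accesskey" "$minio2Secretaccess"

# --watch keeps the process running and replicates new objects as they arrive.
mc mirror --watch "minio1/$bucket1" "minio2/$bucket2"
```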
Here are the prerequisites for the setup in this document:
- OpenShift 3.6 or above
- Familiarity with Minio
- Spinnaker Setup with Minio as Storage Service
- Minio server setup http://docs.opsmx.com/#how-to-setup-minio-storage-service-for-spinnaker-on-openshift
Step 1: Set Up the Minio Storage Service on the Primary and Secondary Sites
Please follow the steps in “How to Set Up Minio Storage Service for Spinnaker on OpenShift”, including setting up the service, deployment, Persistent Volume Claim (PVC), and secrets, for both the primary and secondary sites.
If you have exposed a route/IP for the sites, it needs to be converted to base64 and provided to the minio-client as a secret.
Step 2: Convert data into base64
In this document, we will set up data transfer between Bucket1 on the primary site and Bucket2 on the secondary site. The Minio server on the primary site is referred to as minio1, and the one on the secondary site as minio2. The data is accepted as a secret and is named as below.
| Primary Site | Secondary Site |
| --- | --- |
| bucket1 | bucket2 |
| minio1endpoint | minio2endpoint |
| minio1accesskey | minio2accesskey |
| minio1secretaccess | minio2secretaccess |
Once these values are converted to base64, we move on to the next step of creating the secret, PVC, and deployment.
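As in the primary-site document, use `printf` rather than `echo` when encoding, so that no trailing newline ends up inside the value. For example, encoding a service endpoint (the endpoint shown is a placeholder):

```shell
# Encode the primary endpoint for the mcclient secret; printf avoids
# embedding a trailing newline in the encoded value.
endpoint="http://minio-service:9000"
encoded=$(printf '%s' "$endpoint" | base64)
echo "$encoded"

# Verify the encoding round-trips to the original value.
printf '%s' "$encoded" | base64 -d
```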
Step 3: Creating Secret, Deployment and PVC Object.
*The secret is named “mcclient”:
# oc create -f minio-client-secret.yml
apiVersion: v1
data:
  bucket1: < base64-converted name of the bucket on the primary site >
  bucket2: < base64-converted name of the bucket on the secondary site >
  minio1accesskey: < base64-converted access key of the primary Minio server >
  minio1endpoint: < base64-converted endpoint, e.g. http://minio-service >
  minio1secretaccess: < base64-converted secret key of the primary Minio server >
  minio2accesskey: < base64-converted access key of the secondary Minio server >
  minio2endpoint: < base64-converted endpoint, e.g. http://minio-service >
  minio2secretaccess: < base64-converted secret key of the secondary Minio server >
kind: Secret
metadata:
  name: mcclient
*The PVC is named “mc”. Ensure that you have a persistent volume backing the PVC:
# oc create -f minio-client-pvc.yml
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "mc"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "10Gi"
*Finally, we set up the deployment object. The deployment accepts the secret and the PVC and starts replicating the data. In this setup, a script running inside the pod, /data/replication.sh, performs the replication work:
# oc create -f minio-client.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-client-deployment
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio-client
    spec:
      containers:
        - name: minio-client
          image: docker.io/opsmx11/minio-client:demo
          imagePullPolicy: Always
          env:
            - name: minio1Endpoint
              valueFrom:
                secretKeyRef:
                  name: mcclient
                  key: minio1endpoint
            - name: minio1Accesskey
              valueFrom:
                secretKeyRef:
                  name: mcclient
                  key: minio1accesskey
            - name: minio1Secretaccess
              valueFrom:
                secretKeyRef:
                  name: mcclient
                  key: minio1secretaccess
            - name: minio2Endpoint
              valueFrom:
                secretKeyRef:
                  name: mcclient
                  key: minio2endpoint
            - name: minio2Accesskey
              valueFrom:
                secretKeyRef:
                  name: mcclient
                  key: minio2accesskey
            - name: minio2Secretaccess
              valueFrom:
                secretKeyRef:
                  name: mcclient
                  key: minio2secretaccess
            - name: bucket1
              valueFrom:
                secretKeyRef:
                  name: mcclient
                  key: bucket1
            - name: bucket2
              valueFrom:
                secretKeyRef:
                  name: mcclient
                  key: bucket2
          command: ["/bin/sh", "-c"]
          args: ["sh /data/replication.sh"]
          volumeMounts:
            - mountPath: /backup
              name: mc-data
      volumes:
        - name: mc-data
          persistentVolumeClaim:
            claimName: mc
That’s it. We are done. Once all the objects are configured and in place, our script takes care of replication from the Site 1 Minio storage to the Site 2 Minio storage.
Enabling Spinnaker Monitoring using Prometheus and Grafana
In this document, we will show how to enable monitoring of Spinnaker using Prometheus with Grafana in an OpenShift environment.
Here are the prerequisites for the setup in this document:
- Spinnaker 1.10.0 or above
- Halyard 1.11.0 or above
- Clouddriver installed in High Availability mode
- OpenShift v3.6 or above
- Spinnaker deployed in Distributed mode on OpenShift/Kubernetes
Here, we have customized the images and deployment objects for a production-ready OpenShift environment, since a Spinnaker installation on OpenShift needs to be set up so that a non-root user can access the pods and containers.
The Prometheus and Grafana images used for monitoring are stored in the opsmx11/prometheus repo.
So let’s begin by installing Clouddriver, the heart of Spinnaker, in High Availability mode.
Step 1: Install Clouddriver in High Availability Mode
# hal config deploy ha clouddriver enable
For more details on the Clouddriver in HA mode please visit: https://www.spinnaker.io/reference/halyard/high-availability/
Step 2: Ensure that the artifact is placed into the monitoring-daemon local profile
Note: This step is needed only for Spinnaker releases prior to 1.10.3, to work around a bug where Prometheus is unable to fetch the service name of Clouddriver. This step can be skipped for Spinnaker 1.10.3 or later.
# vi .hal/default/profiles/monitoring-daemon-local.yml

artifactId: opsmx11/monitoring:v1
Step 3: Ensure Spinnaker services are running on port 8008
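One way to verify this is to port-forward a service pod and query the monitoring daemon directly. The pod name below is a placeholder, and `/prometheus_metrics` is assumed to be the daemon's default metrics path:

```shell
# Forward local port 8008 to the monitoring-daemon sidecar of a service pod.
oc port-forward <clouddriver-pod-name> 8008:8008 -n spinnaker &

# The daemon serves Prometheus-format metrics on this path by default.
curl -s http://localhost:8008/prometheus_metrics | head
```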
Step 4: Install and run the Prometheus and Grafana Pod
In this setup, we run Prometheus and Grafana in a single pod that has pre-configured dashboards for Clouddriver in HA mode.
One can also edit the Prometheus scrape targets, grafana.db, and datasources, since they are passed to the pod as secrets. This ensures users can customize them as they choose.
We have also used a Persistent Volume to ensure that the Prometheus and Grafana data are not lost whenever the pod restarts. Below are the deployment files, along with the service and persistent volume claim YAMLs.
The files below can be found in the GitHub repo: https://github.com/OpsMx/Openshift-Spinnaker/tree/master/prometheus
To generate the secret, we use the following files (these can be customized as requirements change):
# oc create secret generic prometheus --from-file=local-prometheus.yml --from-file=grafana.db --from-file=datasources.yml -n spinnaker
So, the generated secret has the name “prometheus” and was created in the namespace “spinnaker”.
Next, we generate the Service, PersistentVolumeClaim, and Deployment using the files in the same namespace, “spinnaker”:
# oc create -f prom-svc.yml -n spinnaker
# oc create -f prom-deploy.yml -n spinnaker
# oc create -f prom-pvc.yml -n spinnaker
The image that is being used is opsmx11/prometheus:v1.6.0
Step 5: Expose port 9090 for Prometheus and 3000 for Grafana
After the Prometheus-Grafana pod is launched, open ports 9090 and 3000 on the server’s firewall in the cloud environment, or create routes on the OpenShift server.
On GCP, add the custom ports under VPC Network -> Firewall settings -> tcp:9090,3000.
On AWS, add the custom ports under VPC -> Security Groups -> tcp:9090, tcp:3000.
On OpenShift, create routes by going to the service and using the Create Route option.
Once created, you can check the routes and their details.
Verify that you can reach the Prometheus and Grafana pages through the routes.
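The routes can also be created from the CLI instead of the console. A sketch, assuming the Prometheus-Grafana pod is exposed through a service named `prometheus` in the `spinnaker` namespace (the service name is an assumption; check prom-svc.yml for the actual name):

```shell
# Expose Prometheus (9090) and Grafana (3000) as OpenShift routes.
oc expose service prometheus --port=9090 --name=prometheus-route -n spinnaker
oc expose service prometheus --port=3000 --name=grafana-route -n spinnaker

# List the generated route hostnames.
oc get routes -n spinnaker
```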
Step 6: Redeploy Spinnaker for the custom changes
We need to redeploy Spinnaker for the custom changes to the monitoring daemon and Clouddriver HA to take effect.
# hal deploy apply