Cloud & Storage Providers

Overview

In this chapter we will discuss the cloud providers and the storage providers supported by Spinnaker. By the end of this chapter, you should be able to:

  • Set up Spinnaker on cloud providers

  • Set up Spinnaker on storage providers

Set up Spinnaker on cloud providers

Spinnaker and its capabilities are built primarily on top of the cloud providers it supports. You must enable at least one of the supported providers for Spinnaker to function; a minimal example of enabling a provider follows the list below.

The supported providers are:

  • App Engine

  • Amazon Web Services

  • Azure

  • Cloud Foundry

  • DC/OS

  • Google Compute Engine

  • Kubernetes

  • Oracle
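
Each provider is enabled through Halyard once it has been configured. As a minimal sketch of the general pattern (the Kubernetes form shown here is the one used later in this chapter):

    # hal config provider <provider-name> enable

    For example: # hal config provider kubernetes enable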

In this chapter we will learn about setting up the Kubernetes cloud provider and the S3 storage provider.

Prerequisites for Setting up Spinnaker

  • An experimental Kubernetes cluster

  • The kubectl binary and a kubeconfig file to access the Kubernetes cluster (a quick check is sketched after this list)

  • A Minio deployment

  • A Halyard deployment
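
Once the Kubernetes cluster described in the next section is up, a quick way to confirm that kubectl and the kubeconfig file can reach it (this check is not part of the original guide) is to list the cluster nodes:

    $ kubectl --kubeconfig ~/.kube/config get nodes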

Kubernetes Cluster Installation on Ubuntu 18.04

Docker is a prerequisite for the Kubernetes installation. Follow the instructions below to install Docker.

Docker Installation

  • Use the following command to update existing packages.

    $ sudo apt update
  • Install a few prerequisite packages that allow apt to fetch packages over HTTPS.

    $ sudo apt install apt-transport-https ca-certificates curl software-properties-common
  • Add the GPG key for the official Docker repository to the system.

    $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  • Add the Docker repository to the APT sources.

    $ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
  • Update the package database from the newly added repo:

    $ sudo apt update
  • Make sure you install from the Docker repo rather than the default Ubuntu repo.

    $ apt-cache policy docker-ce

It will list the Docker versions that are available and show which repository each comes from.

  • Use the following command to install Docker.

    $ sudo apt install docker-ce
  • Run the following command to verify that Docker is installed and running.

    $ sudo systemctl status docker
  • The output should show the Docker service as active (running).
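
As an additional check (this step is not part of the original guide), you can run Docker's hello-world image, which pulls a small test container and prints a confirmation message:

    $ sudo docker run hello-world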

Install single node Kubernetes cluster

Follow the steps below to install a single node Kubernetes cluster:

  • Use the following commands to update the package list and upgrade the installed packages.

    $ sudo apt-get update
    $ sudo apt-get upgrade
  • Add the Kubernetes package signing key.

    $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Add the Kubernetes repository by creating the repository source list file (run this as root, or prefix the commands with sudo).

    $ sudo touch /etc/apt/sources.list.d/kubernetes.list
  • Use vi or any other editor to add the following line to the file created above.

    $ sudo vi /etc/apt/sources.list.d/kubernetes.list
    deb http://apt.kubernetes.io/ kubernetes-xenial main
  • Update the package list and install the packages required to run Kubernetes.

    $ sudo apt-get update
    $ sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
  • Next, initialize the pod network, which is required for pods to communicate with each other. Several pod network add-ons are available; in this example we use Flannel.

  • Bridged IPv4 traffic must be passed to the iptables chains; this is required for CNI plugins to work. Run the command below.

    $ sudo sysctl net.bridge.bridge-nf-call-iptables=1
  • Run the following command to initialize the cluster with kubeadm, passing the pod network CIDR used by Flannel.

    $ kubeadm init --pod-network-cidr=10.244.0.0/16

After running the above command as root, kubeadm prints instructions for configuring kubectl access to the cluster.

  • Run the commands from that output to configure kubectl access for your user:

    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Run the command below to apply the Flannel pod network.

    $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
  • The output may include some warnings; these will not prevent the pods from coming up. Follow the steps below to verify the cluster.

  • Run the command below to check the pods.

    $ kubectl get pods --all-namespaces
  • The output lists the pods in all namespaces; the kube-system pods should eventually reach the Running state.

  • Check the node as well. Since this is a single node Kubernetes cluster, the only node is the master node by default. Run the command below to check the node.

    $ kubectl get nodes
  • The node status may show as NotReady. By default the cluster does not schedule pods on the master node for security reasons, so on a single node cluster remove the master taint by running the following command, which usually resolves the issue.

    $ kubectl taint nodes --all node-role.kubernetes.io/master-
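
To verify the result (this check is not part of the original guide), confirm that the node now reports Ready and that the master taint has been removed:

    $ kubectl get nodes
    $ kubectl describe node | grep -i taint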

Set up the Kubernetes cloud provider

After setting up the Kubernetes cluster environment, you must create the Halyard deployment YAML and the Minio deployment YAML (both files are shared at the end of this chapter). Follow the steps below to get started.

  • Confirm that the kubeconfig file is placed in the ~/.hal folder as config; this file will be used to access the Kubernetes cluster.

  • After you have downloaded or created the deployment YAMLs mentioned above, create a namespace for them. Here a namespace called ‘spinnaker’ is created.

    # kubectl create namespace spinnaker
  • Verify that the namespace was created by running # kubectl get namespace.

  • Set the config context. Run the following command to check the current-context.

    # kubectl config current-context
  • The output shows the name of the current context.

  • Use the command below to set the namespace on the current context (use the namespace you created above).

    # kubectl config set-context $(kubectl config current-context) --namespace=<namespace>
  • Use the following commands to apply the Minio and Halyard deployments.

    # kubectl apply -f minio-deployment.yml
    # kubectl apply -f halyard-deployment.yml
  • The output confirms that both deployments were created.

  • Use the following command to confirm the deployment.

    # kubectl get deploy
  • Use the following command to check the pods for the above deployments.

    # kubectl get pods --all-namespaces
  • To use this same Kubernetes cluster as the target cloud provider, copy the kubeconfig file into the Halyard pod with the following command.

    # kubectl cp ~/.kube/config <some-namespace>/<some-pod>:/tmp/kubeconfig
  • For example, if the namespace is ‘spinnaker’ and the Halyard pod is named ‘spin-halyard-76557f97d9-5lxp7’, the command will look like this:

    # kubectl cp ~/.kube/config spinnaker/spin-halyard-76557f97d9-5lxp7:/tmp/kubeconfig
  • Check the /tmp folder after logging into the Halyard pod; the steps for logging into the Halyard pod are listed in the next section, and a quick check without logging in is sketched below.
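
As that quick check (not part of the original guide; it assumes the example namespace and pod name used above), list the pod's /tmp directory with kubectl exec:

    # kubectl exec -n spinnaker -it spin-halyard-76557f97d9-5lxp7 -- ls -l /tmp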

Spinnaker configuration using hal commands

Follow the instructions below to configure the Spinnaker services using the hal commands provided by the Halyard service.

  • Run the command below to log in to the Halyard pod.

    # kubectl exec -it <halyard-pod> -- /bin/bash

    For example: # kubectl exec -it spin-halyard-76557f97d9-5lxp7 -- /bin/bash
  • Run the below command to check the configuration.

    # hal config | more
  • The output displays the current deployment configuration maintained by Halyard, including the Spinnaker version to deploy.

  • Execute the hal command below to select the Spinnaker version (the available versions can be listed as sketched after this step).

    # hal config version edit --version <spinnaker-version>
    
    For example: # hal config version edit --version 1.24.3
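
If you are unsure which versions are available, Halyard can list them. This step is not in the original guide, but hal version list is a standard Halyard command:

    # hal version list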

Edit Storage providers

You need to edit the storage provider settings. In this example, we are using S3 (backed by Minio) as the storage provider. To do so, follow the steps below:

  • Run the following command.

    # hal config storage s3 edit --endpoint http://minio-service:9000 --access-key-id testingtest --secret-access-key --bucket lftestbucket

When prompted for the secret-access-key, provide "testingtest". These credentials are also defined in minio-deployment.yml. The --bucket flag specifies the name of a storage bucket that your specified account has access to; here it is set to lftestbucket. If you don't specify a name, one is chosen at random, and Halyard will create the bucket for you if you specify a globally unique bucket name that doesn't exist yet.

  • Run the following command.

    # hal config storage s3 edit --path-style-access true
  • Run the following command to enable the S3-compatible storage. The earlier warnings about deployment failure are resolved once the storage type is set.

    # hal config storage edit --type s3

  • Minio does not support versioning objects, so run the command below to disable versioning in Spinnaker.

    # mkdir -p ~/.hal/default/profiles && echo "spinnaker.s3.versioning: false" > ~/.hal/default/profiles/front50-local.yml
  • Configure the target cloud provider, i.e. Kubernetes, and enable it using the next two commands.

    # hal config provider kubernetes account add k8s-v2-spin --provider-version v2 --kubeconfig-file /tmp/kubeconfig
    # hal config provider kubernetes enable
  • Run the command below to configure the Spinnaker deployment mode.

    # hal config deploy edit --type Distributed --account-name k8s-v2-spin --location <namespace where you would like to deploy>

For example: hal config deploy edit --type Distributed --account-name k8s-v2-spin --location linuxfoundation

  • The namespace 'linuxfoundation' used in the example above was created with the command below.

    # kubectl create namespace linuxfoundation
  • Run the command below to kick-start the Spinnaker services.

    # hal deploy apply
  • Halyard prints a success message when the deployment has been applied.

Note: You will notice pods coming up for the various Spinnaker services. It will take some time for all of the pods to reach the Running state; you can watch their progress as sketched below.
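
To watch the pods come up (this command is not part of the original guide and assumes the 'linuxfoundation' namespace from the deploy example above):

    # kubectl get pods -n linuxfoundation --watch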

  • Expose the Deck and Gate endpoints locally to access Spinnaker.

    kubectl port-forward spin-deck 9000:9000
    kubectl port-forward spin-gate 8084:8084
  • Now open the Spinnaker UI in a browser.

    http://localhost:9000
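
In a Distributed installation the Spinnaker pods carry generated name suffixes, so it can be easier to port-forward the spin-deck and spin-gate Services instead. A sketch, assuming the services were deployed to the 'linuxfoundation' namespace used in the example above:

    kubectl -n linuxfoundation port-forward svc/spin-deck 9000:9000
    kubectl -n linuxfoundation port-forward svc/spin-gate 8084:8084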

The minio-deployment.yml and halyard-deployment.yml files used above are shared below.

minio-deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-deployment
  labels:
    app: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: docker.io/minio/minio:latest
        args:
        - server
        - /storage
        env:
        - name: MINIO_ACCESS_KEY
          value: "testingtest"
        - name: MINIO_SECRET_KEY
          value: "testingtest"
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  ports:
    - port: 9000    
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
  type: ClusterIP
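
After applying this manifest, you can confirm that the Deployment and the Service backing the http://minio-service:9000 endpoint used earlier exist (this check is not part of the original guide):

    # kubectl get deploy minio-deployment
    # kubectl get svc minio-service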

halyard-deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spin-halyard
  labels:
    app: spin
    stack: halyard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spin
      stack: halyard
  template:
    metadata:
      labels:
        app: spin
        stack: halyard
    spec:
      containers:
      - name: halyard-daemon
        image: gcr.io/spinnaker-marketplace/halyard:stable
        imagePullPolicy: Always

        readinessProbe:
          exec:
            command:
            - wget
            - -q
            - --spider
            - http://localhost:8064/health
        ports:
        - containerPort: 8064
---
apiVersion: v1
kind: Service
metadata:
  name: spin-halyard
spec:
  ports:
    - port: 8064
      targetPort: 8064
      protocol: TCP
  selector:
    app: spin
    stack: halyard
  type: ClusterIP
---
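
The readinessProbe above polls Halyard's health endpoint on port 8064. As a quick manual check (not part of the original guide; the pod name is the example one used earlier), you can hit the same endpoint from inside the pod, since wget is already present in the Halyard image:

    # kubectl exec -it spin-halyard-76557f97d9-5lxp7 -- wget -q -O- http://localhost:8064/health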
