Spinnaker Pipeline

Overview

We will discuss Spinnaker pipelines and learn how to create, manage, and trigger pipelines, as well as how to use pipeline expressions.

By the end of this chapter, you should be able to:

  • Create Spinnaker pipelines

  • Trigger Spinnaker pipelines

  • Configure Spinnaker pipelines

What Is a Spinnaker Pipeline?

In the world of CI/CD, pipelines play a very important role. The success/failure of your CI/CD implementation depends on how you design your pipeline and what actions your pipeline is capable of taking. Let’s understand what a pipeline is.

When implementing CI/CD, there are many discrete tasks that we end up performing, such as checking out the code from the source code repository, building it, storing the binaries in a binary repository, performing checks and scans on the binaries, deploying these binaries to the target test environment, performing automated testing, etc. Though these actions are discrete, they should be performed in a proper sequence. The thread that binds these discrete activities together in a particular sequence is termed a pipeline. Depending on what activities it performs, it could be a QA pipeline or a Production pipeline. Repeatability is the most obvious advantage of binding these actions into a pipeline: every execution of the pipeline performs the actions in the very same order, without deviation.

A pipeline therefore consists of a set of actions together with the sequence in which those actions should be executed. The ability to create and manage pipelines is thus the most important factor to consider when selecting a CD tool.

As in other CD tools, the pipeline is one of the main components of Spinnaker. It comprises a sequence of actions based on your custom requirements, which makes a pipeline consistent, flexible, and repeatable. Spinnaker also stands out from other CD tools in how easily it lets users add, view, and modify the different stages of a pipeline. Deployment parameters can be passed from one stage to another through the pipeline.

Spinnaker comes bundled with a set of predefined stages that you can add to your pipeline, and it also lets users add custom stages. Executing a pipeline is also known as triggering a pipeline. You can trigger a pipeline manually or configure it to trigger automatically based on an event, such as the completion of a Jenkins job or a CRON schedule.

You can also configure Spinnaker pipelines to send you notifications by email, Slack, or SMS at various points of pipeline execution (for example, when the pipeline starts, completes, or fails). In this chapter you will learn in detail how to create a pipeline in Spinnaker, how to add stages (actions) to the pipeline, how to trigger a pipeline, and more.
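
Such notifications are stored as entries in the pipeline's JSON definition. Below is a minimal sketch, assuming a Slack notification to a placeholder channel; the field names follow the standard Spinnaker pipeline JSON layout, but verify them against your Spinnaker version:

"notifications": [
  {
    "type": "slack",
    "address": "#deployments",
    "level": "pipeline",
    "when": ["pipeline.starting", "pipeline.complete", "pipeline.failed"]
  }
]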

Setting Up Spinnaker Pipelines

Create a Pipeline

A pipeline is always associated with an application/service that you want to build and deploy. Follow the steps below to create a pipeline.

  • Select the application for which you want to create the pipeline. After selecting, click the Pipelines tab as shown below.

  • Click Create as shown in the image below.

  • Select Pipeline in the Group by drop-down. Then enter the pipeline name.

You have now created a blank pipeline. After creating the pipeline, you need to add a stage to it. A stage is a set of actions you want the pipeline to perform when it executes.

Add a Stage

The first stage of any pipeline is the Configuration stage in which you can add a trigger and set up the parameters. Follow the steps below to add the first stage.

  • Select Add Stage as shown below.

  • Set the type of stage using the drop-down menu as shown below:

  • As mentioned before, a pipeline is a set of stages bound in a particular sequence. If the stage you're adding is not the first stage, it should be added after some existing stage. The Depends On field is used to set this stage's dependency on the required upstream stage(s); see the JSON sketch below.

You can add as many stages to the pipeline as needed, in any order. Next, we'll look at how to add a trigger.
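
Under the hood, this ordering is captured in the pipeline's JSON: each stage carries a refId, and its Depends On selection is recorded in requisiteStageRefIds. A minimal sketch with two illustrative Wait stages (names and values are placeholders):

"stages": [
  { "refId": "1", "type": "wait", "name": "first-stage", "waitTime": 30, "requisiteStageRefIds": [] },
  { "refId": "2", "type": "wait", "name": "second-stage", "waitTime": 30, "requisiteStageRefIds": ["1"] }
]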

Add a Trigger

Follow the steps below to add a trigger:

  • Make sure you are editing the Configuration stage of the pipeline as shown below.

  • Click Add Trigger as shown below.

  • Select the type of trigger from the drop-down menu.

We will learn more about the different types of triggers in detail in the upcoming chapter.
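
For reference, an automated trigger is also stored as an entry in the pipeline's JSON. A sketch of a CRON trigger that fires every day at 2 AM (the schedule is illustrative; field names follow the standard format):

"triggers": [
  {
    "type": "cron",
    "cronExpression": "0 0 2 * * ?",
    "enabled": true
  }
]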

Run a Pipeline Manually

  • Click the Start Manual Execution button to run a pipeline manually as shown below.

  • A confirmation dialog box will appear. You can enter the required parameters and execute the pipeline.

The parameters depend on the pipeline: a simple pipeline with no parameters configured will not prompt for any, while a pipeline that has parameters configured requires you to enter them. We will see examples of each.

Simple pipeline

  • For manual execution, we have configured a simple pipeline that deploys nginx, and we are not passing any parameters.

  • In the above screenshot we can see a pipeline named 'LF-test-pipeline'. No changes have been made to the Configuration stage. A Deploy (Manifest) stage named deploy-nginx has been configured, which deploys a Kubernetes manifest YAML/JSON file; the screenshot is shown below. The details of adding stages are covered in previous sections.

  • In Manifest Configuration, we chose a text-based manifest as the Manifest Source and pasted in the Kubernetes manifest YAML file that deploys nginx (the full manifest appears at the end of this example).

  • Save the configuration by clicking Save Changes at the bottom right of the screen, as shown in the screenshot below.

  • After saving the pipeline, return to the screen that lists all of the pipelines configured for the application by selecting the Pipelines tab at the top left of the screen, as shown below.

The pipeline that we have configured is under the testapp application, as shown in the screenshot. Configuring an application was explained in previous lessons.

  • Click the PIPELINES option and you will see the screen below.

  • Click on Start Manual Execution to run this pipeline.

    We'll get a confirmation window, and we'll see that we don't need to pass any parameters to run the pipeline, as we have not configured any.

In the above screenshot, the dialog asks us to select execution parameters; however, as mentioned, no parameters are configured, so we do not need to pass any.

  • Click on Run and the pipeline will be executed.

Kubernetes manifest YAML file for the nginx deployment:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
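
For reference, the deploy-nginx stage configured above corresponds roughly to a Deploy (Manifest) entry like the following in the pipeline JSON. This is only a sketch: the account name and moniker are placeholders, and the manifests array (left empty here) would hold the JSON form of the Service and Deployment shown above.

{
  "refId": "1",
  "type": "deployManifest",
  "name": "deploy-nginx",
  "cloudProvider": "kubernetes",
  "account": "my-k8s-account",
  "source": "text",
  "manifests": [],
  "moniker": { "app": "testapp" }
}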

Pipeline with Parameter

We have now configured another pipeline, LF-test-pipeline-1, which is a little more complicated than the previous one: it deploys Minio. We've added three stages to this deployment, as shown in the screenshot below.

All the stages after Configuration are Deploy (Manifest) stages; as in the previous example, they use Kubernetes manifest YAML/JSON files for deployment. Below is a screenshot of the last stage's configuration.

We have configured the last stage, deploy-minio, which depends on the previous two stages.

As we can see, the Depends On option lets us select the stages this stage depends on; in our case, 'miniosecret' and 'minioservice'.

Apart from Configuration, all three stages use a text-based Manifest Source, in this case Kubernetes YAML/JSON, as in the previous simple pipeline example.

The interesting part here is that we will pass parameters to this pipeline. We want to deploy Minio to a particular namespace, and we will configure these details in the Configuration stage.

Select the Configuration stage and then the PARAMETERS option, as shown in the screenshot below.

Click the Add Parameter button and fill in the details of the parameter we want to pass. In our case, we are passing the namespace as a parameter. In our Spinnaker setup, two namespaces are configured: develop and prod.

We have kept the Default Value as develop and enabled Show Options, which lets us change the value we would like to pass before running the pipeline. After configuring the parameters, click Save Changes.
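
In the pipeline JSON, this parameter definition is stored under parameterConfig. A sketch of what our namespace parameter might look like (field names follow the standard pipeline JSON format; the description is illustrative):

"parameterConfig": [
  {
    "name": "namespace",
    "label": "Namespace",
    "default": "develop",
    "required": false,
    "hasOptions": true,
    "options": [
      { "value": "develop" },
      { "value": "prod" }
    ],
    "description": "Target namespace for the Minio deployment"
  }
]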

The Kubernetes YAML files we are using for the Minio deployment use the pipeline expression '${parameters.namespace}' for the namespace field, which resolves to the parameter value we pass at execution time.

We'll return to our pipeline page, as before.

We can see both of the pipelines that we have configured so far. We will run the second pipeline in the screenshot, i.e., LF-test-pipeline-1.

Click on Start Manual Execution and the below screen will appear.

We can see the Namespace parameter that we have configured, with the develop namespace selected by default. Since we configured another option for the Namespace parameter, we can select it by clicking the downward triangle icon in the values window.

We can change the parameter depending on our requirement. Once we have selected the desired Namespace, click Run to start the pipeline.

Kubernetes manifest YAML files for the Minio deployment:
miniosecret.yml:
apiVersion: v1
data:
  password: b3BzbXgxMjM0NTYK
  username: b3BzbXgxMjM0NTYK
kind: Secret
metadata:
  name: minio-secret-dr
  namespace: '${ parameters.namespace }'
type: Opaque
minioservice.yml:
apiVersion: v1
kind: Service
metadata:
  name: minio-service-dr
  namespace: '${ parameters.namespace }'
spec:
  ports:
    - port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    app: minio
  type: ClusterIP
miniodeploy.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-deployment-dr
  namespace: '${ parameters.namespace }'
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - args:
            - server
            - /storage
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: username
                  name: minio-secret-dr
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  key: password
                  name: minio-secret-dr
          image: minio/minio
          name: minio
          ports:
            - containerPort: 9000
          volumeMounts:
            - mountPath: /storage
              name: storage
      volumes:
        - emptyDir: {}
          name: storage

Whether parameters are passed depends entirely on how the pipeline is configured, and other values can be passed as parameters depending on user/client requirements. A pipeline does not have to be built to take parameters as input; however, if the need arises, parameters can be passed to a pipeline as in the example above.

Disable a Pipeline

Disabling a pipeline prevents any of its triggers from firing. Manual execution is also not possible for a disabled pipeline.

Follow the steps below to disable a pipeline:

  • Click Configure to modify a pipeline as shown below.

  • Click the Pipeline Actions drop-down and select Disable as shown below.

Delete a Pipeline

Follow the steps below to delete a pipeline:

  • Click Configure to modify a pipeline as shown below.

  • Click the Pipeline Actions drop-down and select Delete as shown below.

Edit Pipeline as JSON

Whatever changes you make to a pipeline from the Spinnaker UI are converted to JSON. Hence, when you want to edit a pipeline, you can do so through the Spinnaker UI or by editing the pipeline's JSON directly. Follow the steps below.

  • Click Configure to modify a pipeline as shown below.

  • Click the Pipeline Actions drop-down and select Edit as JSON as shown below.
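
The editor shows the full pipeline definition. At a high level, the pipeline JSON ties together the pieces discussed in this chapter; a trimmed-down sketch might look like the following, where the empty arrays are where the stage, trigger, parameter, and notification entries shown earlier would live (values are illustrative):

{
  "application": "testapp",
  "name": "LF-test-pipeline-1",
  "keepWaitingPipelines": false,
  "limitConcurrent": true,
  "parameterConfig": [],
  "triggers": [],
  "stages": [],
  "notifications": []
}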

View and Restore Pipeline Versions

Whenever you make a change and save the pipeline, a new version is created and added to the version history. You can view the version history and if necessary, restore an older version.

Note: Version history is not supported when Minio or Redis is used as the storage backend.

Follow the steps below to view and restore the versions:

  • Click Configure to modify a pipeline as shown below.

  • Click the Pipeline Actions drop-down and select View Revision History.

  • A window appears, displaying the current version's JSON representation. Click the Revision drop-down to see older versions, as shown below.

  • If you want to compare against another version, select it from the Compare to drop-down menu as shown below.

  • After you select a version from the Revision drop-down menu, a Restore this version button appears. Click it to restore that particular version, as shown below.

Lock a Pipeline

If you lock a pipeline, it cannot be modified from the UI; you can still modify the pipeline using the API. Follow the steps below to lock a pipeline.

  • Click Configure to modify a pipeline as shown below.

  • Click the Pipeline Actions drop-down and select Lock as shown below.
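
For reference, a lock is recorded in the pipeline JSON roughly as follows (to the best of our understanding of the format; verify the field names against your Spinnaker version):

"locked": {
  "ui": true,
  "allowUnlockUi": true,
  "description": "Locked to prevent edits from the UI"
}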

Rename a Pipeline

Follow the steps below to rename a pipeline.

  • Click Configure to modify a pipeline as shown below.

  • Click the Pipeline Actions drop-down and select Rename as shown below.
