How to Create a Terraform Custom Job in Spinnaker

Configure and use the Terraform Custom Job stage

What you’ll learn:

  1. How to configure and set up a Terraform stage in Spinnaker as a custom job stage.

  2. How to execute Terraform operations through this stage.

Prerequisites:

  1. Understanding of Spinnaker and Spinnaker custom jobs.

  2. A running Spinnaker instance with a Kubernetes cluster account configured.

    • For detailed information about Spinnaker custom jobs, see the Spinnaker documentation.

    • At OpsMx we have developed TerraSpin, an open-source microservice that integrates seamlessly with Spinnaker through the custom job mechanism. It adds three stages to Spinnaker (TSPlanJob, TSApplyJob, and TSDestroyJob), each with its own input fields and output.

Configuration Steps:

Execute the steps below to configure Spinnaker with the TerraSpin custom job.

  1. Create a file ‘artifactaccounts.json’, add the content described below, and replace the values according to your artifact account; a minimal sample follows the note below. The artifactaccounts.json file contains the account details from which the job pulls Terraform code. The fields are:

    • accountname: This can be any descriptive name without spaces.

    • artifacttype: GitHub, S3, or any of the supported artifact repositories.

    • username and password: The credentials used to access the repo that will be specified in the stage input.

Note: Ensure the Git username and password do not contain the ‘@’ character.
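
A minimal sketch of artifactaccounts.json, assuming a single GitHub account (the field names follow the descriptions above, but their exact casing and the values are placeholders to adjust per the TerraSpin documentation):

[
  {
    "accountname": "mygithubaccount",
    "artifacttype": "github",
    "username": "placeholder-user",
    "password": "placeholder-token"
  }
]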

2. Create a ConfigMap from which the TerraSpin job will read the information provided in artifactaccounts.json:

kubectl create cm terraspinbackendconfig --from-file=artifactaccounts.json -n default

Here, ‘default’ is the namespace in which the TerraSpin jobs are planned to run. Ensure that access to this namespace is restricted, as credentials are available here.
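
To confirm the ConfigMap was created with the file inside, you can run:

kubectl get cm terraspinbackendconfig -n default -o yaml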

3. Create a file orca-local.yml with the TerraSpin custom job stage definitions and replace ALL occurrences of the following (a trimmed sketch of the file appears after this list):

  • account: Name of the Spinnaker Kubernetes account.

  • application: Name of the application in Spinnaker.

  • credentials: The same Kubernetes account name provided above.

  • namespace (under metadata): Namespace where the Terraform job should run. This should be the same namespace in which the terraspinbackendconfig ConfigMap was created in step 2 above.
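
For orientation, a heavily trimmed sketch of the shape this file takes, based on Spinnaker's preconfigured Kubernetes job mechanism (the entries for TSApplyJob and TSDestroyJob follow the same pattern; the manifest body is elided because it ships with TerraSpin):

job:
  preconfigured:
    kubernetes:
      - label: TSPlanJob
        type: TSPlanJob
        cloudProvider: kubernetes
        account: <spinnaker-kubernetes-account>       # replace
        credentials: <spinnaker-kubernetes-account>   # replace with the same account name
        application: <spinnaker-application>          # replace
        waitForCompletion: true
        manifest:
          apiVersion: batch/v1
          kind: Job
          metadata:
            namespace: default                        # replace: namespace from step 2
          spec:
            # TerraSpin job pod spec (container image, env, parameters) goes here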

4. Navigate to the following location in the Halyard pod or machine:

~/.hal/default/profiles

Copy the orca-local.yml file created above into this directory. If an orca-local.yml already exists, append the contents as appropriate.
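
For example, if Halyard runs in a pod (the pod name, namespace, and in-pod path below are assumptions; adjust to your setup):

kubectl cp orca-local.yml <spinnaker-namespace>/<halyard-pod>:/home/spinnaker/.hal/default/profiles/

On a Halyard machine, a plain cp orca-local.yml ~/.hal/default/profiles/ suffices.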

5. Execute the following in the Halyard pod or on the Halyard machine:

hal deploy apply 

At this point, the Spinnaker configuration for the Terraform Custom Job stages is complete. Please wait for all pods to restart and stabilize.
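
You can watch the restart with, for example:

kubectl get pods -n <spinnaker-namespace> -w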

Executing Terraform Scripts via the custom job stage

Once the Spinnaker configuration for the Terraform Custom Job stage is complete, the following stages should be available in the pipeline configuration:

  1. TSPlanJob

  2. TSApplyJob

  3. TSDestroyJob

TSPlanJob:

This stage performs the initial formal run of the Terraform infra code (terraform init and terraform plan). This stage has the following six inputs.

  1. Tf Script Account: This must be one of the account names defined in artifactaccounts.json. Choose the account where your Terraform script is present.

  2. Terraform plan:

    • This is the location of the Terraform script.

    • Provide the location in the form ‘username/repo-name.git//folder’.

    Note: The two slashes (//) separate the repo from the folder containing the Terraform root module script. E.g. for GitHub: OpsMx/staging-terraform-pm.git//azure/k8cluster. The credentials for accessing this repo were provided in artifactaccounts.json.

  3. Override file (optional): If present, the file specified here will be applied to the root module. A possible use case is providing a tfvars file.

    Note: If you would like to skip this option, ensure the field is empty by removing the help text in it.

  4. Tf state account: This must be one of the account names defined in artifactaccounts.json. Choose the account where you want to store the tf state.

  5. State repo: This is the repo where the Terraform state files are stored and retrieved across multiple stages, such as between plan and apply, or apply and destroy. This is mandatory for the Terraform Custom Job stage to function.

    Note: As state information can contain credentials in plain text, access to this repo should be controlled. The same account name and credentials will be used to access this repo (e.g. opsmx/staging-terraform-states.git).

  6. UUId: This can be any unique string of your choice to identify the Terraform state across multiple stages. It is not mandatory to have all the stages (TSPlan, TSApply, TSDestroy) in the same pipeline; however, they must all use the same UUId.

Output: This stage will show terraform init and plan command output.
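
For intuition, assuming a GitHub script repo, the stage performs something roughly equivalent to the following (the actual mechanics run inside the TerraSpin job container; the repo and folder are the example values from above):

git clone https://github.com/OpsMx/staging-terraform-pm.git
cd staging-terraform-pm/azure/k8cluster
terraform init    # initialize providers and modules
terraform plan    # produce the plan shown in the stage output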

TSApplyJob:

The functionality of this stage is to apply the Terraform infra code (terraform apply). The stage output contains properties with the Terraform output values in key-value format, so that those values can be used in subsequent pipeline stages; see the expression example after the output below. This stage has the following five inputs.

  1. Artifact account

  2. Override file

  3. Tf State Account

  4. State repo

  5. UUId

Output: This stage will show terraform apply command output.
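
For example, if the Terraform code declares an output named instance_ip (a hypothetical name), a later stage could reference it with a Spinnaker pipeline expression along these lines:

${#stage("TSApplyJob")["outputs"]["instance_ip"]}

The exact key depends on the outputs your Terraform code declares.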

TSDestroyJob:

The functionality of this stage is to destroy the Terraform-managed infrastructure (terraform destroy). This stage has the following three inputs.

  1. Tf State Account

  2. State repo

  3. UUId

Output: This stage will show the terraform destroy command output.
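
Conceptually, the stage retrieves the state identified by the UUId from the state repo and runs the equivalent of:

terraform destroy    # tears down the resources recorded in the retrieved state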
