# Sending Build and Deployment Events to SSD

This page explains how to send build metadata, artifact details, and deployment information from an AWS CodeBuild / CodeDeploy pipeline to the SSD (Security, Safety & Delivery) Scanner using API calls. It covers:

* Required AWS environment variables
* Steps to add after pushing images to Artifactory/ECR
* Correct Git URL formatting
* SSD configuration (Teams, Integrators, Tokens)

### Prerequisites

The AWS Pipeline must be able to:

* Build the application
* Push Docker images to Artifactory / ECR

{% hint style="info" %}
If the data needs to be mapped to a specific team, creating a team is required; otherwise this field is optional and can be left empty. Refer to [Managing Teams and Access](https://docs.opsmx.com/opsmx-delivery-shield-platform/user-guide/manage-teams-and-access).
{% endhint %}

The Bitbucket and ECR integrators must be configured first. Refer to [Integrating BitBucket](https://docs.opsmx.com/opsmx-delivery-shield-platform/getting-started/integrating-ci-and-cd-tools-in-delivery-shield/bitbucket) and [Integrating ECR](https://docs.opsmx.com/opsmx-delivery-shield-platform/getting-started/integrating-registry-in-delivery-shield/ecr) for the steps to complete the process.

### Required AWS Environment Variables

The following environment variables are required in AWS CodeBuild:

| **Variable**     | **Description**                     |
| ---------------- | ----------------------------------- |
| SSD\_URL         | Base URL of the SSD instance        |
| SSD\_TEAM\_TOKEN | API token for team authentication   |
| GIT\_URL         | Repository URL (format shown below) |
| GIT\_BRANCH      | Branch being built                  |
| DOCKER\_IMAGE    | Pushed Docker image name            |
| DOCKER\_TAG      | Tag of the image                    |
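The snippets later on this page assume all of these variables are set. A fail-fast check at the top of the build phase can surface a missing variable early; the default values below are placeholders for illustration only — in CodeBuild the real values come from the project's environment variable settings.

```bash
# Placeholder defaults for illustration; CodeBuild injects the real values.
: "${SSD_URL:=https://ssd.example.com}"
: "${SSD_TEAM_TOKEN:=example-token}"
: "${GIT_URL:=https://bitbucket.org/example-org/example-repo.git}"
: "${GIT_BRANCH:=main}"
: "${DOCKER_IMAGE:=example-app}"
: "${DOCKER_TAG:=latest}"

# Fail the build early if any required SSD variable is unset or empty.
missing=0
for var in SSD_URL SSD_TEAM_TOKEN GIT_URL GIT_BRANCH DOCKER_IMAGE DOCKER_TAG; do
  eval "value=\${$var}"
  if [ -z "$value" ]; then
    echo "ERROR: required variable $var is not set" >&2
    missing=1
  fi
done
[ "$missing" -eq 0 ] || exit 1
echo "All required SSD variables are set."
```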

#### Mandatory Git URL Format

`https://bitbucket.org/<ORGANISATION_NAME>/<REPO_NAME>.git`

{% hint style="info" %}
If image name/tag variables are already configured in your environment, you can utilize those existing pipeline variables.
{% endhint %}
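To catch a malformed GIT\_URL before it reaches SSD, a simple shell check against the expected Bitbucket pattern can be added. The example URL below is a placeholder; in the pipeline GIT\_URL comes from the build environment.

```bash
# Placeholder default for illustration only.
GIT_URL="${GIT_URL:-https://bitbucket.org/example-org/example-repo.git}"

case "$GIT_URL" in
  https://bitbucket.org/*/*.git)
    echo "GIT_URL format looks correct: $GIT_URL" ;;
  *)
    echo "ERROR: GIT_URL must match https://bitbucket.org/<ORGANISATION_NAME>/<REPO_NAME>.git" >&2
    exit 1 ;;
esac
```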

### Pipeline Step: Sending Build Metadata to SSD

Add the following code immediately after pushing the image to Artifactory/ECR:

```
echo "Sending metadata to SSD Scanner..."

curl --location "${SSD_URL}/webhook/v1/ssd" \
  --header "Content-Type: application/json" \
  --header "X-OpsMx-Auth: ${SSD_TEAM_TOKEN}" \
  --data "{
    \"jobname\": \"${CODEBUILD_BUILD_ID}\",
    \"buildnumber\": \"${CODEBUILD_BUILD_NUMBER}\",
    \"joburl\": \"${CODEBUILD_BUILD_URL}\",
    \"gitcommit\": \"${CODEBUILD_RESOLVED_SOURCE_VERSION}\",
    \"builduser\": \"aws-codebuild\",
    \"giturl\": \"${GIT_URL}\",
    \"gitbranch\": \"${GIT_BRANCH}\",
    \"artifacts\": [
      {
        \"image\": \"${DOCKER_IMAGE}:${DOCKER_TAG}\"
      }
    ]
  }"

```
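Hand-escaping JSON inside a quoted string, as above, is easy to break. As an alternative sketch, the same payload can be built with `jq` (already used later on this page), which handles quoting automatically. Every value below is a placeholder default for illustration, and the final `curl` call is shown commented out — it is identical to the one in the step above.

```bash
# Placeholder defaults; CodeBuild injects the real values at build time.
CODEBUILD_BUILD_ID="${CODEBUILD_BUILD_ID:-demo-project:b1c2}"
CODEBUILD_BUILD_NUMBER="${CODEBUILD_BUILD_NUMBER:-42}"
CODEBUILD_BUILD_URL="${CODEBUILD_BUILD_URL:-https://console.aws.amazon.com/codebuild/builds/demo}"
CODEBUILD_RESOLVED_SOURCE_VERSION="${CODEBUILD_RESOLVED_SOURCE_VERSION:-abc123}"
GIT_URL="${GIT_URL:-https://bitbucket.org/example-org/example-repo.git}"
GIT_BRANCH="${GIT_BRANCH:-main}"
DOCKER_IMAGE="${DOCKER_IMAGE:-example-app}"
DOCKER_TAG="${DOCKER_TAG:-latest}"

# Build the webhook payload with jq instead of hand-escaped JSON.
PAYLOAD=$(jq -n \
  --arg jobname "$CODEBUILD_BUILD_ID" \
  --arg buildnumber "$CODEBUILD_BUILD_NUMBER" \
  --arg joburl "$CODEBUILD_BUILD_URL" \
  --arg gitcommit "$CODEBUILD_RESOLVED_SOURCE_VERSION" \
  --arg giturl "$GIT_URL" \
  --arg gitbranch "$GIT_BRANCH" \
  --arg image "${DOCKER_IMAGE}:${DOCKER_TAG}" \
  '{jobname: $jobname, buildnumber: $buildnumber, joburl: $joburl,
    gitcommit: $gitcommit, builduser: "aws-codebuild",
    giturl: $giturl, gitbranch: $gitbranch,
    artifacts: [{image: $image}]}')

echo "$PAYLOAD"

# Then send it exactly as in the step above:
# curl --location "${SSD_URL}/webhook/v1/ssd" \
#   --header "Content-Type: application/json" \
#   --header "X-OpsMx-Auth: ${SSD_TEAM_TOKEN}" \
#   --data "$PAYLOAD"
```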

### Login to ECR & Fetch Artifact SHA

To log in to ECR and fetch the artifact SHA, execute the following code:

```
# Replace AWS.REGION and AWS.ACCOUNT with your AWS region and account ID.
aws ecr get-login-password --region AWS.REGION \
  | docker login --username AWS --password-stdin AWS.ACCOUNT.dkr.ecr.AWS.REGION.amazonaws.com

echo "Fetching the Artifact SHA..."

ARTIFACT_SHA=$(docker manifest inspect ${DOCKER_IMAGE}:${DOCKER_TAG} --verbose | jq -r .Descriptor.digest)
echo "Artifact SHA: $ARTIFACT_SHA"

echo "Sleeping for 30 seconds..."
sleep 30
```
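Before moving on to data collection, it can be worth validating that a real digest was returned, since `docker manifest inspect` may produce an empty or malformed value if the image is not yet visible in the registry (this failure mode is an assumption, not documented SSD behavior). A minimal check, with a placeholder digest for illustration:

```bash
# Placeholder digest; in the pipeline ARTIFACT_SHA is set by `docker manifest inspect`.
ARTIFACT_SHA="${ARTIFACT_SHA:-sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08}"

if echo "$ARTIFACT_SHA" | grep -Eq '^sha256:[0-9a-f]{64}$'; then
  echo "Artifact SHA looks valid: $ARTIFACT_SHA"
else
  echo "ERROR: could not resolve a valid artifact digest (got: '$ARTIFACT_SHA')" >&2
  exit 1
fi
```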

### Trigger SSD Data Collection (with Retry Logic)

To trigger SSD data collection, execute the following code:

```
echo "Triggering Data Collection API..."

MAX_RETRIES=1000
RETRY_INTERVAL=10
attempt=1

while [ $attempt -le $MAX_RETRIES ]; do
  echo "Attempt #$attempt: Triggering Data Collection..."

  RESPONSE_FILE=$(mktemp)
  HTTP_STATUS=$(curl -s -o "$RESPONSE_FILE" -w "%{http_code}" \
    -X POST "${SSD_URL}/webhook/api/v1/datacollection" \
    -H "Content-Type: application/json" \
    -H "X-OpsMx-Auth: ${SSD_TEAM_TOKEN}" \
    -d "{
      \"artifactName\": \"${DOCKER_IMAGE}\",
      \"artifactTag\": \"${DOCKER_TAG}\",
      \"organizationName\": \"${ORGANISATION_NAME}\",
      \"artifactSha\": \"$ARTIFACT_SHA\"
    }")

  echo "HTTP Status: $HTTP_STATUS"
  cat "$RESPONSE_FILE"
  rm -f "$RESPONSE_FILE"

  if [ "$HTTP_STATUS" -eq 200 ]; then
    echo "Data Collection Triggered Successfully."
    break
  fi

  echo "Data Collection Failed. Retrying in $RETRY_INTERVAL seconds..."
  sleep $RETRY_INTERVAL
  attempt=$((attempt + 1))
done

if [ $attempt -gt $MAX_RETRIES ]; then
  echo "Timed out waiting for Data Collection API to return HTTP 200."
  exit 1
fi

```

To retrieve the ORGANISATION\_NAME value from the SSD Dashboard, follow these steps:

1. Go to **Setup**.
2. Navigate to **Access Management**.

{% hint style="info" %}
This information is required for ORGANISATION\_NAME.
{% endhint %}

### Firewall API (Policy Enforcement Before Deployment)

To call the Firewall API, execute the following code:

```
echo "Calling the Firewall API..."

RESPONSE=$(curl --silent --location "${SSD_URL}/ssdservice/v1/ssdFirewall" \
  --header "Content-Type: application/json" \
  --header "X-OpsMx-Auth: ${SSD_TEAM_TOKEN}" \
  --data "{
    \"teamName\": \"UPDATE.TEAM.NAME\",
    \"appName\": \"APP.NAME.IN.OPSMX.DASHBOARD\",
    \"account\": \"${BUILD_ENV}\",
    \"clusterName\": \"PROVIDE.ANY.VALUE\",
    \"image\": \"${DOCKER_IMAGE}:${DOCKER_TAG}\"
  }")

echo "Response: $RESPONSE"

ALLOW=$(echo "$RESPONSE" | jq -r '.allow')
PERMISSION=$(echo "$RESPONSE" | jq -r '.proceedWithPermissionCheck')
MESSAGE=$(echo "$RESPONSE" | jq -r '.message')

if [[ "$ALLOW" == "true" && "$PERMISSION" == "true" ]]; then
  echo "✅ SSD Firewall check passed."
else
  echo "$MESSAGE"
  echo "❌ SSD Firewall check failed. Exiting pipeline."
  exit 1
fi
```

| **Field**   | **Description**                       |
| ----------- | ------------------------------------- |
| teamName    | Must match the Team configured in SSD |
| appName     | Application name displayed in SSD UI  |
| account     | Must match the account name on the Clusters page |
| clusterName | Any user-defined cluster label        |
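The pass/fail logic can be exercised locally against a sample response before wiring it into the pipeline. The JSON below is an illustrative shape based on the fields parsed above, not a captured SSD response.

```bash
# Illustrative response shape only; real responses come from the
# /ssdservice/v1/ssdFirewall endpoint shown above.
RESPONSE='{"allow": true, "proceedWithPermissionCheck": true, "message": "policy checks passed"}'

ALLOW=$(echo "$RESPONSE" | jq -r '.allow')
PERMISSION=$(echo "$RESPONSE" | jq -r '.proceedWithPermissionCheck')
MESSAGE=$(echo "$RESPONSE" | jq -r '.message')

if [ "$ALLOW" = "true" ] && [ "$PERMISSION" = "true" ]; then
  echo "Firewall check would pass: $MESSAGE"
else
  echo "Firewall check would fail: $MESSAGE"
fi
```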

### Generating a Team Token in SSD

1. Click the name of the **Team** (shown as tabs in the Teams panel) for which you want to generate a token, as shown below:

<figure><img src="https://docs.opsmx.com/~gitbook/image?url=https%3A%2F%2F2047464521-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252F-MBEa1hoX6SqpDj-ymNs%252Fuploads%252Fr84o8pdF1L8yUfv8X6jJ%252Faccess%2520token%25201.png%3Falt%3Dmedia%26token%3Df8667fcd-45e4-44c5-9cbb-3edbb0b2a735&#x26;width=768&#x26;dpr=4&#x26;quality=100&#x26;sign=4dce077d&#x26;sv=2" alt=""><figcaption></figcaption></figure>

2. The details of the **Team** along with its **User Roles** are displayed.
3. Click **Generate Token** button as shown below:

<figure><img src="https://docs.opsmx.com/~gitbook/image?url=https%3A%2F%2F2047464521-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252F-MBEa1hoX6SqpDj-ymNs%252Fuploads%252Fxi6G8TLs9IodxtCrlcs6%252Faccess%2520token%25202.png%3Falt%3Dmedia%26token%3D9457bc02-a1e3-45e0-b549-ca030b6a85ab&#x26;width=768&#x26;dpr=4&#x26;quality=100&#x26;sign=a3cc75fd&#x26;sv=2" alt=""><figcaption></figcaption></figure>

4. A token is created and a success message is displayed as shown:

<figure><img src="https://docs.opsmx.com/~gitbook/image?url=https%3A%2F%2F2047464521-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252F-MBEa1hoX6SqpDj-ymNs%252Fuploads%252F0u10DYbX0qOXoug69Hyc%252Fimage.png%3Falt%3Dmedia%26token%3D1db6dccd-63a6-4028-a573-e45b1d4ef204&#x26;width=768&#x26;dpr=4&#x26;quality=100&#x26;sign=2e8ed800&#x26;sv=2" alt=""><figcaption></figcaption></figure>

5. Copy and store the token securely.

### Points to Remember

* SSD\_URL and SSD\_TEAM\_TOKEN must be defined as AWS CodeBuild environment variables
* The pipeline's IAM role must allow:
  * ECR authentication
  * `docker manifest inspect`
  * Outbound (external) API calls
* After completing the configuration:
  * Re-run the pipeline from the AWS console or via a PR/push event
  * Allow up to 5 minutes for the SSD Dashboard to show the latest results
  * Confirm there are no errors in the AWS build logs
