Cluster
In ISD Spinnaker, you can define Clusters, which are logical groupings of Server Groups.
A server group is a set of servers that act as a single deployment target for applications so that you don’t have to deploy applications to each server individually.
Deploying applications to servers in server groups provides redundancy so you can restore applications more seamlessly and quickly, with less downtime.
The base resource, the Server Group, identifies the deployable artifact (VM image, Docker image, source location) and basic configuration settings such as the number of instances, autoscaling policies, metadata, etc. This resource is optionally associated with a Load Balancer and a Firewall. When deployed, a Server Group is a collection of instances of the running software (VM instances, Kubernetes pods).
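For example, Spinnaker names a server group from its application, stack, and detail plus a version suffix, and a cluster groups all server groups that share that name. The sketch below is purely illustrative (application "myapp", stack "staging"):

```json
{
  "cluster": "myapp-staging",
  "serverGroups": [
    "myapp-staging-v000",
    "myapp-staging-v001"
  ]
}
```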
To create a server group, follow the instructions below. These instructions use Amazon ECS as an example.
Note: You must create and configure the cloud target in ISD prior to creating server groups. To create and configure cloud targets, refer here.
On the Application Dashboard, click the application for which you want to create a server group. You are redirected to the "Pipeline Status" page. Under "Infrastructure," click "Cluster," and then click "+Create Server Group," as shown in the image below.
After you click the "+Create Server Group" button, the cloud providers that have already been configured are displayed. Select "Amazon ECS" and click the "Next" button, as shown in the image below.
Update the required details in the following sections:
Update the following details on the above screen:
Detail: An optional free-form string appended to the server group name to further distinguish it.
Account: Select your Account Name from the drop-down menu.
Region: Select the region from the drop-down menu.
ECS Cluster name: The name of the ECS cluster (group of underlying EC2 hosts) onto which your application will be deployed. Note that suggestions here are dependent on the selected account and region combination.
Stack: An environment variable available within your container, on which you should base your application configuration at runtime (see the sketch after this list).
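For orientation, the values entered on this screen correspond roughly to a server group (deploy) configuration like the sketch below. The field names (account, region, ecsClusterName, stack, detail) and all values are illustrative assumptions, not the exact pipeline schema.

```json
{
  "account": "my-aws-account",
  "region": "us-west-2",
  "ecsClusterName": "my-ecs-cluster",
  "stack": "staging",
  "detail": "api"
}
```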
Update the following detail on the above screen:
Network Mode: awsvpc is the only networking mode that allows you to use Elastic Network Interfaces (ENIs). The default value maps to bridge on Linux and NAT on Windows.
Update the following detail on the above screen:
Task Definition source: The source of the ECS Task Definition. Task Definition contents can either be entered manually via input fields or loaded from a selected JSON file artifact. Artifact file contents should be structured as an ECS "RegisterTaskDefinition" request.
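If you use a JSON artifact, its contents follow the shape of an ECS "RegisterTaskDefinition" request. A minimal sketch, with placeholder names, image, and sizes:

```json
{
  "family": "myapp",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```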
Update the following details on the above screen:
Compute options: Specify either a launch type (default) or capacity providers for running your ECS service.
Launch Type: Launch service tasks on your own EC2 instances or on Fargate.
Desired capacity: The starting number of containers, before any autoscaling happens (see the capacity sketch after this list).
Minimum: The minimum number of containers you can reach as a result of autoscaling.
Maximum: The maximum number of containers you can reach as a result of autoscaling.
If available, use the previous server group's capacity: Checking this box will have the previous server group's capacity overwrite the new min, max, and desired capacity parameters if a previous server group exists.
If available, copy the previous server group's autoscaling policies: Copy Application Autoscaling policies and their associated alarms from the previous ECS service.
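The capacity settings above boil down to a desired count bounded by a minimum and a maximum. A hedged sketch of how that might appear in a deploy configuration (the field names are assumptions):

```json
{
  "capacity": {
    "min": 1,
    "desired": 2,
    "max": 4
  }
}
```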
Update the following details on the above screen:
Service Registries (optional): The AWS Cloud Map service discovery registries to assign to this service.
Registry: The AWS Cloud Map service to use for service discovery registration.
Port: The port to be used for your service discovery service. Required only for services using bridge or host network mode, and for services using awsvpc network mode with an SRV-type DNS record (see the sketch after this list).
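In the ECS API, these settings correspond to the serviceRegistries parameter of the CreateService request. A minimal sketch with a placeholder AWS Cloud Map service ARN and port:

```json
{
  "serviceRegistries": [
    {
      "registryArn": "arn:aws:servicediscovery:us-west-2:123456789012:service/srv-example",
      "port": 8080
    }
  ]
}
```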
Update the following details on the above screens:
Health Check Grace Period: How long, in seconds, a newly started container is kept alive even if it is failing load balancer health checks.
ECS IAM Instance Profile: The IAM role that your container (task, in AWS wording) will inherit. Define a role only if your application needs to access AWS APIs.
Docker Image Credentials: The AWS Secrets Manager secret that contains private registry credentials. Define credentials only for private registries other than Amazon ECR.
Fargate platform version: Defaults to the latest platform version.
Placement Strategy: The strategy the container scheduler will be using. See AWS documentation for more details. You should at least balance across availability zones. Custom placement strategies have not been implemented yet.
Placement Constraints: Rules for task placement. See AWS documentation for more details.
Type: To ensure that each task in a particular group is running on a different container instance, use distinctInstance. To restrict the selection to a group of valid candidates, use memberOf.
Expression: A cluster query language expression to apply to memberOf constraints. See AWS documentation for more details.
Docker labels (optional): Additional labels applied to your Docker container. This metadata can be used to identify your containers, or in conjunction with logging options. Maps directly to the --label Docker flag. Spinnaker automatically adds the spinnaker.servergroup, spinnaker.stack, and spinnaker.detail labels for non-null values.
Environment Variables (optional): The environment variables your container is deployed with. The SERVER_GROUP, CLOUD_STACK, and CLOUD_DETAIL environment variables are used during deployment to identify the task and cannot be set here (a combined example of these fields follows this list).
Tags (optional): The tags to apply to the task definition and the service.
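Several of the fields above map onto standard ECS request parameters, which may help when reading the resulting task definition and service. The fragments below are illustrative placeholders: the first is a service-level (CreateService) fragment showing the health check grace period, a placement strategy, a memberOf constraint expression, and tags; the second is a container-definition fragment showing private registry credentials, Docker labels, and environment variables.

```json
{
  "healthCheckGracePeriodSeconds": 120,
  "placementStrategy": [
    { "type": "spread", "field": "attribute:ecs.availability-zone" }
  ],
  "placementConstraints": [
    { "type": "memberOf", "expression": "attribute:ecs.instance-type =~ t3.*" }
  ],
  "tags": [
    { "key": "team", "value": "platform" }
  ]
}
```

```json
{
  "repositoryCredentials": {
    "credentialsParameter": "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-registry-creds"
  },
  "dockerLabels": {
    "spinnaker.servergroup": "myapp-staging-v001",
    "spinnaker.stack": "staging"
  },
  "environment": [
    { "name": "LOG_LEVEL", "value": "info" }
  ]
}
```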
Update the following details on the above screen:
Container Image: Select the container image the named container should run.
Compute Units: The guaranteed minimum amount of compute capacity your container will be able to use. 1024 units equal 1 AWS virtual CPU (vCPU). If other containers on the underlying host are not using their reserved compute capacity, this container can use it.
Reserved Memory: The maximum amount of memory, in megabytes, that your container can use. Exceeding this amount may result in the termination of your container. 1024 MB = 1 GB (see the sketch after this list).
Target Group Mappings: The list of target groups through which the ECS service will receive load balancer traffic. Each target group is mapped to a container name and port within the Task Definition to specify which container should be registered to the target group.
Target group: A target group is attached to an application/network load balancer and is a target for load balancer traffic. You need to create both the application load balancer and target groups prior to being able to use them in your pipeline.
Target port: The port on which your application is listening for incoming traffic.
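For illustration, compute units and reserved memory are per-container settings in the task definition (512 units is half a vCPU; 1024 MB is 1 GB), while each target group mapping corresponds to an entry in the loadBalancers parameter of the ECS CreateService request. The names and ARN below are placeholders.

```json
{
  "containerDefinitions": [
    {
      "name": "myapp",
      "cpu": 512,
      "memory": 1024
    }
  ]
}
```

```json
{
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/myapp-tg/0123456789abcdef",
      "containerName": "myapp",
      "containerPort": 8080
    }
  ]
}
```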
Update the following detail on the above screen:
Log driver (Optional): The container's logging driver. This directly maps to the --log-driver Docker flag.
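In the task definition, the log driver is expressed per container as a logConfiguration block. A minimal sketch using the awslogs driver (the log group, region, and stream prefix are placeholders):

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/myapp",
      "awslogs-region": "us-west-2",
      "awslogs-stream-prefix": "myapp"
    }
  }
}
```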
If you have a service with running tasks and want to redeploy it with the same configuration, you can use the "Force Restart" option. Force Restart stops all currently running tasks and then launches new tasks using the same configuration. To do so, follow the instructions below:
Click the server group; the "Server Group Actions" option appears on the right side. Select "Force Restart" from the Server Group Actions drop-down menu.
A confirmation box appears, asking you to confirm that you want to proceed with the force restart; you can optionally provide a reason.
After you confirm, the force restart is performed and the service is redeployed with the same configuration.