Cluster

Reference doc for the `sst.aws.Cluster` component.

The Cluster component lets you create a cluster of containers and add services to them. It uses Amazon ECS on AWS Fargate.

Create a Cluster

const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

Add a service

cluster.addService("MyService");

Add a public custom domain

cluster.addService("MyService", {
public: {
domain: "example.com",
ports: [
{ listen: "80/http" },
{ listen: "443/https", forward: "80/http" },
]
}
});

Enable auto-scaling

cluster.addService("MyService", {
scaling: {
min: 4,
max: 16,
cpuUtilization: 50,
memoryUtilization: 50,
}
});

Link resources to your service. This will grant permissions to the resources and allow you to access them in your app.

const bucket = new sst.aws.Bucket("MyBucket");

cluster.addService("MyService", {
  link: [bucket],
});

If your service is written in Node.js, you can use the SDK to access the linked resources.

app.ts
import { Resource } from "sst";
console.log(Resource.MyBucket.name);

Constructor

new Cluster(name, args, opts?)

Parameters

  • name string
  • args ClusterArgs
  • opts? ComponentResourceOptions

ClusterArgs

transform?

Type Object

Transform how this component creates its underlying resources.

transform.cluster?

Type ClusterArgs | ((args: ClusterArgs) => void)

Transform the ECS Cluster resource.
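
For example, a minimal sketch of a transform that enables Container Insights on the underlying ECS cluster (this assumes the Pulumi aws.ecs.Cluster "settings" argument; adapt it to your needs):

const vpc = new sst.aws.Vpc("MyVpc");

new sst.aws.Cluster("MyCluster", {
  vpc,
  transform: {
    cluster: (args) => {
      // "settings" is the Pulumi aws.ecs.Cluster argument for cluster settings
      args.settings = [{ name: "containerInsights", value: "enabled" }];
    }
  }
});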

vpc

Type Input<Object>

The VPC to use for the cluster.

{
  vpc: {
    id: "vpc-0d19d2b8ca2b268a1",
    publicSubnets: ["subnet-0b6a2b73896dc8c4c", "subnet-021389ebee680c2f0"],
    privateSubnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"],
    securityGroups: ["sg-0399348378a4c256c"],
  }
}

Or create a Vpc component.

const myVpc = new sst.aws.Vpc("MyVpc");

And pass it in.

{
  vpc: myVpc
}

vpc.id

Type Input<string>

The ID of the VPC.

vpc.privateSubnets

Type Input<Input<string>[]>

A list of private subnet IDs in the VPC. The service will be placed in the private subnets.

vpc.publicSubnets

Type Input<Input<string>[]>

A list of public subnet IDs in the VPC. If a service has public ports configured, its load balancer will be placed in the public subnets.

vpc.securityGroups

Type Input<Input<string>[]>

A list of VPC security group IDs.

Properties

nodes

Type Object

The underlying resources this component creates.

nodes.cluster

Type Cluster

The Amazon ECS Cluster.
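
For example, a minimal sketch of reading an output off the underlying ECS Cluster resource, such as its ARN (an output of the Pulumi aws.ecs.Cluster resource):

const cluster = new sst.aws.Cluster("MyCluster", { vpc });

// Reference the underlying aws.ecs.Cluster resource
const clusterArn = cluster.nodes.cluster.arn;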

Methods

addService

addService(name, args?)

Parameters

  • name string
  • args? ClusterServiceArgs

Returns Service

Add a service to the cluster.

cluster.addService("MyService");

Set a custom domain for the service.

cluster.addService("MyService", {
domain: "example.com"
});

Enable auto-scaling

cluster.addService("MyService", {
scaling: {
min: 4,
max: 16,
cpuUtilization: 50,
memoryUtilization: 50,
}
});

ClusterServiceArgs

architecture?

Type Input<x86_64 | arm64>

Default “x86_64”

The CPU architecture of the container in this service.

{
  architecture: "arm64"
}

cpu?

Type 0.25 vCPU | 0.5 vCPU | 1 vCPU | 2 vCPU | 4 vCPU | 8 vCPU | 16 vCPU

Default “0.25 vCPU”

The amount of CPU allocated to the container in this service.

{
  cpu: "1 vCPU"
}

environment?

Type Input<Record<string, Input<string>>>

Key-value pairs of values that are set as container environment variables. The keys need to:

  • Start with a letter
  • Be at least 2 characters long
  • Contain only letters, numbers, or underscores

{
  environment: {
    DEBUG: "true"
  }
}

image?

Type Input<Object>

Default {}

Configure the docker build command for building the image.

Prior to building the image, SST will automatically add the .sst directory to the .dockerignore if not already present.

{
  image: {
    context: "./app",
    dockerfile: "Dockerfile",
    args: {
      MY_VAR: "value"
    }
  }
}

image.args?

Type Input<Record<string, Input<string>>>

Key-value pairs of build args to pass to the docker build command.

{
  args: {
    MY_VAR: "value"
  }
}

image.context?

Type Input<string>

Default “.”

The path to the Docker build context. The path is relative to your project’s sst.config.ts.

Change where the docker build context is located.

{
  context: "./app"
}

image.dockerfile?

Type Input<string>

Default “Dockerfile”

The path to the Dockerfile. The path is relative to the build context.

Use a different Dockerfile.

{
  dockerfile: "Dockerfile.prod"
}

link?

Type Input<any[]>

Link resources to your service. This will:

  1. Grant the permissions needed to access the resources.
  2. Allow you to access them in your app using the SDK.

Takes a list of components to link to the service.

{
  link: [bucket, stripeKey]
}

logging?

Type Input<Object>

Default { retention: “forever” }

Configure the service’s logs in CloudWatch.

{
  logging: {
    retention: "1 week"
  }
}

logging.retention?

Type Input<1 day | 3 days | 5 days | 1 week | 2 weeks | 1 month | 2 months | 3 months | 4 months | 5 months | 6 months | 1 year | 13 months | 18 months | 2 years | 3 years | 5 years | 6 years | 7 years | 8 years | 9 years | 10 years | forever>

Default “forever”

The duration the logs are kept in CloudWatch.
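
For example, to keep the logs for a month:

{
  logging: {
    retention: "1 month"
  }
}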

memory?

Type ${number} GB

Default “0.5 GB”

The amount of memory allocated to the container in this service.

{
  memory: "2 GB"
}

permissions?

Type Input<Object[]>

Permissions and the resources that the service needs to access. These permissions are used to create the service’s task role.

Allow the service to read and write to an S3 bucket called my-bucket.

{
  permissions: [
    {
      actions: ["s3:GetObject", "s3:PutObject"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    },
  ]
}

Allow the service to perform all actions on an S3 bucket called my-bucket.

{
  permissions: [
    {
      actions: ["s3:*"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    },
  ]
}

Grant the service permissions to access all resources.

{
  permissions: [
    {
      actions: ["*"],
      resources: ["*"]
    },
  ]
}

permissions[].actions

Type string[]

The IAM actions that can be performed.

{
  actions: ["s3:*"]
}

permissions[].resources

Type Input<string>[]

The resources, specified using the IAM ARN format.

{
  resources: ["arn:aws:s3:::my-bucket/*"]
}

public?

Type Input<Object>

Configure a public endpoint for the service. When configured, a load balancer will be created to route traffic to the containers. By default, the endpoint is an autogenerated load balancer URL.

You can also configure a custom domain for the public endpoint.

{
  public: {
    domain: "example.com",
    ports: [
      { listen: "80/http" },
      { listen: "443/https", forward: "80/http" }
    ]
  }
}

public.domain?

Type Input<string | Object>

Set a custom domain for your public endpoint.

Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you’ll need to pass in a cert that validates domain ownership and add the DNS records.

By default this assumes the domain is hosted on Route 53.

{
  domain: "example.com"
}

For domains hosted on Cloudflare.

{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}

public.domain.cert?

Type Input<string>

The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically.

To manually set up a domain on an unsupported provider, you’ll need to:

  1. Validate that you own the domain by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.
  2. Once validated, set the certificate ARN as the cert and set dns to false.
  3. Add the DNS records in your provider to point to the load balancer endpoint.

{
  domain: {
    name: "example.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}

public.domain.dns?

Type Input<false | sst.aws.dns | sst.cloudflare.dns | sst.vercel.dns>

Default sst.aws.dns

The DNS provider to use for the domain. Defaults to the AWS Route 53 adapter.

Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing.

Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you’ll need to set dns to false and pass in a certificate validating ownership via cert.

Specify the hosted zone ID for the Route 53 domain.

{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}

Use a domain hosted on Cloudflare. This needs the Cloudflare provider.

{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}

Use a domain hosted on Vercel. This needs the Vercel provider.

{
  domain: {
    name: "example.com",
    dns: sst.vercel.dns()
  }
}

public.domain.name

Type Input<string>

The custom domain you want to use.

{
  domain: {
    name: "example.com"
  }
}

Can also include subdomains based on the current stage.

{
  domain: {
    name: `${$app.stage}.example.com`
  }
}

public.ports

Type Input<Object[]>

Configure the port mappings the public endpoint listens to and forwards to the service. Supports two types of protocols:

  1. Application Layer Protocols: http and https. This’ll create an Application Load Balancer.
  2. Network Layer Protocols: tcp, udp, tcp_udp, and tls. This’ll create a Network Load Balancer.

You cannot configure both application and network layer protocols for the same service.

{
  public: {
    ports: [
      { listen: "80/http", forward: "8080/http" }
    ]
  }
}

The forward port and protocol default to the listen port and protocol. So in this case, both are 80/http.

{
  public: {
    ports: [
      { listen: "80/http" }
    ]
  }
}

public.ports[].forward?

Type Input<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls>

Default The same port and protocol as listen.

The port and protocol of the container the service forwards the traffic to. Uses the format {port}/{protocol}.
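
For example, to listen on HTTPS and forward to an HTTP port on the container (the container port here is illustrative):

{
  public: {
    ports: [
      { listen: "443/https", forward: "8080/http" }
    ]
  }
}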

public.ports[].listen

Type Input<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls>

The port and protocol the service listens on. Uses the format {port}/{protocol}.

scaling?

Type Input<Object>

Default { min: 1, max: 1 }

Configure the service to automatically scale up or down based on the CPU or memory utilization of a container. By default, scaling is disabled and the service will run in a single container.

{
  scaling: {
    min: 4,
    max: 16,
    cpuUtilization: 50,
    memoryUtilization: 50
  }
}

scaling.cpuUtilization?

Type Input<number>

Default 70

The target CPU utilization percentage to scale up or down. It’ll scale up when the CPU utilization is above the target and scale down when it’s below the target.

{
  scaling: {
    cpuUtilization: 50
  }
}

scaling.max?

Type Input<number>

Default 1

The maximum number of containers to scale up to.

{
  scaling: {
    max: 16
  }
}

scaling.memoryUtilization?

Type Input<number>

Default 70

The target memory utilization percentage to scale up or down. It’ll scale up when the memory utilization is above the target and scale down when it’s below the target.

{
  scaling: {
    memoryUtilization: 50
  }
}

scaling.min?

Type Input<number>

Default 1

The minimum number of containers to scale down to.

{
  scaling: {
    min: 4
  }
}

storage?

Type ${number} GB

Default “21 GB”

The amount of ephemeral storage (in GB) allocated to a container in this service.

{
  storage: "100 GB"
}

transform?

Type Object

Transform how this component creates its underlying resources.

transform.image?

Type ImageArgs | ((args: ImageArgs) => void)

Transform the Docker Image resource.

transform.listener?

Type ListenerArgs | ((args: ListenerArgs) => void)

Transform the AWS Load Balancer listener resource.

transform.loadBalancer?

Type LoadBalancerArgs | ((args: LoadBalancerArgs) => void)

Transform the AWS Load Balancer resource.

transform.logGroup?

Type LogGroupArgs | ((args: LogGroupArgs) => void)

Transform the CloudWatch log group resource.

transform.service?

Type ServiceArgs | ((args: ServiceArgs) => void)

Transform the ECS Service resource.

transform.target?

Type TargetGroupArgs | ((args: TargetGroupArgs) => void)

Transform the AWS Load Balancer target group resource.

transform.taskDefinition?

Type TaskDefinitionArgs | ((args: TaskDefinitionArgs) => void)

Transform the ECS Task Definition resource.

transform.taskRole?

Type RoleArgs | ((args: RoleArgs) => void)

Transform the ECS Task IAM Role resource.
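
As a rough sketch of how these transforms can be used: the callback receives the args of the underlying resource and can mutate them before the resource is created. For example, adjusting the idle timeout of the underlying Load Balancer (idleTimeout is an argument of the Pulumi aws.lb.LoadBalancer resource; this snippet is illustrative, not part of the reference above):

cluster.addService("MyService", {
  transform: {
    loadBalancer: (args) => {
      // Raise the idle timeout on the underlying AWS Load Balancer to 120 seconds
      args.idleTimeout = 120;
    }
  }
});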