
io.github.cdklabs.cdkecsserviceextensions.package-info

/**
 * 

CDK Construct library for building ECS services

*

* ---
* cdk-constructs: Experimental
* ---

* This library provides a high-level, extensible pattern for constructing services deployed using Amazon ECS.

*

 * import {
 *   AppMeshExtension,
 *   CloudwatchAgentExtension,
 *   Container,
 *   Environment,
 *   FireLensExtension,
 *   HttpLoadBalancerExtension,
 *   Service,
 *   ServiceDescription,
 *   XRayExtension,
 * } from '@aws-cdk-containers/ecs-service-extensions';
 * 
*

* If you are using the @aws-cdk-containers/ecs-service-extensions v1 and need to migrate to v2, see the Migration Guide.

*

Service construct

*

* The Service construct provided by this module can be extended with optional ServiceExtension classes that add supplemental ECS features, including:

  • AWS App Mesh, for adding the service to a service mesh
  • AWS X-Ray, for distributed tracing
  • CloudWatch Agent, for capturing task-level metrics
  • FireLens, for routing and filtering application logs
  • An HTTP load balancer, for exposing the service through an Application Load Balancer
  • Injecter and Queue extensions, for granting the service access to SNS Topics and SQS Queues

* The ServiceExtension class is an abstract class which you can also implement in order to build your own custom service extensions for modifying your service, or attaching your own custom resources or sidecars.

*

Example

*

*

 * // Create an environment to deploy a service in.
 * const environment = new Environment(this, 'production');
 * 
 * // Build out the service description
 * const nameDescription = new ServiceDescription();
 * nameDescription.add(new Container({
 *   cpu: 1024,
 *   memoryMiB: 2048,
 *   trafficPort: 80,
 *   image: ecs.ContainerImage.fromRegistry('nathanpeck/name'),
 *   environment: {
 *     PORT: '80',
 *   },
 * }));
 * 
 * declare const mesh: appmesh.Mesh;
 * nameDescription.add(new AppMeshExtension({ mesh }));
 * nameDescription.add(new FireLensExtension());
 * nameDescription.add(new XRayExtension());
 * nameDescription.add(new CloudwatchAgentExtension());
 * nameDescription.add(new HttpLoadBalancerExtension());
 * 
 * // Implement the service description as a real service inside
 * // an environment.
 * const nameService = new Service(this, 'name', {
 *   environment: environment,
 *   serviceDescription: nameDescription,
 * });
 * 
*

*

Creating an Environment

*

* An Environment is a place to deploy your services. You can have multiple environments on a single AWS account. For example, you could create a test environment as well as a production environment, so you have a place to verify that your application works as intended before you deploy it to a live environment.

* Each environment is isolated from other environments. In other words, when you create an environment, by default the construct supplies its own VPC, ECS Cluster, and any other required resources for the environment:

*

 * const environment = new Environment(this, 'production');
 * 
*

* However, you can also choose to build an environment out of a pre-existing VPC or ECS Cluster:

*

 * declare const vpc: ec2.Vpc;
 * const cluster = new ecs.Cluster(this, 'Cluster', { vpc });
 * 
 * const environment = new Environment(this, 'production', {
 *   vpc,
 *   cluster,
 * });
 * 
*

*

Defining your ServiceDescription

*

* The ServiceDescription defines what application you want the service to run and what optional extensions you want to add to the service. The most basic form of a ServiceDescription looks like this:

*

 * const nameDescription = new ServiceDescription();
 * nameDescription.add(new Container({
 *   cpu: 1024,
 *   memoryMiB: 2048,
 *   trafficPort: 80,
 *   image: ecs.ContainerImage.fromRegistry('nathanpeck/name'),
 *   environment: {
 *     PORT: '80',
 *   },
 * }));
 * 
*

* Every ServiceDescription requires at minimum that you add a Container extension, which defines the main application (essential) container to run for the service.

*

Logging using awslogs log driver

*

* If no observability extensions have been configured for a service, the ECS Service Extensions library configures an awslogs log driver for the application container of the service to send the container logs to CloudWatch Logs.

* You can either provide a log group to the Container extension, or one will be created for you by the CDK.

* The following is an example of an application with an awslogs log driver configured for the application container:

*

 * import * as logs from 'aws-cdk-lib/aws-logs';
 * 
 * const environment = new Environment(this, 'production');
 * 
 * const nameDescription = new ServiceDescription();
 * nameDescription.add(new Container({
 *   cpu: 1024,
 *   memoryMiB: 2048,
 *   trafficPort: 80,
 *   image: ecs.ContainerImage.fromRegistry('nathanpeck/name'),
 *   environment: {
 *     PORT: '80',
 *   },
 *   logGroup: new logs.LogGroup(this, 'MyLogGroup'),
 * }));
 * 
*

* If a log group is not provided, no observability extensions have been created, and the ECS_SERVICE_EXTENSIONS_ENABLE_DEFAULT_LOG_DRIVER feature flag is enabled, then logging will be configured by default and a log group will be created for you.

* The ECS_SERVICE_EXTENSIONS_ENABLE_DEFAULT_LOG_DRIVER feature flag is enabled by default in any CDK apps that are created with CDK v1.140.0 or v2.8.0 and later.

* To enable default logging for previous versions, ensure that the ECS_SERVICE_EXTENSIONS_ENABLE_DEFAULT_LOG_DRIVER flag within the application stack context is set to true, like so:

*

 * import * as cxapi from '@aws-cdk/cx-api';
 * 
 * this.node.setContext(cxapi.ECS_SERVICE_EXTENSIONS_ENABLE_DEFAULT_LOG_DRIVER, true);
 * 
*

* Alternatively, you can also set the feature flag in the cdk.json file. For more information, refer to the docs.

* After adding the Container extension, you can optionally enable additional features for the service using the ServiceDescription.add() method:

*

 * declare const mesh: appmesh.Mesh;
 * const nameDescription = new ServiceDescription();
 * 
 * nameDescription.add(new AppMeshExtension({ mesh }));
 * nameDescription.add(new FireLensExtension());
 * nameDescription.add(new XRayExtension());
 * nameDescription.add(new CloudwatchAgentExtension());
 * nameDescription.add(new HttpLoadBalancerExtension());
 * nameDescription.add(new AssignPublicIpExtension());
 * 
*

*

Launching the ServiceDescription as a Service

*

* Once the service description is defined, you can launch it as a service:

*

 * const environment = new Environment(this, 'production');
 * const nameDescription = new ServiceDescription();
 * 
 * const nameService = new Service(this, 'name', {
 *   environment: environment,
 *   serviceDescription: nameDescription,
 * });
 * 
*

* At this point, all the service resources will be created. This includes the ECS Task Definition and Service, as well as any other attached resources, such as an App Mesh Virtual Node or an Application Load Balancer.

*

Creating your own taskRole

*

* Sometimes the taskRole should be defined outside of the service so that you can create strict resource policies (i.e. S3 bucket policies) that are restricted to a given taskRole:

*

 * const taskRole = new iam.Role(this, 'CustomTaskRole', {
 *   assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com'),
 * });
 * 
 * // Use taskRole in any CDK resource policies
 * // new s3.BucketPolicy(this, 'BucketPolicy', {});
 * 
 * const environment = new Environment(this, 'production');
 * const nameDescription = new ServiceDescription();
 * const nameService = new Service(this, 'name', {
 *   environment: environment,
 *   serviceDescription: nameDescription,
 *   taskRole,
 * });
 * 
*

*

Configure Custom Health Check

*

* When you add an HttpLoadBalancerExtension, you can customize the health checks by accessing the targetGroup field on the service.

*

 * declare const environment: Environment;
 * declare const serviceDescription: ServiceDescription;
 * 
 * const service = new Service(this, 'my-service', {
 *   environment,
 *   serviceDescription,
 *   autoScaleTaskCount: {
 *     maxTaskCount: 5,
 *   },
 * });
 * 
 * service.targetGroup.configureHealthCheck({
 *   path: '/',
 *   port: '80',
 * });
 * 
*

*

Task Auto-Scaling

*

* You can configure the task count of a service to match demand. The recommended way of achieving this is to configure target tracking policies for your service, which scale in and out in order to keep metrics around target values.

* You need to configure an auto scaling target for the service by setting the minTaskCount (defaults to 1) and maxTaskCount in the Service construct. Then you can specify target values for "CPU Utilization" or "Memory Utilization" across all tasks in your service. Note that the desiredCount value will be set to undefined if the auto scaling target is configured.

* If you want to configure auto-scaling policies based on resources like an Application Load Balancer or SQS Queues, you can set the corresponding resource-specific fields in the extension. For example, you can enable target tracking scaling based on Application Load Balancer request count as follows:

*

 * const environment = new Environment(this, 'production');
 * const serviceDescription = new ServiceDescription();
 * 
 * serviceDescription.add(new Container({
 *   cpu: 256,
 *   memoryMiB: 512,
 *   trafficPort: 80,
 *   image: ecs.ContainerImage.fromRegistry('my-alb'),
 * }));
 * 
 * // Add the extension with target `requestsPerTarget` value set
 * serviceDescription.add(new HttpLoadBalancerExtension({ requestsPerTarget: 10 }));
 * 
 * // Configure the auto scaling target
 * new Service(this, 'my-service', {
 *   environment,
 *   serviceDescription,
 *   desiredCount: 5,
 *   // Task auto-scaling construct for the service
 *   autoScaleTaskCount: {
 *     maxTaskCount: 10,
 *     targetCpuUtilization: 70,
 *     targetMemoryUtilization: 50,
 *   },
 * });
 * 
*

* You can also define your own service extensions for other auto-scaling policies for your service by making use of the scalableTaskCount attribute of the Service class.
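*
* For instance, here is a minimal sketch (not part of the upstream examples) of attaching an additional scheduled scaling action directly to the service's scalable task count. It assumes the service was created with autoScaleTaskCount (so that scalableTaskCount is available) and that aws-cdk-lib/aws-applicationautoscaling is imported as appscaling:
*

 * declare const nameService: Service;
 * 
 * // Attach a scheduled scaling action to the service's scalable task count.
 * // `scalableTaskCount` is only available when `autoScaleTaskCount` was
 * // configured on the service.
 * nameService.scalableTaskCount.scaleOnSchedule('BusinessHoursScaleUp', {
 *   schedule: appscaling.Schedule.cron({ weekDay: 'MON-FRI', hour: '8', minute: '0' }),
 *   minCapacity: 5,
 * });
 * 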

*

Creating your own custom ServiceExtension

*

* In addition to using the default service extensions that come with this module, you can choose to implement your own custom service extensions. The ServiceExtension class is an abstract class you can implement yourself. The following example implements a custom service extension that could be added to a service in order to autoscale it based on scaling intervals of SQS Queue size:

*

 * export class MyCustomAutoscaling extends ServiceExtension {
 *   private scalingSteps: appscaling.ScalingInterval[];
 *   private sqsQueue!: sqs.Queue;
 * 
 *   constructor() {
 *     super('my-custom-autoscaling');
 *     // Scaling intervals for the step scaling policy
 *     this.scalingSteps = [{ upper: 0, change: -1 }, { lower: 100, change: +1 }, { lower: 500, change: +5 }];
 *   }
 * 
 *   // This hook runs before any service resources are created and gives the
 *   // extension access to the parent service and its construct scope
 *   public prehook(parentService: Service, scope: Construct) {
 *     super.prehook(parentService, scope);
 *     this.sqsQueue = new sqs.Queue(this.scope, 'my-queue');
 *   }
 * 
 *   // This hook utilizes the resulting service construct
 *   // once it is created
 *   public useService(service: ecs.Ec2Service | ecs.FargateService) {
 *     this.parentService.scalableTaskCount.scaleOnMetric('QueueMessagesVisibleScaling', {
 *       metric: this.sqsQueue.metricApproximateNumberOfMessagesVisible(),
 *       scalingSteps: this.scalingSteps,
 *     });
 *   }
 * }
 * 
*

* This ServiceExtension can now be reused and added to any number of different service descriptions. This allows you to develop reusable bits of configuration, attach them to many different services, and centrally manage them. Updating the ServiceExtension in one place would update all services that use it, instead of requiring decentralized updates to many different services.

* Every ServiceExtension can implement the following hooks to modify the properties of constructs, or make use of the resulting constructs (a short illustrative sketch follows the list below):

*

  • addHooks() - This hook is called after all the extensions are added to a ServiceDescription, but before any of the other extension hooks have been run. It gives each extension a chance to do some inspection of the overall ServiceDescription and see what other extensions have been added. Some extensions may want to register hooks on the other extensions to modify them. For example, the FireLens extension wants to be able to modify the settings of the application container to route logs through FireLens.
  • modifyTaskDefinitionProps() - This hook is passed the proposed ecs.TaskDefinitionProps for a TaskDefinition that is about to be created. This allows the extension to make modifications to the task definition props before the TaskDefinition is created. For example, the App Mesh extension modifies the proxy settings for the task.
  • useTaskDefinition() - After the TaskDefinition is created, this hook is passed the actual TaskDefinition construct that was created. This allows the extension to add containers to the task, modify the task definition's IAM role, etc.
  • resolveContainerDependencies() - Once all extensions have added their containers, each extension is given a chance to modify its container's dependsOn settings. Extensions need to check and see what other extensions were enabled and decide whether their container needs to wait on another container to start first.
  • modifyServiceProps() - Before an Ec2Service or FargateService is created, this hook is passed a draft version of the service props to change. Each extension adds its own modifications to the service properties. For example, the App Mesh extension needs to modify the service settings to enable CloudMap service discovery.
  • useService() - After the service is created, this hook is given a chance to utilize that service. This is used by extensions like the load balancer or App Mesh extension, which create and link other AWS resources to the ECS extension.
  • connectToService() - This hook is called when a user wants to connect one service to another service. It allows an extension to implement logic about how to allow connections from one service to another. For example, the App Mesh extension implements this method in order to easily connect one service mesh service to another, which allows the service's Envoy proxy sidecars to route traffic to each other.
*
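
* As a concrete illustration, the following is a minimal sketch (not taken from the upstream documentation) of an extension that implements two of these hooks. The extension name, sidecar image, and family name are hypothetical placeholders:
*

 * export class MyMetadataSidecar extends ServiceExtension {
 *   constructor() {
 *     super('my-metadata-sidecar');
 *   }
 * 
 *   // modifyTaskDefinitionProps(): adjust the proposed props before the
 *   // TaskDefinition is created
 *   public modifyTaskDefinitionProps(props: ecs.TaskDefinitionProps): ecs.TaskDefinitionProps {
 *     return {
 *       ...props,
 *       family: props.family ?? 'metadata-sidecar-family', // hypothetical default family
 *     };
 *   }
 * 
 *   // useTaskDefinition(): add a sidecar container once the TaskDefinition exists
 *   public useTaskDefinition(taskDefinition: ecs.TaskDefinition) {
 *     taskDefinition.addContainer('metadata-sidecar', {
 *       // hypothetical sidecar image
 *       image: ecs.ContainerImage.fromRegistry('public.ecr.aws/example/metadata-sidecar'),
 *       memoryReservationMiB: 64,
 *     });
 *   }
 * }
 * 
*
* Such an extension could then be added to any ServiceDescription with nameDescription.add(new MyMetadataSidecar()), just like the built-in extensions.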

*

Connecting services

*

* One of the hooks that a ServiceExtension can implement is a hook for connection logic. This is utilized when connecting one service to another service, e.g. connecting a user-facing web service with a backend API. Usage looks like this:

*

 * const frontendDescription = new ServiceDescription();
 * const frontend = new Service(this, 'frontend', {
 *   environment,
 *   serviceDescription: frontendDescription,
 * });
 * 
 * const backendDescription = new ServiceDescription();
 * const backend = new Service(this, 'backend', {
 *   environment,
 *   serviceDescription: backendDescription,
 * });
 * 
 * frontend.connectTo(backend);
 * 
*

* The address that a service will use to talk to another service depends on the type of ingress that has been created by the extension that did the connecting. For example, if an App Mesh extension has been used, then the service is accessible at a DNS address of <service name>.<environment name>. For example:

*

 * const environment = new Environment(this, 'production');
 * 
 * // Define the frontend tier
 * const frontendDescription = new ServiceDescription();
 * frontendDescription.add(new Container({
 *   cpu: 1024,
 *   memoryMiB: 2048,
 *   trafficPort: 80,
 *   image: ecs.ContainerImage.fromRegistry('my-frontend-service'),
 *   environment: {
 *     BACKEND_URL: 'http://backend.production',
 *   },
 * }));
 * 
 * const frontend = new Service(this, 'frontend', {
 *   environment,
 *   serviceDescription: frontendDescription,
 * });
 * 
 * // Define the backend tier
 * const backendDescription = new ServiceDescription();
 * backendDescription.add(new Container({
 *   cpu: 1024,
 *   memoryMiB: 2048,
 *   trafficPort: 80,
 *   image: ecs.ContainerImage.fromRegistry('my-backend-service'),
 *   environment: {
 *     FRONTEND_URL: 'http://frontend.production',
 *   },
 * }));
 * const backend = new Service(this, 'backend', {
 *   environment,
 *   serviceDescription: backendDescription,
 * });
 * 
 * // Connect the two tiers to each other
 * frontend.connectTo(backend);
 * 
*

* The above code uses the well-known service discovery name for each service, and passes it as an environment variable to the container so that the container knows what address to use when communicating with the other service.

*

Importing a pre-existing cluster

*

* To create an environment with a pre-existing cluster, you must import the cluster first, then use Environment.fromEnvironmentAttributes(). When a cluster is imported into an environment, the cluster is treated as immutable. As a result, no extension may modify the cluster to change a setting.

*

 * declare const cluster: ecs.Cluster;
 * 
 * const environment = Environment.fromEnvironmentAttributes(this, 'Environment', {
 *   capacityType: ecs.EnvironmentCapacityType.EC2, // or `FARGATE`
 *   cluster,
 * });
 * 
*
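
* The imported cluster in the example above could itself come from ecs.Cluster.fromClusterAttributes(), which returns an ecs.ICluster. A minimal sketch, assuming an existing VPC and a hypothetical cluster name:
*

 * declare const vpc: ec2.Vpc;
 * 
 * // Import an existing cluster by name (the cluster name here is hypothetical)
 * const cluster = ecs.Cluster.fromClusterAttributes(this, 'ImportedCluster', {
 *   clusterName: 'my-existing-cluster',
 *   vpc,
 *   securityGroups: [],
 * });
 * 
*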

*

Injecter Extension

*

* This service extension accepts a list of Injectable resources. It grants access to these resources and adds the necessary environment variables to the tasks that are part of the service.

* For example, an InjectableTopic is an SNS Topic that grants permission to the task role and adds the topic ARN as an environment variable to the task definition.

*

Publishing to SNS Topics

*

* You can use this extension to set up publishing permissions for SNS Topics.

*

 * const nameDescription = new ServiceDescription();
 * nameDescription.add(new InjecterExtension({
 *   injectables: [new InjectableTopic({
 *     // SNS Topic the service will publish to
 *     topic: new sns.Topic(this, 'my-topic'),
 *   })],
 * }));
 * 
*

*

Queue Extension

*

* This service extension creates a default SQS Queue, eventsQueue, for the service (if one is not provided) and optionally also accepts a list of ISubscribable objects that the eventsQueue can subscribe to. The service extension creates the subscriptions and sets up permissions for the service to consume messages from the SQS Queue.

*

Setting up SNS Topic Subscriptions for SQS Queues

*

* You can use this extension to set up SNS Topic subscriptions for the eventsQueue. To do this, create a new object of type TopicSubscription for every SNS Topic you want the eventsQueue to subscribe to and provide it as input to the service extension.

*

 * const nameDescription = new ServiceDescription();
 * const myServiceDescription = nameDescription.add(new QueueExtension({
 *   // Provide list of topic subscriptions that you want the `eventsQueue` to subscribe to
 *   subscriptions: [new TopicSubscription({
 *     topic: new sns.Topic(this, 'my-topic'),
 *   })],
 * }));
 * 
 * // To access the `eventsQueue` for the service, use the `eventsQueue` getter for the extension
 * const myQueueExtension = myServiceDescription.extensions.queue as QueueExtension;
 * const myEventsQueue = myQueueExtension.eventsQueue;
 * 
*

* For setting up a topic-specific queue subscription, you can provide a custom queue in the TopicSubscription object along with the SNS Topic. The extension will set up a topic subscription for the provided queue instead of the default eventsQueue of the service.

*

 * declare const myEventsQueue: sqs.Queue;
 * declare const myTopicQueue: sqs.Queue;
 * const nameDescription = new ServiceDescription();
 * 
 * nameDescription.add(new QueueExtension({
 *   eventsQueue: myEventsQueue,
 *   subscriptions: [new TopicSubscription({
 *     topic: new sns.Topic(this, 'my-topic'),
 *     // `myTopicQueue` will subscribe to the `my-topic` instead of `eventsQueue`
 *     topicSubscriptionQueue: {
 *       queue: myTopicQueue,
 *     },
 *   })],
 * }));
 * 
*

*

Configuring auto scaling based on SQS Queues

*

* You can scale your service up or down to maintain an acceptable queue latency by tracking the backlog per task. The extension configures a target tracking scaling policy whose target value (the acceptable backlog per task) is calculated by dividing the acceptableLatency by the messageProcessingTime. For example, if the maximum acceptable latency for a message to be processed after its arrival in the SQS Queue is 10 minutes and the average processing time for a task is 250 milliseconds per message, then acceptableBacklogPerTask = 10 * 60 / 0.25 = 2400. Therefore, the acceptable backlog per task is 2400 messages, and a target tracking policy with a target value of 2400 will be attached to the scaling target for your service. For more information, see https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html.

* You can configure auto scaling based on the SQS Queue for your service as follows:

*

 * declare const myEventsQueue: sqs.Queue;
 * declare const myTopicQueue: sqs.Queue;
 * const nameDescription = new ServiceDescription();
 * 
 * nameDescription.add(new QueueExtension({
 *   eventsQueue: myEventsQueue,
 *   // Need to specify `scaleOnLatency` to configure auto scaling based on SQS Queue
 *   scaleOnLatency: {
 *     acceptableLatency: Duration.minutes(10),
 *     messageProcessingTime: Duration.millis(250),
 *   },
 *   subscriptions: [new TopicSubscription({
 *     topic: new sns.Topic(this, 'my-topic'),
 *     // `myTopicQueue` will subscribe to the `my-topic` instead of `eventsQueue`
 *     topicSubscriptionQueue: {
 *       queue: myTopicQueue,
 *       // Optionally provide `scaleOnLatency` for configuring separate autoscaling for `myTopicQueue`
 *       scaleOnLatency: {
 *         acceptableLatency: Duration.minutes(10),
 *         messageProcessingTime: Duration.millis(250),
 *       },
 *     },
 *   })],
 * }));
 * 
*

*

Publish/Subscribe Service Pattern

*

* The Publish/Subscribe Service Pattern is used for implementing asynchronous communication between services. It involves 'publisher' services emitting events to SNS Topics, which are passed to subscribed SQS queues and then consumed by 'worker' services.

* The following example adds the InjecterExtension to a Publisher Service, which can publish events to an SNS Topic, and adds the QueueExtension to a Worker Service, which can poll its eventsQueue to consume messages populated by the topic.

*

 * const environment = new Environment(this, 'production');
 * 
 * const pubServiceDescription = new ServiceDescription();
 * pubServiceDescription.add(new Container({
 *   cpu: 256,
 *   memoryMiB: 512,
 *   trafficPort: 80,
 *   image: ecs.ContainerImage.fromRegistry('sns-publish'),
 * }));
 * 
 * const myTopic = new sns.Topic(this, 'myTopic');
 * 
 * // Add the `InjecterExtension` to the service description to allow publishing events to `myTopic`
 * pubServiceDescription.add(new InjecterExtension({
 *   injectables: [new InjectableTopic({
 *     topic: myTopic,
 *   })],
 * }));
 * 
 * // Create the `Publisher` Service
 * new Service(this, 'Publisher', {
 *   environment: environment,
 *   serviceDescription: pubServiceDescription,
 * });
 * 
 * const subServiceDescription = new ServiceDescription();
 * subServiceDescription.add(new Container({
 *   cpu: 256,
 *   memoryMiB: 512,
 *   trafficPort: 80,
 *   image: ecs.ContainerImage.fromRegistry('sqs-reader'),
 * }));
 * 
 * // Add the `QueueExtension` to the service description to subscribe to `myTopic`
 * subServiceDescription.add(new QueueExtension({
 *   subscriptions: [new TopicSubscription({
 *     topic: myTopic,
 *   })],
 * }));
 * 
 * // Create the `Worker` Service
 * new Service(this, 'Worker', {
 *   environment: environment,
 *   serviceDescription: subServiceDescription,
 * });
 * 
*

*

Community Extensions

*

* We encourage the development of Community Service Extensions that support advanced features. Here are some useful extensions that we have reviewed:

*


* Please submit a pull request so that we can review your service extension and list it here.

*

*/
@software.amazon.jsii.Stability(software.amazon.jsii.Stability.Level.Experimental)
package io.github.cdklabs.cdkecsserviceextensions;



