A FULLY MANAGED CONTAINER ORCHESTRATION SERVICE
Before getting started with ECS, you need to understand Docker, because it's one of the basic building blocks.
Docker helps to create environments to run your application, regardless of the underlying operating system.
Later, we'll dig into the key ECS terms: Task Definition, Task, Service, and Cluster. They can be confusing at first, but understanding them is essential to knowing how ECS works internally.
This lightweight environment is called a Container and, as the name suggests, contains everything that's needed to run your application, like specific versions of libraries or language runtimes.
You can run multiple containers on the same machine. Containers can even communicate with each other when needed.
When your application grows, you'll most likely face challenges in managing all the deployments, containers, scheduling, and other operational tasks.
That's why you'll need a container management or orchestration service: another abstraction layer that helps you easily manage your containerized applications and reduces your operational burden.
That's exactly where ECS steps in.
Amazon's Elastic Container Service (ECS) is a highly scalable & fast container management service.
It allows you to view & manage the state of your clusters from a centralized service and to easily schedule your containers based on resource needs & availability requirements.
There are two different areas of responsibility: the management layer that controls your containers, and the compute capacity that actually runs them.
ECS does not actually execute or run your containers. It only provides the management plane for controlling your tasks; the compute capacity comes from the launch type you choose.
If you have very high computation requirements, be aware that Fargate is much more restrictive than EC2 regarding the possible CPU & memory capacities of a single task.
General note: Even if you're a big serverless fan, knowing about ECS is crucial because you'll bump into it almost everywhere.
Given the abstraction layer that ECS provides in combination with Fargate, the duo is considered a serverless technology.
A Task Definition is a blueprint of your container. It includes things like the container image to use, CPU & memory allocations, port mappings, environment variables, and the networking mode.
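As a sketch, a minimal Fargate task definition could look like the following (the family name, image URL, and values are all hypothetical). The same structure maps directly to the parameters of boto3's `ecs.register_task_definition`:

```python
# Minimal ECS task definition sketch (hypothetical names & values).
# Usage with boto3: boto3.client("ecs").register_task_definition(**task_definition)
task_definition = {
    "family": "web-app",                      # logical name of this blueprint
    "requiresCompatibilities": ["FARGATE"],   # intended launch type
    "networkMode": "awsvpc",                  # required for Fargate tasks
    "cpu": "256",                             # 0.25 vCPU for the whole task
    "memory": "512",                          # 512 MiB for the whole task
    "containerDefinitions": [
        {
            "name": "web",
            "image": "example.ecr.aws/web-app:latest",  # hypothetical image
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "environment": [{"name": "STAGE", "value": "production"}],
        }
    ],
}
```

Every task launched from this definition gets its own copy of exactly this environment.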
A Task is an actual instance that runs the containers that are provided in your definition. A single task can run multiple different containers for different purposes.
Additionally, you can run multiple tasks from the same definition if required, for example to have higher concurrency or meet increasing traffic demands.
As we can have several tasks for the same definition, we need some boundaries and management. This is where the Service comes in: it keeps the desired number of tasks up & running and replaces any that fail.
Among a lot of other things, you're able to configure rules for auto-scaling and load distribution, as well as the minimum & maximum number of tasks.
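For example, service auto-scaling is configured through the Application Auto Scaling API rather than ECS itself. Below is a sketch of the two parameter sets involved (the cluster name `my-cluster` and service name `web-app` are hypothetical); they can be passed to boto3's `application-autoscaling` client:

```python
# Hypothetical names; usage with boto3:
#   client = boto3.client("application-autoscaling")
#   client.register_scalable_target(**scalable_target)
#   client.put_scaling_policy(**scaling_policy)

# 1. Register the service's DesiredCount as a scalable target
#    with the minimum & maximum number of tasks.
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/web-app",
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 2,
    "MaxCapacity": 10,
}

# 2. Target tracking: scale out/in so the service's average
#    CPU utilization stays around 70%.
scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/web-app",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
}
```

Target tracking is the simplest policy type here; step scaling and scheduled scaling are also available if you need finer control.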
A Cluster is a logical grouping of tasks or services that run on infrastructure registered to that cluster. You can even provide your own on-premises virtual machines as compute capacity for your cluster.
With a non-optimized configuration, deploying a new task definition to your cluster can take several minutes.
You can fine-tune many settings to get this down to just a few seconds:
1. ECS Agent Settings
2. Load Balancer Settings
3. ECS Deployment Settings
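As an illustration of the three areas above, here are some of the knobs involved, written as the configuration fragments you'd pass to the respective APIs. The values are examples, not recommendations:

```python
# Example knobs that influence ECS deployment speed (illustrative values).

# 1. ECS agent (EC2 launch type): how long ECS waits after SIGTERM before
#    force-killing a container. Set as an environment variable on the
#    container instance; the default is 30 seconds.
ecs_agent_env = {"ECS_CONTAINER_STOP_TIMEOUT": "10s"}

# 2. Load balancer: how long the target group keeps draining in-flight
#    connections from a deregistering task. Target group attribute;
#    the default is 300 seconds.
target_group_attributes = [
    {"Key": "deregistration_delay.timeout_seconds", "Value": "30"}
]

# 3. ECS service deployment configuration: how many extra or fewer tasks
#    may run during a rolling deployment (defaults shown).
deployment_configuration = {
    "minimumHealthyPercent": 100,  # never drop below the desired count
    "maximumPercent": 200,         # allow doubling while new tasks start
}
```

Shortening the stop timeout and the deregistration delay trades graceful shutdown time for faster rollouts, so tune them to how quickly your application can drain requests.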
So finally, the important question: which services actually run our containers?
You can pick from three launch types: EC2 instances that you manage yourself, AWS Fargate where AWS manages the infrastructure, or External, meaning your own on-premises servers (ECS Anywhere).
So it's for example not ECS or Fargate, but ECS and Fargate.
Fargate offers a higher abstraction, as you're not responsible for the underlying infrastructure.
The External launch type allows you to register on-premises servers / virtual machines to your ECS cluster.
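The launch type is chosen per task or per service, not per cluster. A sketch of the parameters for boto3's `ecs.run_task` (cluster name, task definition family, and network IDs are hypothetical):

```python
# Parameters for boto3's ecs.run_task, choosing the Fargate launch type.
# Hypothetical cluster, task definition family, and network IDs.
# Usage: boto3.client("ecs").run_task(**run_task_params)
run_task_params = {
    "cluster": "my-cluster",
    "taskDefinition": "web-app",   # family name; latest active revision
    "launchType": "FARGATE",       # could also be "EC2" or "EXTERNAL"
    "count": 1,                    # number of tasks to start
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0example"],        # hypothetical subnet ID
            "securityGroups": ["sg-0example"],     # hypothetical SG ID
            "assignPublicIp": "ENABLED",
        }
    },
}
```

Swapping `"FARGATE"` for `"EC2"` would run the same task definition on your own registered instances instead, which is exactly why it's ECS and Fargate, not ECS or Fargate.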
Both Lambda and Fargate fall into the serverless category, as they remove a lot of technical burden by freeing you from managing most (or any) of the underlying infrastructure. That's why it's worth doing a quick comparison.
CloudWatch comes with default metrics like CPU or memory usage to grant you insights into your ECS services & tasks.
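For instance, those default metrics live in the `AWS/ECS` namespace and can be queried per cluster & service. A sketch of the parameters for boto3's `cloudwatch.get_metric_statistics` (cluster and service names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

# Parameters for boto3's cloudwatch.get_metric_statistics
# (hypothetical cluster/service names). Usage:
#   boto3.client("cloudwatch").get_metric_statistics(**cpu_query)
now = datetime.now(timezone.utc)
cpu_query = {
    "Namespace": "AWS/ECS",
    "MetricName": "CPUUtilization",
    "Dimensions": [
        {"Name": "ClusterName", "Value": "my-cluster"},
        {"Name": "ServiceName", "Value": "web-app"},
    ],
    "StartTime": now - timedelta(hours=1),  # last hour of datapoints
    "EndTime": now,
    "Period": 300,                          # 5-minute resolution
    "Statistics": ["Average", "Maximum"],
}
```

The same query with `MetricName` set to `MemoryUtilization` covers the other default metric.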
Developer tools like Dashbird.io collect metrics in a single place, guide you with well-architected hints & can notify you of critical service events through your favourite channel like Slack.