Creating an app and dockerizing it is a good experience, but now it's time to run that container somewhere in the cloud. The most popular option nowadays is Amazon Web Services. In this article we'll host our containerized Node app on AWS using the so-called free tier, which means zero cost.
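As a reference point, a minimal Dockerfile for such a Node app might look like the sketch below. The entry file name (server.js), the port (8080), and the /health endpoint are assumptions for this walkthrough, not anything AWS requires:

```dockerfile
# Sketch of a minimal Node app image; assumes the app listens on
# port 8080 and exposes a /health endpoint (names are illustrative).
FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```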

A brief overview of what we need to do:

  1. Push the image to ECR.
  2. Create a task definition for our container.
  3. Create the cluster to run this task.
  4. Run the task and check that it works.

Before you start

Have an Amazon account.

Have the AWS CLI installed.

Store your account profile. All the required keys can be found on the IAM page in the AWS console.
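You can create the profile by running aws configure (or aws configure --profile <name>). For reference, the stored credentials usually end up in ~/.aws/credentials, looking roughly like this (the values below are placeholders):

```ini
# ~/.aws/credentials -- values are placeholders, use your own IAM keys
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```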

Publishing a Docker image

To make the image available for hosting, we need to push it to a repository reachable over the internet.

Creating a docker repo

Amazon provides such a service, called ECR (Elastic Container Registry).

Just go to the AWS console: Amazon ECR -> Repositories -> Create repository, fill in the form with a name, select the features you want to use (I selected none of them), and there we go: we have a repository for our image. Note: one repository holds multiple versions (tags) of one image.

Pushing an image to the repo

The ECR console now shows a hint called View push commands. You'll need the AWS CLI and your Amazon credentials to push an image.

$(aws ecr get-login --no-include-email --region eu-central-1)

docker build -t test-app .

docker tag test-app:latest <aws_account_id>.dkr.ecr.eu-central-1.amazonaws.com/test-app:latest

docker push <aws_account_id>.dkr.ecr.eu-central-1.amazonaws.com/test-app:latest

Here <aws_account_id> is your AWS account id; the exact registry URL is shown in the View push commands hint. Note that aws ecr get-login was removed in AWS CLI v2; there you would use aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.eu-central-1.amazonaws.com instead.

After pushing, the image should appear in the console.

So now the image is ready to be run using ECS.

Creating an ECS task and cluster

Elastic Container Service (ECS) is a management tool for running Docker containers. Basically it consists of two main elements: tasks and clusters. A task is a collection of containers that run together as an application. A cluster is a group of one or more virtual (or physical) machines on which you can run tasks.

There are two types of machines that can be used by ECS clusters: EC2, which is what Amazon calls a plain VDS (virtual dedicated server), and Fargate, Amazon's serverless offering. For us only EC2 applies, because only it is covered by the free tier.

Creating a Task Definition

Now go to the AWS console: ECS -> Task Definitions -> Create new, and select the EC2 launch type.

Now let's enter a task name and select an IAM role (we actually don't need one yet). The default network mode is OK. And, most importantly, select the container to use.

The name can be any string you like; the image should be the one we uploaded to ECR. It's also very important to define the environment variables used by the container and their default values.
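The same definition can also be registered from the command line with aws ecs register-task-definition --cli-input-json file://task-def.json. A minimal sketch of the JSON, assuming the names and values from this walkthrough (the image URL, container name, and PORT variable are placeholders):

```json
{
  "family": "test-app",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "test-app",
      "image": "<aws_account_id>.dkr.ecr.eu-central-1.amazonaws.com/test-app:latest",
      "memory": 256,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 8080, "protocol": "tcp" }
      ],
      "environment": [
        { "name": "PORT", "value": "8080" }
      ]
    }
  ]
}
```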

Ok, now we have our task created and we need some place to run it.

Creating an ECS cluster

Go to Amazon ECS -> Clusters and click the Create cluster button. Now it's time to select a template for our cluster; as mentioned above, Fargate is the more modern option, but there is no free tier for it, so we go with EC2 Linux + Networking.

Now it's time to set up the cluster settings:

  - Name - anything you like.
  - Provisioning Model - On-Demand, as we're going to use a single EC2 instance.
  - Number of instances - one.
  - EC2 instance type - t2.micro; only this instance type is eligible for the free tier.
  - EC2 AMI Id - keep the default.
  - EBS storage (GiB) - keep the default.
  - Key pair - create a new key pair for the EC2 instance and download it; it won't be possible to download it after creation. This is very important, because you'll definitely want to access your cluster instances over SSH.

As for the Networking pane, I'd suggest keeping the defaults.

And under CloudWatch Container Insights, check Enable Container Insights to get logs shipped to CloudWatch, Amazon's interface for collecting logs and metrics.

Press the Create button and wait until your EC2 instance gets created.

Running task on cluster

Now select your newly created cluster and, in the Tasks tab, click Run new Task.

Now you'll probably see an error message about the capacity provider strategy; this applies to Fargate. Just click the Switch to launch type link and select EC2.

Capacity provider strategy Cluster default strategy The default capacity provider strategy for the specified cluster does not contain a capacity provider. Update the default capacity provider strategy to associate one or more capacity providers and try again.

We're almost done: select the task definition, select the cluster (it should be selected automatically), and in the Container overrides section below you can override the run command (CMD) and the ENV variables defined in the task.
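The same overrides can be passed from the CLI with aws ecs run-task --cluster <cluster_name> --task-definition test-app --launch-type EC2 --overrides file://overrides.json. A sketch of the overrides payload, using the placeholder names from this walkthrough:

```json
{
  "containerOverrides": [
    {
      "name": "test-app",
      "command": ["node", "server.js"],
      "environment": [
        { "name": "PORT", "value": "8080" }
      ]
    }
  ]
}
```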

Press Run Task button and your container should start.


The result of our actions is that we have created an EC2 instance (practically, a virtual machine somewhere on the internet) that runs Docker, with our container inside it.

Open the EC2 page in the AWS console and select the Instances page. There should be a table with the created instances. Here we can see our instance and its public IP.

But entering this IP in a browser to hit our /health endpoint, or accessing the machine over SSH, won't succeed yet. We need to open ports to the outside world. To do this, find the Security groups section in the instance details and add a few inbound rules: accept connections from anywhere on port 8080 for HTTP, and on port 22 for SSH.

Now if we connect via SSH and run docker ps, we should see our container running.

ssh -i /path/to/key.pem ec2-user@<publicIpV4ofYourMachine>
docker ps

Also curling the endpoint should work too.

curl -w"\n" <publicIpV4ofYourMachine>:8080/health

So now we have a fully functional web service deployed in the cloud. The next step would be to implement a health check.

Happy coding deploying!