Hosting a MEAN Application on AWS (front end on S3, back end on EC2) with Nginx and HTTPS (Part 2)

Prakhar Agarwal
16 min read · Jun 13, 2020

So here we are in the second part of our series on hosting a MEAN stack application on AWS. Before moving on, I would like to recap what we have done so far. In Part-1, we deployed our front end, i.e. the Angular part, on AWS: we used S3 to store the Angular code and serve the bucket as a static website, CloudFront as the CDN service, Certificate Manager to create a free SSL certificate attached to CloudFront, and Route53 as the DNS service. If you are looking to use any of these services, check out Part-1 of this series.

We will now move on to configuring our back end, which comprises Node.js, Express.js, and MongoDB, on the EC2 service.

Let’s start with the setup -

Step-1 Launching an EC2 instance.

Elastic Compute Cloud (EC2) is a virtual server that lets users run numerous applications on the AWS cloud infrastructure. It is one of the oldest services offered by Amazon, and most of the other compute services are built on top of EC2. So, first of all, log in to your AWS account and go to the EC2 service.

1-a Login and Launching an EC2 instance

On the EC2 dashboard, click the Launch Instance button in the Launch Instance section.

1-b Choose Amazon Machine Image (AMI)

Now, in order to set up your machine, it needs an operating system, which serves as the base for the EC2 instance.
Therefore, our next step is to choose an AMI. Filter for free AMIs and look for your preferred image. I went with Ubuntu Server 18.04 LTS.

1-c Choose the type for the AWS EC2 Instance

Once you’ve selected your AMI, you must select the hardware for your machine; AWS calls this the EC2 instance type. Select the instance type that best suits your needs: the amount of RAM and the processor power for your machine.

I selected t2.micro, as this is sufficient for the starting phase of my website. Click Review and Launch. The next three steps can be skipped, as we are not changing any configuration in them.

1-d Configure Instance

This step can be left at its defaults, but if you like you can add private IPs manually or have them assigned automatically. You can also configure basic networking details, such as creating a new Virtual Private Cloud with its own IP range and subnets, or you can go with the default existing VPC. Click Add Storage to move to the next step.

1-e Add Storage

Once you’ve made all the necessary configurations to the instance, add storage to your machine. You can add new volumes and change their type and size, among other options. Click the ‘Next: Add Tags’ button.

1-f Add Tags

This step lets you add labels to your EC2 instances. This is useful when you have created many instances: with labels, it is easy to find a particular one. Let’s move to the next step.

1-g Configure Security groups

In order to restrict traffic on your instance’s ports, you must configure security groups for your instance. Consider them an added firewall mechanism provided by AWS, on top of your instance’s OS firewall.

You can set the inbound and outbound traffic rules for your EC2 instance here. Outbound traffic is allowed everywhere by default.

For this setup, I added three inbound rules to my instance:

  • SSH on port 22 from anywhere: allows you to connect to your EC2 instance through PuTTY (on a Windows machine) or a terminal (on a Mac).
  • HTTP on port 80 from anywhere: allows HTTP requests.
  • HTTPS on port 443 from anywhere: allows HTTPS requests.

You can add or remove rules later as per your convenience. Also, if you would like stricter access, change the Source field from ‘Anywhere’ to Custom IP or My IP.

1-h Review all configuration

Before launching the instance, review everything once more to ensure it is all on track. Next, you must create a key pair to log in to your instance.

1-i Key pairs to access the instance

You will get a modal window where you can either select an existing key pair or create a new one. This key pair is needed for you to access your EC2 instance.

Give any name in the key pair name text box and click Download Key Pair.
Make sure to save the file in a secure, known location, because you will not be able to download it later (as clearly mentioned in the blue box). Finally, click the Launch Instances button and you will see ‘Your instances are now launching’. You can also see the same on the main dashboard.

NOTE: If you want to shut down or terminate your instance, select it, click the Actions button, go to Instance State, and choose the desired operation.

NOTE: Keep in mind that the instance receives its public IP from the AWS pool, and that IP can be lost when the machine is stopped or restarted, so you need to attach a static IP.

Step-2 Create an Elastic IP and connect to your instance

AWS provides the Elastic IP, which is a static public IP. As mentioned above, by default your instance receives a public IP from the pool, which you may lose if the instance is restarted, suspended, or shut down. To create an Elastic IP, go to the EC2 dashboard and, in the Network & Security section of the left-side menu, click Elastic IPs.

The Elastic IP page opens. Now click Allocate Elastic IP address. On the next page, choose ‘Amazon’s pool of IPv4 addresses’ and click Allocate. You will be assigned an Elastic IP. Now we have to associate our EC2 instance with the Elastic IP.

Select the newly created row, click the Actions button, and choose ‘Associate Elastic IP address’.

Choose your instance in the instance box on the next page and click Associate.

So now we are done with the EC2 setup. Let’s connect to the instance.

Step -3 Connect to our EC2 instance

After you launch your instance, you can connect to it and use it the way that you’d use a computer sitting in front of you.

For a Windows instance (AMI), you can connect to your machine using Remote Desktop.

For a Linux instance with Windows as your local machine, you need PuTTY, a free and open-source terminal console and network file transfer application. Enter your Elastic IP and attach your private key.
With a Mac as your local machine, a terminal window is enough.

Connecting from MAC

3-a Open a terminal window and update the permissions of the private key file with the command chmod 400 <path-to-key-file>, e.g. chmod 400 ~/Downloads/my-aws-key.pem; the key must not be publicly viewable for SSH to work.

3-b Then SSH into your instance using the command below:

ssh -i /path/my-key-pair.pem my-instance-user-name@my-instance-public-dns-name

Here, /path/my-key-pair.pem is the path to the key file you downloaded in the last step while creating the EC2 instance.
my-instance-user-name is the default system user account for your AMI. Each Linux instance launches with a default user, determined by the AMI you chose when launching the instance: for Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user; for Ubuntu, it is ubuntu.
my-instance-public-dns-name is the public DNS name of your EC2 instance.

You will see something like:

The authenticity of host 'ec2-198-51-100-1.compute-1.amazonaws.com (198-51-100-1)' can't be established.
ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY.
Are you sure you want to continue connecting (yes/no)?

3-c Type yes and press Enter. You will see a response like the following:

Warning: Permanently added 'ec2-198-51-100-1.compute-1.amazonaws.com' (ECDSA) to the list of known hosts.

and you are successfully connected to your Ubuntu instance.

Step-4 Installation of Node.js + MongoDB + NGINX + PM2

NODE.JS

To get a more recent version of Node.js, add the PPA (personal package archive) maintained by NodeSource. Run the commands below on your instance’s terminal:

# add the NodeSource repository
curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
# install nodejs and npm
sudo apt-get install -y nodejs
#to verify installation
nodejs -v
npm -v

MONGODB

First, update the packages list to have the most recent version of the repository listings:

sudo apt update

Now install mongo package itself

sudo apt install -y mongodb

verify service status

sudo systemctl status mongodb

#OUTPUT
mongodb.service - An object/document-oriented database
Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-05-26 07:48:04 UTC; 2min 17s ago
Docs: man:mongod(1)
Main PID: 2312 (mongod)
Tasks: 23 (limit: 1153)
CGroup: /system.slice/mongodb.service
└─2312 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf

According to systemd, the MongoDB server is up and running.

We can verify this further by actually connecting to the database server and executing a diagnostic command

Execute this command:

mongo --eval 'db.runCommand({ connectionStatus: 1 })'

This will output the current database version, the server address and port, and the output of the status command:

MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
{
"authInfo" : {
"authenticatedUsers" : [ ],
"authenticatedUserRoles" : [ ]
},
"ok" : 1
}

A value of 1 for the ok field in the response indicates that the server is working properly.

A few other commands to manage the MongoDB service:

# start mongodb
sudo systemctl start mongodb
# stop mongodb
sudo systemctl stop mongodb
# restart mongodb
sudo systemctl restart mongodb
# set mongodb to start automatically on system startup
sudo systemctl enable mongodb

PM2

PM2 is a free, open-source, advanced, efficient, and cross-platform production-grade process manager for Node.js. It comes with a built-in load balancer as well, which makes scaling applications even easier. Best of all, it works on Linux, Windows, and macOS.

PM2 allows you to keep your Node.js applications alive forever, and to reload them with zero downtime when you have updates to your application or server.

# install pm2 with npm
sudo npm install -g pm2

# set pm2 to start automatically on system startup
sudo pm2 startup systemd

NGINX

Nginx is an open-source HTTP web server and reverse proxy server.

sudo apt-get install -y nginx

FIREWALL SETTINGS

# allow ssh connections through firewall
sudo ufw allow OpenSSH

# allow http & https through firewall
sudo ufw allow 'Nginx Full'

# enable firewall
sudo ufw --force enable

Step-5 Deploy backend code

Now it’s time to deploy our backend code and test it. But before that, let’s get the backend code ready.

Since we are deploying the front end and back end separately, there are a few points that need to be addressed before deploying the backend code.

  • Make sure you have a package.json at the root with all the backend dependencies that you are using in your backend code. The format below is just an example of a package.json:
{
  "name": "PROJECT_NAME",
  "version": "0.0.0",
  "private": true,
  "dependencies": {
    "body-parser": "^1.19.0",
    "cors": "^2.8.5",
    "express": "^4.17.1",
    "mongoose": "^5.8.3",
    "mongoose-unique-validator": "^2.0.3",
    "tslib": "^1.10.0"
  }
}
  • You should have a JS file where you create the HTTP/HTTPS server and also connect to Mongo.
  1. Clone the Node.js + MongoDB API project into the /opt/back-end directory (you can use any location) with the command below:
    sudo git clone -b <BRANCH_NAME> --single-branch <GIT_CLONE_URL> /opt/back-end
    (use backend-master as BRANCH_NAME if you are going with the GitHub URL provided in this article)
  2. Navigate into the back-end directory and install all required npm packages with the command cd /opt/back-end && sudo npm install
  3. Start the API using the PM2 process manager with command
    sudo pm2 start server.js

Step-6 Configure NGINX

Now we will configure nginx to route our API calls to the backend.

First, connect to the EC2 instance.
Then cd to /etc/nginx/, run cat nginx.conf, and check the include location for site config files (mostly it will be include /etc/nginx/sites-enabled/*;).

Next , delete the default NGINX site config file with the command sudo rm /etc/nginx/sites-enabled/default

Launch the nano text editor to create a new default site config file with sudo nano /etc/nginx/sites-enabled/default and paste the text below:

server {
    listen 80;
    server_name _;

    # node api reverse proxy; my node server is running on port 3000
    location /api/ {
        proxy_pass http://localhost:3000;
    }
}

Here we listen for HTTP requests on port 80 with any server name, and if the URL starts with /api we pass the request to http://localhost:3000.

Press CTRL+X, then Y and Enter to save and exit.

Verify the configuration with:
sudo nginx -t

Then restart the nginx service with sudo systemctl restart nginx.

NOTE: You can check status of your nginx service by sudo systemctl status nginx.

Now you can test your backend API from the browser or from Postman.
If you are going with the GitHub code provided in this article, your URL will be
http://EC2_PUBLIC_DNS_NAME/api/blog/getAllBlogs

NOTE: In case you get a CORS policy error, add the lines below into the default file inside the location block, after proxy_pass:

add_header Access-Control-Allow-Origin '*';
add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
add_header Access-Control-Allow-Headers 'Origin, Content-Type, X-Requested-With, Accept, Authorization';

Step-7 Enable HTTPS on EC2 through Certbot.

Certbot is a free, open-source software tool for automatically obtaining Let’s Encrypt certificates on manually administered websites to enable HTTPS.

Certbot’s dns_route53 plugin automates the process of completing a dns-01 challenge (DNS01) by creating, and subsequently removing, TXT records using the Amazon Route 53 API.

So, for the dns_route53 plugin to work, it needs to be able to connect to AWS using an account with the correct permissions/policy.

Let’s start installing and configuring Certbot on our EC2 instance.

6–a ) Create Policy

First of all, get your hosted zone ID from Route53.

Permissions required :

  • route53:ListHostedZones
  • route53:GetChange
  • route53:ChangeResourceRecordSets

To create a policy, go to the IAM (Identity and Access Management) service, select Policies from the left navigation menu, and click the Create Policy button.

On the Create Policy page, click the JSON tab and paste the policy below into the text editor.
NOTE: in the ChangeResourceRecordSets statement, replace <ZONE_ID> with your hosted zone ID.

{
    "Version": "2012-10-17",
    "Id": "certbot-dns-route53 sample policy",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones",
                "route53:GetChange"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": [
                "arn:aws:route53:::hostedzone/<ZONE_ID>"
            ]
        }
    ]
}

Click Review Policy. On the next page, enter a suitable name and description and click Create Policy. The policy is created successfully.

6-b) Create a user with the above policy

Go to the IAM service, select Users from the left navigation, and click the Add User button.

Enter a user name and check Programmatic Access in the Select AWS access type section.

Click Next:Permissions.

On the next page, select the ‘Attach existing policies directly’ tab and select the policy created in the step above.

Click Next: Tags. Then give a tag name if you want and proceed to Review. Review the settings and finally click Create User.

On this page, you will get an Access Key ID and a Secret Access Key for this user. Note them down and keep them safe. You now have a new user with our policy attached.

6-c) Install the AWS CLI and configure the default CLI profile

Run the command below on the EC2 instance:

sudo apt -y install awscli

After successful installation , run

# command to configure the AWS CLI default profile
aws configure
# enter the Access Key and Secret Key of the user
AWS Access Key ID [None]: R*******E
AWS Secret Access Key [None]: L*******h
# leave region as default and press Enter
Default region name [None]:
# enter json
Default output format [None]: json

and we have successfully configured the AWS CLI default profile. To check the config:
cat /home/ubuntu/.aws/config

Output

[default]
output = json

Perfect: the profile exists and will be used later by Certbot.

To test access, try to get the hosted zones list:

aws route53 list-hosted-zones --output text

You should get your hosted zone.

6-d) Certbot installation and DNS verification

Install Certbot:

sudo apt -y install certbot

And the Route53 plugin

sudo apt -y install python3-certbot-dns-route53

Install the certificate:

sudo certbot certonly --dns-route53 -d api.xyz.com

Note: xyz.com should match your hosted zone name; only then will certbot verify and install the certificate.

While installing the certificate, you will be asked for an email address, which will be used for renewal notices; provide it if you want.

On successful installation, you will get information about the location of the certificate, its issue date, and the command to renew it.

To view certificates:

sudo certbot certificates

6-e) Configure nginx to accept HTTP/S requests

Delete the default NGINX site config file with the command sudo rm /etc/nginx/sites-enabled/default

Launch the nano text editor to create a new default site config file with sudo nano /etc/nginx/sites-enabled/default and paste the text below:

server {
    charset utf-8;
    listen 443 ssl;
    server_name api.xyz.com;

    # certificate names
    ssl_certificate /etc/letsencrypt/live/api.xyz.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.xyz.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    if ($host = api.xyz.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name api.xyz.com;
    return 301 https://$host$request_uri;
}

Here we have created two server blocks, one each for HTTP (port 80) and HTTPS (port 443), for the server name api.xyz.com.
HTTP requests are redirected to HTTPS by the line
return 301 https://$host$request_uri;
HTTPS requests are proxied to localhost:3000 with the necessary headers and certificate details.

Press CTRL+X, then Y and Enter to save and exit.

Verify the configuration with:
sudo nginx -t

Then restart the nginx service with sudo systemctl restart nginx.

6-f) Configure Route53

We have to do two things here. First, create a certificate using Certificate Manager for api.xyz.com and add the CNAME records to Route53. Follow the instructions given in Step-7 of Part-1.

Second, we have to tell our DNS service to route all requests for api.xyz.com to our EC2 instance. For that, go to the Route53 service, enter your hosted zone, and create a Record Set with the values below:

Name: api (making it api.xyz.com, where xyz.com is prewritten)
Type: A (IPv4 Address)
Value: the EC2 public IP, or the Elastic IP in case you plan to stop and start your EC2 instance in the future.

Confirm everything by hitting the backend URL over HTTPS from a browser or Postman:
https://api.xyz.com/api/blog/getAllBlogs
You should get an empty blog array.

Step-8 Connect front end and back end

Now that we have both the front end and back end up and running with HTTPS, it’s time to connect them.

Go to the environment.prod.ts file under src/environments and change the value of apiUrl
from
http://localhost:3000
to
https://api.xyz.com/api

Since we made code changes, we have to upload the build files to S3 again.
Run ng build --prod and upload all the files under the dist folder into both S3 buckets (see Step 5-d in Part-1 for more information). Make sure you empty the buckets before uploading.

And that’s ALL. We are DONE with hosting our website on AWS with HTTPS.

EXTRAS

Dotenv Library

Going down the line of development, you will run into scenarios where a variable stores your credentials, an email, an API endpoint, or some other detail related to your application.

Environment Variables in Node.js

Environment variables allow us to manage the configuration of our applications separate from our codebase. Separating configurations make it easier for our application to be deployed in different environments.

Node.js provides a global object, process.env, that contains all the environment variables available to the user running the application. An app can, for example, expect the hostname and the port it runs on to be defined by the environment.
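As a quick illustration, an entry point can read its host and port from process.env instead of hard-coding them; HOST and PORT are illustrative variable names here, with fallback defaults for when the environment does not define them:

```javascript
// Read configuration from the environment, falling back to defaults
// when a variable is not set.
const HOST = process.env.HOST || 'localhost';
const PORT = Number(process.env.PORT) || 3000;

console.log(`App configured for ${HOST}:${PORT}`);
```

Deploying the same code to a different environment then only requires changing the environment variables, not the source.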

Applications tend to have many environment variables. To better manage them we can use the dotenv library, which allows us to load environment variables from a file.

This library does one simple task: it loads environment variables from a .env file into process.env in Node.js. Let’s put dotenv to use.

First, we need to install it via npm:
npm install dotenv

Add this line at the top of your file:
require('dotenv').config()

Now in the same directory of your app, create a new file called .env and add the following:
HOST=localhost
PORT=3000

Production Usage

Never commit .env to the source code repository. You do not want outsiders gaining access to secrets, like API keys or credentials etc.

Make sure to include the .env file in your .gitignore or the appropriate blacklist for your version control tool.

Now the question arises: if the .env file is not checked in, how can we access its variables? If your app runs on a physical or virtual machine (for example, DigitalOcean droplets, Amazon EC2, or Azure Virtual Machines), you can create a .env file while logged into the server, and it will work just as it does on your local machine.


Prakhar Agarwal

An enthusiastic coder, learner, and mountain lover.