Dockerizing a React application

Rohan Aggarwal
8 min read · Oct 3, 2020

In this blog, we will learn how to containerize a React application. We will cover generic steps that apply to any React application. For the sake of completeness, we will start from scratch: first we will create a React application, and then we will containerize it.

If you have just started working with Docker, I highly recommend checking out my other blog first, which covers the basic Docker commands:

https://medium.com/swlh/important-docker-commands-you-should-know-60735f821068

Creating a React application

npx create-react-app my-app
cd my-app
npm install

The first command creates our project, the second navigates into the project directory, and the third downloads all the dependencies required for the project.

Note: We need a recent version of Node installed to run these commands.

Creating a Dockerfile

To containerize any application we need to start a Docker container, which is nothing but an instance of a Docker image. We can get Docker images from Docker Hub, images like alpine, Redis, etc. Docker Hub contains many popular images, but here we want to containerize our own application, so we have to create our own image.

An image can be built from a Dockerfile.

So we will create a file named ‘Dockerfile’. The name matters: we should call it ‘Dockerfile’. We can use a different name, but then we have to perform some extra steps, which we will discuss in another blog. For now, we will name it ‘Dockerfile’ and create it inside our project ‘my-app’, so it sits next to src and node_modules.

What to add in a Dockerfile

Step 1: Adding a base image

So let's imagine someone gives you a system and asks you to run a React application on it. What is the first thing you need in that system?

The first thing you need is an operating system; without that no system can work. After that, you need Node installed to run the React application.

To cover these basic requirements, Docker has the concept of a base image. We select the base image based on the application we want to containerize. For a React application we can use the node:alpine image, which gives us a Linux operating system with Node installed.

So the first line to add to the Dockerfile is:

FROM node:alpine

Step 2: Adding a working directory

So now you have a system with an operating system and Node installed. The second step is to copy the React application into the system; for that you create a new folder (or pick an existing one) where you will copy all the files.

Similarly, in the Dockerfile we can specify the folder where we want to store our application using the command:

WORKDIR /usr/app

If the folder does not exist, this command will create it; we can choose any folder we want.

Step 3: Copying the application

Now, after selecting the folder, you would copy the code into it. Similarly, we need to copy the application into our Docker image; for that we can use this command in our Dockerfile:

COPY ./ ./

It copies everything from the current directory, where the Dockerfile exists (the build context), into the folder we specified (/usr/app). Make sure to delete node_modules from the project before running the build, because we don't want to copy the large node_modules folder; we will recreate it inside the container with the npm install command.
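Another option, instead of deleting the folder manually, is to place a .dockerignore file next to the Dockerfile so node_modules never gets sent into the build context in the first place. A minimal sketch:

node_modules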

Step 4: Running application-specific commands

After copying the application into the system, the next step is to resolve the dependencies using the ‘npm install’ command.

Similarly, to resolve the dependencies of the React application inside the Docker image, we can add the command:

RUN npm install

We can have multiple RUN commands; here, as per our requirement, we added only one.

Step 5: Adding a startup command

Now, after all these steps, the application is ready, so we can start it with the start command (‘npm start’) whenever we want.

So we will make it our startup command, which will get executed whenever someone creates a container out of this Docker image.

CMD ["npm","start"]

We pass the command in an array format.

Now our Dockerfile is complete.

So the final Dockerfile, putting all five steps together, will look something like this:
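FROM node:alpine
WORKDIR /usr/app
COPY ./ ./
RUN npm install
CMD ["npm","start"]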

Building the Dockerfile

Now that our Dockerfile is ready, we will build it to get our Docker image. Open a terminal in the folder where the Dockerfile is present and execute:

docker build .

Here the dot represents the build context (the current directory).

When you run this command, you will see that Step 4 (npm install) takes comparatively more time than the others; we will try to fix this later in the blog.

Once it is done, you will get an image id, something like 1d88cc74aac8.
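As a small convenience, you can also give the image a name with the -t flag (my-app here is just an example tag) so you don't have to copy the id each time:

docker build -t my-app .

The name can then be used anywhere an image id is expected.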

Running the Docker image

Now we can run our Docker image using the image id:

docker run -it -p 3000:3000 {image-id}

-p is for port mapping; it means any call coming to localhost on port 3000 will be redirected to the container's port 3000.
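The two numbers don't have to match: the left side is the host port and the right side is the container port. For example, to reach the container on port 8080 of your machine instead, you could run:

docker run -it -p 8080:3000 {image-id}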

-i is short for --interactive; it keeps STDIN open so you can interact with the container. -t is short for --tty; it allocates a pseudo-terminal that connects your terminal with the container (STDIN and STDOUT).

It will start the development server, and the application will be reachable at http://localhost:3000 in the browser.

Temporary containers

We saw that when we built the image for the first time, it took some time, but if we build the same image again it takes far less time than before.

Why?

Because the first time, Docker created temporary containers while executing each step of the Dockerfile and stored the resulting layers in the local cache. So when we build the same Dockerfile again, for each step Docker checks whether a matching layer is already present in the cache; if yes, it reuses it directly, otherwise it creates a new temporary container for that step. We can see this in the build logs as well.

In the build output, we can see the temporary container ids and the ‘Using cache’ lines.

If we change something, like a file in our application, then the step dealing with that file will not use a cached container; Docker will create a new temporary container for that step and for all the steps following it.

For example, I changed the App.js file.

In the build output we can see that from Step 3 onwards, Docker created new temporary containers.

Problem: it's good that Docker can detect the changes, but we have not made any change to package.json, so there is no need to rerun Step 4 (installing dependencies).

NOTE: Docker will also create new containers if we change the sequence of steps in the Dockerfile.

Refactoring the Dockerfile

As we saw, Docker reruns every step that follows the step where a change was made. We cannot change how Docker works, but we can restructure our Dockerfile to work around this problem.
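The refactored Dockerfile will look something like this:

FROM node:alpine
WORKDIR /usr/app
# Copy package.json first so the npm install step can come from the cache
COPY package.json ./
RUN npm install
# Copying the rest of the project afterwards no longer invalidates the install step
COPY ./ ./
CMD ["npm","start"]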

Here we made one change: first we copy only the package.json file, then run ‘npm install’ to resolve the dependencies, and only after that do we copy the rest of the project.

So now, even if we make a change to an application file (other than package.json), the npm install step will come from the cache when we rebuild the image.

So let's check by making a change in the application.


In the build output we can see that after making the change to the application, only Step 5 (copying the project) and the steps following it created new temporary containers.

Dockerizing a React project for production

To run an application in production, we first need to make a production version of it using ‘npm run build’; this processes all the JavaScript files and bundles them into a small set of optimized static files in the build folder.

In the development container we have a development server, so whenever we make a request to port 3000, it is redirected to this development server, which interacts with our application inside the container and returns the response to the browser.

But this development server does not exist in our production environment: we are not making any more changes to the code, we just compile and compress it into a build.

So we need some server that can sit between this build and the browser. For that we will use Nginx.

Nginx

Nginx is a very popular web server. We don't add any custom logic to it here; it is just used for serving the build and routing traffic in and out of the application.

Building the production Dockerfile

Let's again understand this with an example: what would you do to deploy an application to production on a plain system?

First, you would create a production build using ‘npm run build’, then you would hand this build to a production server and start it. Simple, right?

We are going to do the same in our Dockerfile: first we write the steps to create the build, then we pass this build to a production server (Nginx) and start the server.

So our Dockerfile has to do two tasks: the first is to create the build, and the second is to copy the build into Nginx.
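Putting that together, the two-stage Dockerfile will look something like this (assuming the default content directory of the official nginx image, /usr/share/nginx/html):

# Stage 1: create the production build
FROM node:alpine as builder
WORKDIR /usr/app
COPY package.json ./
RUN npm install
COPY ./ ./
RUN npm run build

# Stage 2: serve the build with Nginx
FROM nginx
COPY --from=builder /usr/app/build /usr/share/nginx/html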

Here we divided our Dockerfile into two stages for the two tasks. In the second base image, we copy the build output that was created in the first stage; we don't need a startup command here because the Nginx image already comes with one.

We can build this image with the same command as before:

docker build .

It will give us an image id, which we will use in the next step.

Running production build

Now we need to map a localhost port to the default port of Nginx, which is 80.

docker run -it -p 8080:80 {image-id}
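Now the production build is served by Nginx and should be reachable at http://localhost:8080 in the browser.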
