The goal of this tutorial is to have a containerized application built, tested, and deployed to a web server using Docker and GitLab.

This project follows the ideas of this post: Continuous Integration and Deployment with Gitlab, Docker-compose, and DigitalOcean and the course Authentication with Flask, React, and Docker.

The project GitLab repository is here. Note that only source code files are stored there; the project does not have CI/CD enabled. To follow the tutorial, create your own GitLab project and experiment with CI/CD there.


Web Application

The application comprises two services: a Flask REST API backend and a React Frontend.

REST API Backend

The backend is a simple REST API app built using Flask and Flask-RESTX. Since the focus of this project is CI/CD, we only need a single REST endpoint to prove the concept:


In the module, we also use the flask_cors extension to allow incoming requests from the client app.

We also need a simple test for the test stage of the CI:


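A pytest sketch of such a test (the import path and file name are assumptions; in the real project the test imports whatever module holds the app):

```python
# src/test_app.py -- single CI test (import path is an assumption)
from src.app import app


def test_hello():
    client = app.test_client()
    response = client.get("/hello")
    assert response.status_code == 200
    assert "message" in response.get_json()
```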
The production setup of the application requires one more file that gunicorn will use to run the application:


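A sketch of that entry-point module (file name and import path are assumptions; gunicorn only needs a module exposing the `app` object):

```python
# wsgi.py -- entry point loaded by gunicorn, e.g. `gunicorn wsgi:app`
# (file name and import path are assumptions)
from src.app import app  # noqa: F401
```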
Take a look at the requirements.txt file, which lists the dependencies required for the app to run:


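A plausible version of that file (the pinned versions are illustrative, not the originals):

```text
flask==2.2.2
flask-cors==3.0.10
flask-restx==1.0.3
gunicorn==20.1.0
pytest==7.2.0
```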
Containerizing the REST API App

We will use three Dockerfiles: one for local development, one to build and test the app in CI, and one for the release version.

First, let’s create a .dockerignore file to tell Docker what directories and files should not be copied into images:


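A typical .dockerignore for a Flask project looks like this (the exact entries are assumptions):

```text
.dockerignore
Dockerfile*
__pycache__/
*.pyc
env/
.env
```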
The development Dockerfile is quite simple:


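A sketch that matches the description below (base image version is an assumption; the container port 80 follows the docker-compose mapping described later):

```dockerfile
# backend/Dockerfile -- development image (versions are assumptions)
FROM python:3.10-slim

WORKDIR /usr/src/app

# install dependencies first to benefit from layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV FLASK_APP=src/app.py

# development server on port 80, matching the compose port mapping
CMD ["flask", "run", "--host=0.0.0.0", "--port=80"]
```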
It pulls a base image, sets up the app’s working directory, installs the app’s dependencies, copies the app code and starts the development server.

We will test the development setup in a moment with docker-compose, but first, let’s prepare the client application.

Client Web App

The client is a React application that issues a single call to the backend REST API. In our project, we place it under the client/ folder.

The dependencies, settings, and scripts are defined in the package.json file:


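A trimmed-down package.json sketch (package versions are illustrative):

```json
{
  "name": "client",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "axios": "^0.27.2",
    "react": "^17.0.2",
    "react-dom": "^17.0.2",
    "react-scripts": "5.0.1"
  },
  "devDependencies": {
    "@testing-library/jest-dom": "^5.16.5",
    "@testing-library/react": "^12.1.5"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test"
  }
}
```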
The application’s entry point is the index.js file under the src/ directory:


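A minimal sketch of the entry point (React 17-style render is an assumption, consistent with the class component described below):

```jsx
// src/index.js -- client entry point (sketch)
import React from "react";
import ReactDOM from "react-dom";
import App from "./App";

ReactDOM.render(<App />, document.getElementById("root"));
```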
It renders the App component that we define like this:


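One way to write the component matching the description below (heading text and state shape are assumptions):

```jsx
// src/App.js -- sketch of the App component (names are assumptions)
import React, { Component } from "react";
import axios from "axios";

// URL comes from the environment or falls back to the local backend
const API_URL =
  process.env.REACT_APP_BACKEND_SERVICE_URL || "http://localhost:5001";

class App extends Component {
  state = { message: "..." };

  componentDidMount() {
    // load the greeting from the Flask backend
    axios
      .get(`${API_URL}/hello`)
      .then((res) => this.setState({ message: res.data.message }))
      .catch((err) => console.error(err));
  }

  render() {
    return (
      <div>
        <h1>CI/CD demo</h1>
        <p>{this.state.message}</p>
      </div>
    );
  }
}

export default App;
```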
This component renders some simple HTML and a message that it loads using axios from the backend app. The API URL defaults to http://localhost:5001, or is taken from the environment variable REACT_APP_BACKEND_SERVICE_URL. The call is made in the componentDidMount lifecycle function.

For the test stage, we define a simple test using @testing-library/react:


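A smoke-test sketch (the asserted text assumes the heading used in the component sketch above; adjust to your markup):

```jsx
// src/App.test.js -- sketch of a render smoke test
import React from "react";
import { render, screen } from "@testing-library/react";
import App from "./App";

test("renders the heading", () => {
  render(<App />);
  expect(screen.getByText(/CI\/CD demo/i)).toBeInTheDocument();
});
```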
To build the client app, we will also need a public folder that is generated by create-react-app:

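At minimum, the folder needs the HTML shell that create-react-app mounts into (a stripped-down sketch):

```html
<!-- public/index.html -- minimal create-react-app shell (sketch) -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Client</title>
  </head>
  <body>
    <div id="root"></div>
  </body>
</html>
```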
Containerizing the Client App

First, we define the files excluded from copying for Docker:


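A typical client-side .dockerignore (entries are assumptions):

```text
node_modules/
build/
.dockerignore
Dockerfile*
```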
The development Dockerfile will have this content:


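A sketch matching the description below (Node version is an assumption):

```dockerfile
# client/Dockerfile -- development image (versions are assumptions)
FROM node:16-alpine

WORKDIR /usr/src/app

# make locally installed CLI tools (react-scripts) available on PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH

COPY package.json package-lock.json ./
RUN npm install

COPY . .

CMD ["npm", "start"]
```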
Here, we pull the base image, set up the working directory, and add the node_modules/.bin/ folder to the PATH in the image. Then we install the dependencies, copy the app files, and start the development server.

Running the Development Containers

To run the development environment, we prepare the docker-compose.yml file in the project folder (i.e. in the same directory where backend/ and client/ folders are located).


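A compose file consistent with the description below (service and port details follow the text; the container port 3000 for the React dev server is an assumption):

```yaml
# docker-compose.yml -- development setup (sketch)
version: "3.8"

services:
  backend:
    build: ./backend
    volumes:
      - ./backend:/usr/src/app
    ports:
      - "5001:80"
    environment:
      - FLASK_ENV=development

  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
      - /usr/src/app/node_modules   # keep the image's node_modules
    ports:
      - "8007:3000"
    environment:
      - REACT_APP_BACKEND_SERVICE_URL=${REACT_APP_BACKEND_SERVICE_URL}
    stdin_open: true                # keep the dev server container alive
```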
We define two services: backend and client. We bind the ./backend folder as a volume to the backend container and map the host system's port 5001 to port 80 of the container. Also, FLASK_ENV is set to development.

The client service binds the ./client/ folder as a volume. To avoid overwriting the client’s node_modules directory when the ./client/ folder is mounted, we also bind an anonymous volume /usr/src/app/node_modules.

To keep the container from exiting after the development server starts, we also specify stdin_open: true.

The app running in this container will be accessible from the host machine via the mapped port 8007.

The REACT_APP_BACKEND_SERVICE_URL variable used by the app receives its value from the environment. We can set its value in the .env file:

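For the development setup, the value points at the backend's mapped host port:

```text
REACT_APP_BACKEND_SERVICE_URL=http://localhost:5001
```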
Note that the .env file is excluded from the repository, so you will have to create it manually.

We start both containers by running:

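```sh
docker-compose up -d --build
```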
Make sure that the containers are running:

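```sh
docker-compose ps
```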
Check the app in the browser:

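Or check both services from the command line (ports follow the compose mapping above):

```sh
curl http://localhost:8007          # React app
curl http://localhost:5001/hello    # backend endpoint
```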
CI/CD with GitLab

We are going to use GitLab to set up a pipeline that will build, test, and deploy the web app to a remote server.

We create the .gitlab-ci.yml file and add this to the beginning:


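A plausible header (the IMAGE variable name and base-image tags are assumptions; the stage names follow the sections below):

```yaml
# .gitlab-ci.yml -- pipeline header (sketch)
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - test
  - production
  - deploy

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}
```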
Build Stage: Backend

In this stage, we build intermediate images for the backend and client services that we will use to run tests. The stage is defined like this:


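A job definition consistent with the description below (the Dockerfile name Dockerfile.prod and the job name are assumptions):

```yaml
build-backend:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    # pull the previous image so its layers can be reused as cache
    - docker pull $IMAGE:backend || true
    - docker build
        --cache-from $IMAGE:backend
        --tag $IMAGE:backend
        --build-arg SECRET_KEY=$SECRET_KEY
        --file ./backend/Dockerfile.prod
        ./backend
    - docker push $IMAGE:backend
```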
For the backend service, we use the docker-in-docker image to run scripts that pull the previously built backend image (tagged :backend) for caching and rebuild it from the ./backend/ folder. In the script command, we also pass a SECRET_KEY variable that can be used in the Dockerfile. The newly built image is tagged :backend again and pushed back to the GitLab project's registry.

The Dockerfile for the backend service has this content:


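A multi-stage sketch matching the description below (file name, user name, and versions are assumptions):

```dockerfile
# backend/Dockerfile.prod -- CI/production build (sketch)

###########
# builder #
###########
FROM python:3.10-slim as builder

WORKDIR /usr/src/app
COPY requirements.txt .
# pre-build all dependencies as wheels
RUN pip wheel --no-cache-dir --wheel-dir /usr/src/app/wheels -r requirements.txt

#########
# final #
#########
FROM python:3.10-slim

RUN adduser --disabled-password --gecos "" worker
WORKDIR /home/worker/app

COPY --from=builder /usr/src/app/wheels /wheels
COPY requirements.txt .
# install from the pre-built wheels: no downloads, no recompilation
RUN pip install --no-cache-dir --no-index --find-links=/wheels -r requirements.txt

COPY . .
RUN chown -R worker:worker /home/worker/app

ARG SECRET_KEY
ENV SECRET_KEY=${SECRET_KEY}

USER worker
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "src.app:app"]
```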
In this Dockerfile, we use a multi-stage build to reduce the resulting image size. The builder image installs the requirements and archives them as wheels using the pip wheel command.

The final image copies the wheels and installs dependencies from them without downloading or recompiling packages. In this image, we also use a non-root account to run the production-ready gunicorn app server.

The total size of the final image is thus smaller due to the use of wheels.

Build Stage: Client

The build stage for the client image is defined like this:


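A job sketch consistent with the description below (the Dockerfile name Dockerfile.ci and job name are assumptions):

```yaml
build-client:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:client || true
    # REACT_APP_BACKEND_SERVICE_URL comes from the project's CI/CD variables
    - docker build
        --cache-from $IMAGE:client
        --tag $IMAGE:client
        --build-arg REACT_APP_BACKEND_SERVICE_URL=$REACT_APP_BACKEND_SERVICE_URL
        --file ./client/Dockerfile.ci
        ./client
    - docker push $IMAGE:client
```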
Here, too, we use a container based on the docker-in-docker image to build the client image and push the result back to the project registry. Note that we are setting the environment variable REACT_APP_BACKEND_SERVICE_URL before running the build script. Its value is taken from the GitLab project's settings, in the CI/CD Variables section.

The Dockerfile under /client/ used in this stage has this content:


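As noted below, this is essentially the development Dockerfile plus the build argument (a sketch):

```dockerfile
# client/Dockerfile.ci -- CI image, near-identical to the dev one (sketch)
FROM node:16-alpine

WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH

# bake the backend URL into the build environment
ARG REACT_APP_BACKEND_SERVICE_URL
ENV REACT_APP_BACKEND_SERVICE_URL=${REACT_APP_BACKEND_SERVICE_URL}

COPY package.json package-lock.json ./
RUN npm install

COPY . .

CMD ["npm", "start"]
```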
The simplicity of the current project allows us to use basically the same Dockerfile as for the development image.

Test Stage

The test stage jobs are run on the images built in the build stage. The definition of the test stage looks like this:


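One way to define it is to run each test job directly inside the image built in the previous stage (job names and in-image paths are assumptions):

```yaml
test-backend:
  stage: test
  image: $IMAGE:backend
  script:
    - cd /home/worker/app   # path inside the image is an assumption
    - python -m pytest .

test-client:
  stage: test
  image: $IMAGE:client
  script:
    - cd /usr/src/app
    - CI=true npm test      # CI=true disables watch mode
```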
Production Stage

In the production stage, we build the final release image(s) that will be deployed to the production server.

This project allows both services to be run from the same container. We build the client service image first (tagged :build-react), then build the backend image (tagged :production) and copy the built React application (just static files) from the client image into a predefined directory in the backend image. In the backend image, we set up Nginx to serve the static files or reverse-proxy API requests to gunicorn, which runs the Flask REST API backend.

The Production stage is defined like this:


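A sketch of such a job, building the two targets of Dockerfile.deploy in sequence (job name and `--target` usage are assumptions):

```yaml
build-production:
  stage: production
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:build-react || true
    - docker pull $IMAGE:production || true
    # first build only the React stage and push it for caching
    - docker build
        --target build-react
        --cache-from $IMAGE:build-react
        --tag $IMAGE:build-react
        --build-arg REACT_APP_BACKEND_SERVICE_URL=$REACT_APP_BACKEND_SERVICE_URL
        --file Dockerfile.deploy
        .
    # then build the full release image
    - docker build
        --cache-from $IMAGE:production
        --tag $IMAGE:production
        --build-arg SECRET_KEY=$SECRET_KEY
        --file Dockerfile.deploy
        .
    - docker push $IMAGE:build-react
    - docker push $IMAGE:production
```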
The GitLab configuration uses a special Dockerfile.deploy file that has this content:


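A three-part sketch matching the description below (paths, versions, and the port-rewriting command are assumptions; preparing write permissions for the nginx user follows the non-root article referenced later):

```dockerfile
# Dockerfile.deploy -- release build in three parts (sketch)

###############
# build-react #
###############
FROM node:16-alpine as build-react
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
ARG REACT_APP_BACKEND_SERVICE_URL
ENV REACT_APP_BACKEND_SERVICE_URL=${REACT_APP_BACKEND_SERVICE_URL}
COPY client/package.json client/package-lock.json ./
RUN npm install
COPY client/ .
# compile static files instead of starting the dev server
RUN npm run build

######################
# production-builder #
######################
FROM python:3.10-slim as production-builder
WORKDIR /usr/src/app
COPY backend/requirements.txt .
RUN pip wheel --no-cache-dir --wheel-dir /usr/src/app/wheels -r requirements.txt

##############
# production #
##############
FROM nginx:stable
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3-pip \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
COPY --from=production-builder /usr/src/app/wheels /wheels
COPY backend/requirements.txt .
RUN pip3 install --no-index --find-links=/wheels -r requirements.txt
COPY backend/ .
# React static files served by Nginx
COPY --from=build-react /usr/src/app/build /usr/share/nginx/html
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/nginx.conf
ARG SECRET_KEY
ENV SECRET_KEY=${SECRET_KEY}
USER nginx
# start gunicorn in the background, rewrite the listen port from the
# environment, then run Nginx in the foreground
CMD gunicorn --bind 0.0.0.0:5000 --daemon src.app:app \
    && sed -i "s/listen 8765/listen $PORT/" /etc/nginx/conf.d/default.conf \
    && nginx -g "daemon off;"
```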
In the first part, we build the build-react image. The difference here is that in the end, we do not start the development server. Instead, we run the npm run build command that compiles the static files of the React app.

The second part creates an intermediate image production-builder that installs requirements for the Flask app and prepares wheels for the production image.

In the third part, we use an nginx image as the base. We install python dependencies from wheels built by production-builder. We also copy the React static files from the build-react image and the Nginx configuration.

We run the gunicorn server, modify the Nginx config to use the port number passed from the environment variables, and, finally, start the Nginx server.

The Nginx configuration serves the static files of the React app and forwards requests to the API endpoint /hello to the gunicorn server running on port 5000.


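A site configuration sketch consistent with that description (file name and listen port follow the assumptions used elsewhere in this tutorial):

```nginx
# nginx/default.conf -- sketch
server {
    listen 8765;

    # React static files
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # REST API forwarded to gunicorn
    location /hello {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```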
The release image is tagged :production and is ready to deploy.

We will also run the production container as the non-root user nginx. More details on how to set it up are here: Run Docker nginx as Non-Root-User. For this purpose, the release stage overrides the default nginx.conf file with the custom version we have in /nginx/nginx.conf:


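A non-root variant sketch, following the pattern from the linked article (moving the pid file and temp paths to user-writable locations):

```nginx
# nginx/nginx.conf -- non-root variant (sketch)
worker_processes auto;

# writable location for a non-root user
pid /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path       /tmp/proxy_temp;
    fastcgi_temp_path     /tmp/fastcgi_temp;
    uwsgi_temp_path       /tmp/uwsgi_temp;
    scgi_temp_path        /tmp/scgi_temp;

    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;
}
```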

Deploy Stage

The definition of the deploy stage is this:

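A job sketch implementing the steps listed below (the image, target folder ~/app/, and ssh setup are assumptions):

```yaml
deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - ssh-keyscan -H $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
  script:
    # REST API is served from the same host and port as the client
    - echo "REACT_APP_BACKEND_SERVICE_URL=/" >> environment.env
    - echo "SECRET_KEY=$SECRET_KEY" >> environment.env
    - scp environment.env docker-compose.deploy.yml
        $DEPLOYMENT_USER@$DEPLOYMENT_SERVER_IP:~/app/
    - ssh $DEPLOYMENT_USER@$DEPLOYMENT_SERVER_IP
        "cd ~/app
         && sudo docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
         && sudo docker-compose -f docker-compose.deploy.yml pull
         && sudo docker-compose -f docker-compose.deploy.yml down
         && sudo docker-compose -f docker-compose.deploy.yml up -d"
  only:
    - master
```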
In this stage, GitLab spins up a temporary container that connects to the remote server via SSH and runs a set of commands that:

  • creates a file environment.env and copies environment variables into it from the project variables; the value of REACT_APP_BACKEND_SERVICE_URL is set to “/” (since the REST API runs on the same host and port);
  • copies the environment.env and docker-compose.deploy.yml files to a specified folder on the remote server;
  • logs in to the project’s GitLab registry with docker login;
  • pulls the newer version of the image;
  • stops and removes the running containers;
  • starts the containers from the updated image.

These operations require that the remote server has both docker and docker-compose installed. We will also need a key pair that will allow GitLab to log in to the remote server.

We will need three project variables for the project:

  • DEPLOYMENT_SERVER_IP – address of the remote server
  • DEPLOYMENT_USER – user name used to login to the remote server
  • DEPLOY_SERVER_PRIVATE_KEY – private key for the user on the remote server

Go to “Settings” > “CI/CD” and open the “Variables” section.

Preparing the Remote Server

The remote server has to allow the deployment user passwordless sudo execution of the docker and docker-compose commands. This is done by creating a file named after the user under /etc/sudoers.d/, e.g. for the user developer (/etc/sudoers.d/developer):


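A sketch of that file (the binary paths may differ on your distribution; check them with `which docker`):

```text
# /etc/sudoers.d/developer (sketch)
developer ALL=(ALL) NOPASSWD: /usr/bin/docker, /usr/local/bin/docker-compose
```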
The remote server will use docker-compose.deploy.yml to start the production container:


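A compose file matching the description below (the registry path and service name are assumptions; ports follow the text):

```yaml
# docker-compose.deploy.yml -- production setup (sketch)
version: "3.8"

services:
  web:
    image: registry.gitlab.com/<namespace>/<project>:production
    ports:
      - "8008:8765"
    env_file:
      - environment.env
```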
Here, the process is simple: a single service, host port 8008 mapped to the container's port 8765, and environment variables read from environment.env.

Commit the project to a repository whose origin is set to GitLab and push the master branch upstream. This will trigger a pipeline. If everything goes right, the pipeline will look like this:

Now go to your browser and type the host name or the IP address of your server followed by port 8008. The response should be the same one we see when testing the development server.

Possible Problems

On the remote server, there may be a warning about permission denied on the ~/.docker/config.json file. Make sure that the user has sufficient permissions on the .docker directory:
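One common fix (run as the deployment user on the remote server):

```sh
sudo chown -R "$USER":"$USER" ~/.docker
sudo chmod -R g+rwx ~/.docker
```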