Build a CRUD API with MongoDB, Express, and Docker – Hacker Noon


The world is being eaten by CRUD APIs, so why not learn to build one? Projects like this one are a very common interview take-home. In this tutorial we'll build an API using Express.js, backed by a MongoDB database, deployed with Docker Compose, and tested with Mocha and Travis CI. The design of this API and its various components is loosely based on the 12 Factor App methodology.

Setup

You'll need Node.js, Docker, and Docker Compose installed. Additionally you'll need a free GitHub account and a free Travis CI account. This tutorial assumes basic familiarity with JavaScript (some ES6 features like arrow functions will also be used) and Bash (or the shell for your OS). All of the code for this tutorial can also be found on GitHub.

First let’s create a new directory (mkdir) and cd into it. Then create a new Node.js package inside with npm init. Also initialize a new Git repository with git init. Next we’ll install npm dependencies using npm i --save express mongodb body-parser and npm i --save-dev mocha tape supertest. After that create a directory structure that looks like this:

crud-api
├── docker/
├── models/
├── routes/
└── tests/

The next couple of steps create auxiliary files for environment configuration and ignores. First let's create a .dockerignore and a .gitignore. The Docker ignore file tells Docker which files to exclude from the build context, while the Git ignore does the same for your Git commits. Mine look like this:
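Ignore files for this stack typically look something like the following sketch; the exact entries are an assumption:

```
# .dockerignore and .gitignore — both should exclude at least:
node_modules
npm-debug.log
.env

# .gitignore should additionally ignore local Docker data directories, e.g.:
docker/data
```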

After those you’ll need a .env file that looks like my example.env. This provides information to Docker Compose that is essential for our application but should never be stored in version control.
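A sketch of what such an example.env might contain — every variable name here is an assumption, not necessarily the exact keys in the repository:

```
MONGO_INITDB_ROOT_USERNAME=admin
MONGO_INITDB_ROOT_PASSWORD=change-me
MONGO_HOSTNAME=database
MONGO_PORT=27017
PORT=80
```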

Optionally, but highly recommended, you could create a README.md and a LICENSE. Let’s write some code!

Write the Server

We'll be building this Express server across 3 files: index.js, routes/routes.js, and models/Document.js. Express.js is a minimalist Node.js web framework built around the concept of middleware. As a request comes into the server it flows through each middleware in order until either it reaches the end unhandled and errors out, or a handler does some computation and returns a response. Let's start off with index.js.

index.js

This should look familiar if you've worked with Express.js before, but we'll go through it section by section. First come the imports; body-parser may not be an obvious package, but it's an important piece of middleware: it parses the body of incoming requests so they're easier to work with inside our routes in the next section. Following that we set our database name based on the development or production environment, assign the MongoDB URL from environment variables, and set options for our MongoDB client. The first option uses the newer URL parser (otherwise you'll get a deprecation warning) and the other two govern behavior when our database client loses its connection. Next we import our router, set our server port from the environment (defaulting to 80), and the last two lines are boilerplate Express setup.

Now we get into the meat of our server, the middleware stack. You generally want your body-parser middleware first in an API so every request gets parsed. Then your request hits your server's router; if it matches any of the routes described inside, then the corresponding function triggers and all is well. If no routes are matched, our server returns a 404 Not Found.

Our final section connects our MongoClient to our MongoDB instance. Always handle your errors in some manner; in this case we want to log the error and exit. After error handling we assign our database connection to a server-global variable and start our server. The app.emit alerts our test suite when our server has properly started, and we export the server so we can import it in our tests.
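Putting those sections together, index.js might look like the following sketch; the client option names, the 'ready' event name, and the MONGO_URL variable are assumptions consistent with the description:

```javascript
// index.js — a sketch reconstructed from the description above.
const express = require('express');
const bodyParser = require('body-parser');
const { MongoClient } = require('mongodb');
const router = require('./routes/routes');

// Database name depends on the environment.
const dbName = process.env.NODE_ENV === 'dev' ? 'dev' : 'prod';
const url = process.env.MONGO_URL;
const options = {
  useNewUrlParser: true,            // avoids the parser deprecation warning
  reconnectTries: Number.MAX_VALUE, // keep retrying on a lost connection
  reconnectInterval: 1000
};

const port = process.env.PORT || 80;
const app = express();

// Middleware stack: parse bodies first so every route sees req.body.
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use('/api', router);
// Nothing matched: return a 404.
app.use((req, res) => res.status(404).json({ error: 'Not Found' }));

MongoClient.connect(url, options, (err, client) => {
  if (err) {
    console.error(err);
    process.exit(1); // fail fast if the database is unreachable
  }
  app.locals.db = client.db(dbName);         // server-global database handle
  app.listen(port, () => app.emit('ready')); // signal the test suite
});

module.exports = app;
```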

Once you’ve written all of that let’s build out our routes!

routes/routes.js

In this file we initialize and export an Express Router that contains all of our API routes. Each route takes the form of the router object followed by the HTTP method. Inside each route we then define the path and an arrow function to handle our request (req) and response (res). The next callback is invoked when a handler does not return a response and instead wants to pass the request further down the middleware stack. In each route, inside the arrow function, we start off by getting the MongoDB connection from our server-level variable req.app.locals.db, followed by the collection we want. In this case we only have one collection, documents. Then in each case we call a method to return some data or an error from the database. In order, here are the functions and their corresponding routes:

  • find() gets all documents in the collection — /documents/all
  • findOne() gets a specific document in this case based on a document id provided by the client — /documents/:id
  • insertOne() uploads a new document into the database — /documents/new
  • deleteOne() removes a document based on a document id provided by the client — /documents/delete/:id
  • updateOne() changes a document based on a JSON request body sent by the client — /documents/edit/:id

A colon in front of a word in a route denotes a parameter that is accessed inside the handlers using req.params. _id is automatically assigned to each document by MongoDB which is why we don’t have a unique identifier or other primary key in our data model. After we’ve made a database query we either get back data (result) or an error (err). Our error handling behavior is to use our res object to send back an HTTP 400 and JSON that contains the error. Otherwise we send back HTTP 200 and the result of the query.
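Assembled, the router could be sketched as follows; the HTTP verb chosen for each route and the shared reply helper are assumptions consistent with the list above:

```javascript
// routes/routes.js — a sketch of the five routes described above.
const express = require('express');
const { ObjectID } = require('mongodb');
const router = express.Router();

// Shared callback: HTTP 400 + the error, or HTTP 200 + the result.
const reply = (res) => (err, result) => {
  if (err) return res.status(400).json({ error: err });
  res.status(200).json(result);
};

router.get('/documents/all', (req, res) => {
  const db = req.app.locals.db; // connection stored by index.js
  db.collection('documents').find({}).toArray(reply(res));
});

router.get('/documents/:id', (req, res) => {
  const db = req.app.locals.db;
  db.collection('documents').findOne({ _id: ObjectID(req.params.id) }, reply(res));
});

router.post('/documents/new', (req, res) => {
  const db = req.app.locals.db;
  db.collection('documents').insertOne(req.body, reply(res));
});

router.delete('/documents/delete/:id', (req, res) => {
  const db = req.app.locals.db;
  db.collection('documents').deleteOne({ _id: ObjectID(req.params.id) }, reply(res));
});

router.put('/documents/edit/:id', (req, res) => {
  const db = req.app.locals.db;
  db.collection('documents').updateOne(
    { _id: ObjectID(req.params.id) },
    { $set: req.body }, // apply the JSON request body as field updates
    reply(res)
  );
});

module.exports = router;
```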

After that we export our router object so we can use it in index.js as routing middleware for the /api route. This means that our full path for every API route is /api/documents/. Finally, we move on to defining our data model.

models/Document.js

This file should be fairly legible. It's a JavaScript class with a constructor that takes 3 strings and stores them. This acts as the schema for the data in MongoDB.
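A minimal sketch of that class; the three field names are assumptions, since the article doesn't name them:

```javascript
// models/Document.js — a class whose constructor takes three strings
// and stores them. Field names (title, author, body) are assumptions.
class Document {
  constructor(title, author, body) {
    this.title = title;   // document title
    this.author = author; // who wrote it
    this.body = body;     // the document's contents
  }
}

module.exports = Document;
```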

With the server taken care of let’s move on to deploying our server with Docker!

Dockerize Your API

First we need to write the Dockerfile.production for our production build. We start off by basing our container on the node:10.12.0-alpine image. The first layer after that downloads a script that waits for arbitrary services to start, so our server doesn't come online before its dependencies; we make it executable with chmod +x. After that we set our working directory. Next we let our application know that we are in production by assigning the environment variable NODE_ENV (in our server we only check if NODE_ENV === 'dev', but we might want to explicitly check for 'prod' later on). Before copying any other files we copy our package.json and install our dependencies so that Docker can cache them for subsequent builds. Following that we take our ARG port from docker-compose.yml and .env and expose it to the internal Docker network. After that we copy the rest of our files. Finally we execute our script to wait for MongoDB to come online and then run our server.
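A sketch of Dockerfile.production following those steps; the wait-script source (ufoscout/docker-compose-wait) and the pinned version are assumptions:

```dockerfile
# docker/Dockerfile.production — a sketch of the build described above.
FROM node:10.12.0-alpine

# Script that blocks until the services listed in WAIT_HOSTS are reachable.
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.2.1/wait /wait
RUN chmod +x /wait

WORKDIR /usr/src/app
ENV NODE_ENV production

# Copy the manifest first so npm install is cached between builds.
COPY package.json .
RUN npm install

# Port is injected from docker-compose.yml / .env.
ARG port
EXPOSE $port

COPY . .

# Wait for MongoDB, then start the server.
CMD /wait && node index.js
```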

Our Dockerfile.test is similar to the first; in fact, it's based on the first. After pulling the production container we install our devDependencies and our test runner, Mocha. Finally, we invoke the test runner (--exit preserves older exit behavior so Mocha quits once the run finishes). The rest of this file should be almost the same as the production version.
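A sketch of Dockerfile.test; the tag used for the production image is an assumption about how the build gets labeled:

```dockerfile
# docker/Dockerfile.test — based on the production image, plus dev tooling.
FROM docker_backend:latest

WORKDIR /usr/src/app
ENV NODE_ENV dev

# Install devDependencies and the Mocha test runner.
RUN npm install --only=dev && npm install -g mocha

# Wait for MongoDB, then run the test suite and exit when done.
CMD /wait && mocha --exit tests/
```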

Now that we have both of our Dockerfiles in place, let's build the docker-compose.yml that brings all of our containers online. We're using version 3.0 of the docker-compose.yml format, which is somewhat different from version 2.x, so be careful when consulting Stack Overflow or other tutorials. Our main section is services, and it consists of our Node.js/Express.js backend and our MongoDB database. In backend we first encounter build: since our file lives in docker/ we need to set the context to the project root, specify a Dockerfile path relative to that root, and inject our port environment variable. Next we load our .env, which contains vital application secrets not meant to be committed to Git. We want our server to fail often and gracefully, so we have Docker Compose always restart it. Then we bind our container's exposed port. Finally, WAIT_HOSTS is needed by the script that ensures our server doesn't come online before our database.

Next comes our database, which is based on an image published on Docker Hub instead of a local Docker image that needs to be built. Once again we import our .env for this section. Then we mount a place to store data so it persists between containers. Next we expose MongoDB's port to the internal Docker network, but not to the host OS. Finally, we issue a command so Docker Compose can start our MongoDB instance.
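A sketch of docker/docker-compose.yml matching that description; the service names, image tag, and paths are assumptions:

```yaml
# docker/docker-compose.yml — a sketch of the two services described above.
version: "3.0"
services:
  backend:
    build:
      context: ..                     # project root, since this file lives in docker/
      dockerfile: docker/Dockerfile.production
      args:
        port: ${PORT}                 # injected from .env
    env_file: ../.env                 # application secrets, kept out of Git
    restart: always                   # fail often, come back gracefully
    ports:
      - "${PORT}:${PORT}"             # bind the container's exposed port
    environment:
      WAIT_HOSTS: database:27017      # consumed by the wait script
  database:
    image: mongo:4.0                  # published Docker Hub image
    env_file: ../.env
    volumes:
      - ./data/db:/data/db            # persist data between containers
    expose:
      - "27017"                       # internal Docker network only, not the host
    command: mongod
```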

Our docker-compose.test.yml has only a couple of differences, but they're fairly important. First we change the Dockerfile to Dockerfile.test. Second we use a different data storage location for testing so as not to contaminate our production data.
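Those two differences could be sketched like this, assuming the rest of the file mirrors the production version:

```yaml
# docker/docker-compose.test.yml — only the lines that differ (a sketch).
services:
  backend:
    build:
      dockerfile: docker/Dockerfile.test   # test image instead of production
  database:
    volumes:
      - ./data/test:/data/db               # separate directory for test data
```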

At last we come to our npm scripts. These are just aliases to docker-compose commands so we don't have to type long commands and have a single place to change a command if necessary. The -f flag points at a docker-compose.yml, since we aren't storing ours in the project root. The -d flag after up backgrounds the process once all containers come online or fail.
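In package.json those scripts might look like this sketch; the script names match the commands used later in the article, but the exact flags are assumptions:

```json
{
  "scripts": {
    "build": "docker-compose -f docker/docker-compose.yml build",
    "production": "docker-compose -f docker/docker-compose.yml up -d",
    "test": "docker-compose -f docker/docker-compose.test.yml up --build"
  }
}
```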

Once you've completed all of that, it's time to move on to our final section and test our code and deployment.

Test Your Code

We’re going to write one integration test; for an actual, production application you probably want a series of unit tests in addition to at least one integration test. For unit tests you’d need a unique it() function for each test with a beforeEach() function to insert a test document and an afterEach() function to remove it after the test completes.

Let’s start off by writing a before() function that waits for app.emit to trigger in index.js indicating that our server has successfully started. Once it has we call the done() callback so Mocha knows to move on to the test.

We start off with a describe() block which contains a single it() function since we only have one test. Each sequential step is described as a test() function. In order we insert a new document, get all documents and store a document id to use for the other tests, get a specific document, update a specific document, and finally delete a specific document.
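The integration test might be sketched like this; the field names and the 'ready' event name are assumptions, and the sequential steps are written with async/await rather than the article's test() helpers:

```javascript
// tests/test.js — a sketch of the single integration test described above.
const request = require('supertest');
const app = require('../index');

// Wait for index.js to emit 'ready' before running any tests.
before((done) => {
  app.on('ready', () => done());
});

describe('CRUD API', () => {
  it('inserts, reads, updates, and deletes a document', async () => {
    // Insert a new document.
    await request(app)
      .post('/api/documents/new')
      .send({ title: 'a', author: 'b', body: 'c' })
      .expect(200);

    // Get all documents and keep an id for the later steps.
    const all = await request(app).get('/api/documents/all').expect(200);
    const id = all.body[0]._id;

    // Get, update, then delete that specific document.
    await request(app).get(`/api/documents/${id}`).expect(200);
    await request(app)
      .put(`/api/documents/edit/${id}`)
      .send({ title: 'z' })
      .expect(200);
    await request(app).delete(`/api/documents/delete/${id}`).expect(200);
  });
});
```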

Once we’ve written all of our tests we can move on to our Travis CI configuration. Travis doesn’t start most services by default so we explicitly start Docker. Then we check our Docker version (not strictly necessary, handy for debugging version mismatch or if Docker isn’t running), copy our example.env to .env so our build runs correctly, and we stop some unnecessary services so our build and tests run faster. After that we use our npm scripts to build our production containers and use those as a source to run our tests.
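A sketch of the .travis.yml described above; exactly which unneeded services get stopped is an assumption:

```yaml
# .travis.yml — a sketch of the CI configuration described above.
sudo: required
services:
  - docker                     # Travis doesn't start Docker by default
before_install:
  - docker -v                  # handy for debugging version mismatches
  - cp example.env .env        # give the build the env vars it expects
  - sudo service mysql stop    # stop unneeded services to speed things up
  - sudo service postgresql stop
script:
  - npm run build              # build the production containers
  - npm test                   # run the test containers against them
```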

Once you've written all of that you should be able to push to GitHub and check your repository's page on Travis CI for a passing build! To run your services locally, type npm run build and, once that's finished, npm run production. Once those have completed you can run docker ps to show all running containers, which should include your MongoDB container and a container named docker_backend. You can now run curl localhost:80/api/documents/all, which should return {"error": "No documents in database"}. I recommend embedding the current build state in your README.md using the following Markdown snippet:

[![Build Status](https://travis-ci.org//.svg?branch=master)](https://travis-ci.org//)

Thanks for reading, please leave a clap or several if this tutorial was helpful to you!

Joe Cieslik is the CEO of Whiteboard Dynamics, a full stack development team specializing in functional programming and Android. You can hire me or my team to build your next killer app at whiteboarddynamics.co.
