Why would you want to use Docker for React app work? Isn't Docker for server-side stuff like Python and Golang? No, all the benefits of Docker apply to client-side JavaScript work too.

There are three main things you want to do with create-react-app: run the dev server, run tests, and create build artifacts. Let's look at all three, but using Docker.

Create-react-app first

If you haven't already, install create-react-app globally:

▶ yarn global add create-react-app

And, once installed, create a new project:

▶ create-react-app docker-create-react-app
...lots of output...

▶ cd docker-create-react-app
▶ ls
README.md    node_modules package.json public       src          yarn.lock

We won't need the node_modules here in the project directory. Instead, when building the image, we're going to let node_modules live inside the image. So you can go ahead and run rm -fr node_modules.

Create the Dockerfile

Let's just dive in. This Dockerfile is the minimum:

FROM node:8

ADD yarn.lock /yarn.lock
ADD package.json /package.json

ENV NODE_PATH=/node_modules
ENV PATH=$PATH:/node_modules/.bin
RUN yarn

WORKDIR /app
ADD . /app

EXPOSE 3000
EXPOSE 35729

ENTRYPOINT ["/bin/bash", "/app/run.sh"]
CMD ["start"]

A couple of things to notice here.
First of all, we're basing this on the official Node v8 image on Docker Hub. That gives you Node and Yarn by default.

Note how the NODE_PATH environment variable puts node_modules in the root of the container. That's so it doesn't end up "here" (i.e. in the current working directory). If you didn't do this, node_modules would become part of the mounted volume, which not only slows down Docker (since there are so many files) but also isn't necessary; you don't need those files on the host.
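Relatedly (a tip from the comments below), a .dockerignore file keeps node_modules out of the build context entirely, so docker build only sends a few kB to the Docker daemon instead of hundreds of MB. A minimal sketch (ignoring build too is my own optional addition):

```
# .dockerignore, next to the Dockerfile
node_modules
build
```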

Note how the ENTRYPOINT points to run.sh. That's a file we need to create too, alongside the Dockerfile file.

#!/usr/bin/env bash
set -eo pipefail

case $1 in
  start)
    # The '| cat' tricks Node into thinking this is a non-TTY terminal,
    # so react-scripts won't clear the console.
    yarn start | cat
    ;;
  build)
    yarn build
    ;;
  test)
    # Drop the leading "test" argument so extra flags (e.g. --coverage)
    # are passed through to yarn test correctly.
    shift
    yarn test "$@"
    ;;
  *)
    exec "$@"
    ;;
esac

Lastly, as a point of convenience, note that the default CMD is "start". That's so that when you simply run the container the default thing it does is to run yarn start.
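To see why the ENTRYPOINT/CMD split behaves this way, here's a Docker-free simulation in plain bash (the /tmp/run_demo.sh path and the echoed strings are made up for this sketch): Docker runs the ENTRYPOINT with the CMD appended, and any arguments you put after the image name replace the CMD, never the ENTRYPOINT.

```shell
#!/usr/bin/env bash
# ENTRYPOINT ["/bin/bash", "/app/run.sh"] plus CMD ["start"] means the
# container runs: /bin/bash /app/run.sh start

# A stand-in dispatch script, mirroring run.sh's case statement:
cat > /tmp/run_demo.sh <<'EOF'
case $1 in
  start) echo "would run: yarn start" ;;
  build) echo "would run: yarn build" ;;
  *)     echo "would exec: $*" ;;
esac
EOF

bash /tmp/run_demo.sh start   # like: docker container run react:app
bash /tmp/run_demo.sh build   # like: docker container run react:app build
bash /tmp/run_demo.sh bash    # like: docker container run react:app bash
```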

Build container

Now let's build it:

▶ docker image build -t react:app .

The -t react:app is up to you. It doesn't matter so much what it is unless you're going to upload your image to a registry. Then you probably want the repository part to be something unique.

Let's check that the build is there:

▶ docker image ls react:app
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
react               app                 3ee5c7596f57        13 minutes ago      996MB

996MB! The base Node image is about ~700MB and the node_modules directory (for a clean new create-react-app) is ~160MB (at the time of writing). What the remaining difference is, I'm not sure. But it's empty calories and easy to lose. When you blow away the built image (docker image rmi react:app) your hard drive gets all that back and no actual code is lost.
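If the size bothers you, one common trick (not used in this post) is to switch to the Alpine variant of the official Node image, which is a few hundred MB smaller. A sketch, with the caveat that packages with native extensions may need build tools installed on Alpine:

```
FROM node:8-alpine

# Only needed if some dependency compiles native code:
# RUN apk add --no-cache python make g++
```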

Before we run it, let's go inside and see what was created:

▶ docker container run -it react:app bash
root@996e708a30c4:/app# ls
Dockerfile  README.md  package.json  public  run.sh  src  yarn.lock
root@996e708a30c4:/app# du -sh /node_modules/
148M    /node_modules/
root@996e708a30c4:/app# sw-precache
Total precache size is about 355 kB for 14 resources.
service-worker.js has been generated with the service worker contents.

The last command (sw-precache) was just to show that executables in /node_modules/.bin are indeed on the $PATH and can be run.

Run container

Now to run it:

▶ docker container run -it -p 3000:3000 react:app
yarn run v1.3.2
$ react-scripts start
Starting the development server...

Compiled successfully!

You can now view docker-create-react-app in the browser.

  Local:            http://localhost:3000/
  On Your Network:  http://172.17.0.2:3000/

Note that the development build is not optimized.
To create a production build, use yarn build.

Default app running

Pretty good. Open http://localhost:3000 in your browser and you should see the default create-react-app app.

Next step: warm reloading

create-react-app does not support hot reloading of components, but it does support web page reloading. As soon as a local file changes, it sends a signal to the browser (over a WebSocket) telling it to... document.location.reload().

To make this work, we need to do two things:
1) Mount the current working directory into the Docker container
2) Expose the WebSocket port

The WebSocket thing is set up by exposing port 35729 to the host (-p 35729:35729).

Below is an example running this with a volume mount and both ports exposed.

▶ docker container run -it -p 3000:3000 -p 35729:35729 -v $(pwd):/app react:app
yarn run v1.3.2
$ react-scripts start
Starting the development server...

Compiled successfully!

You can now view docker-create-react-app in the browser.

  Local:            http://localhost:3000/
  On Your Network:  http://172.17.0.2:3000/

Note that the development build is not optimized.
To create a production build, use yarn build.

Compiling...
Compiled successfully!
Compiling...
Compiled with warnings.

./src/App.js
  Line 7:  'neverused' is assigned a value but never used  no-unused-vars

Search for the keywords to learn more about each warning.
To ignore, add // eslint-disable-next-line to the line before.

Compiling...
Failed to compile.

./src/App.js
Module not found: Can't resolve './Apps.css' in '/app/src'

In the above example output, I first make a harmless save in the src/App.js file, just to see that the dev server notices and that my browser reloads. That's where it says

Compiling...
Compiled successfully!

Secondly, I make an edit that triggers a warning. That's where it says:

Compiling...
Compiled with warnings.

./src/App.js
  Line 7:  'neverused' is assigned a value but never used  no-unused-vars

Search for the keywords to learn more about each warning.
To ignore, add // eslint-disable-next-line to the line before.

And lastly, I make an edit that messes up the import line:

Compiling...
Failed to compile.

./src/App.js
Module not found: Can't resolve './Apps.css' in '/app/src'

This is great! Isn't create-react-app wonderful?

Build build :)

There are many things you can do with the code you're building. Let's pretend the intention is to build a single-page app and then take the static assets (including index.html) and upload them to a public CDN or something. To do that we need to generate the build directory.

The trick here is to run this with a volume mount so that when it creates /app/build (from the perspective of the container), that directory effectively becomes visible on the host.

▶ docker container run -it -v $(pwd):/app react:app build
yarn run v1.3.2
$ react-scripts build
Creating an optimized production build...
Compiled successfully.

File sizes after gzip:

  35.59 KB  build/static/js/main.591fd843.js
  299 B     build/static/css/main.c17080f1.css

The project was built assuming it is hosted at the server root.
To override this, specify the homepage in your package.json.
For example, add this to build it for GitHub Pages:

  "homepage" : "http://myname.github.io/myapp",

The build folder is ready to be deployed.
You may serve it with a static server:

  yarn global add serve
  serve -s build

Done in 5.95s.

Now, on the host:

▶ tree build
build
├── asset-manifest.json
├── favicon.ico
├── index.html
├── manifest.json
├── service-worker.js
└── static
    ├── css
    │   ├── main.c17080f1.css
    │   └── main.c17080f1.css.map
    ├── js
    │   ├── main.591fd843.js
    │   └── main.591fd843.js.map
    └── media
        └── logo.5d5d9eef.svg

4 directories, 10 files

The contents of that folder can now be uploaded to a CDN, or served by some public Nginx server that points to it as the root directory.
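To answer a question from the comments below (hosting this with nginx:alpine): one possible approach is a multi-stage build, where a Node stage produces the build directory and an Nginx stage serves it. This is a sketch, not from the original post, and assumes a Docker version with multi-stage support (17.05+):

```
FROM node:8 AS builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn
COPY . .
RUN yarn build

# Nginx's default config serves /usr/share/nginx/html on port 80.
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
```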

Running tests

This one is so easy and obvious now.

▶ docker container run -it -v $(pwd):/app react:app test

Note that we're setting up a volume mount here again. Since the test runner is interactive (it sits and waits for file changes and re-runs tests immediately), the mount is important here too.

All regular jest options work too. For example:

▶ docker container run -it -v $(pwd):/app react:app test --coverage
▶ docker container run -it -v $(pwd):/app react:app test --help

Debugging the node_modules

First of all, when I say "debugging the node_modules" in this context, I'm referring to messing with node_modules whilst running tests or running the dev server.

One way to debug the node_modules used is to enter a bash shell and literally mess with the files inside it. First, start the dev server (or start the test runner) and give the container a name:

▶ docker container run -it -p 3000:3000 -p 35729:35729 -v $(pwd):/app --name mydebugging react:app

Now, in a separate terminal start bash in the container:

▶ docker exec -it mydebugging bash

Once you're in you can install an editor and start editing files:

root@2bf8c877f788:/app# apt-get update && apt-get install jed
root@2bf8c877f788:/app# jed /node_modules/react/index.js

As soon as you make changes to any of the files, the dev server should notice and reload.

When you stop the container all your changes will be reset. So if you had to sprinkle the node_modules with console.log('WHAT THE HECK!') all of those disappear when the container is stopped.

NodeJS shell

This'll come as no surprise by now. You basically run bash and you're there:

▶ docker container run -it -v $(pwd):/app react:app bash
root@2a21e8206a1f:/app# node
> [] + 1
'1'

Conclusion

When I look back at all the commands above, I can definitely see how it's pretty intimidating and daunting. So many things to remember, and it's got that nasty feeling where you feel like you're controlling your development environment through unwieldy levers rather than with your own hands.

But think of the fundamental advantages too! It's all encapsulated now. What you're working on will be based on the exact same versions of everything that your teammates, your dev server and your production server are using.

Pros:

  • All packaged up and all team members get the exact same versions of everything, including Node and Yarn.
  • The node_modules directory gets out of your hair.
  • Perhaps some React code is just a small part of a large project. E.g. the frontend is React, the backend is Django. Then with some docker-compose magic you can have it all running with one command without needing to run the frontend in a separate terminal.

Cons:

  • Lack of color output in terminal.
  • The initial (or infrequent) wait for building the docker image is brutal on a slow network.
  • Lots of commands to remember. For example, how do you start a shell again?

In my work (at Mozilla Services), I actually use docker-compose for all the projects I work on. And I have a Makefile to help me remember all the various docker-compose commands (thanks Jannis & Will!). One definitely neat thing you can do with docker-compose is start multiple containers. Then you can start a Django server and the create-react-app dev server with one command. Perhaps a blog post for another day.
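For the docker-compose curious, here's a minimal sketch built on the image above. None of this is from the post itself: the service name is my own, and the CHOKIDAR_USEPOLLING=true variable comes from commenters below who found it necessary for file watching to work through the mount on some platforms:

```
version: "3"
services:
  react:
    build: .
    image: react:app
    environment:
      - CHOKIDAR_USEPOLLING=true
    ports:
      - "3000:3000"
      - "35729:35729"
    volumes:
      - ./:/app
```

With that in place, docker-compose up replaces the long docker container run invocation.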

Comments

Joe Keilty

Fantastically informative, thank you

Joshua Sherer

Exactly what I was looking for. U rock, sir! \m/

Daniel Schmidt

Great article, thank you very much. Just one comment, if you add a .dockerignore with node_modules/ in it you will send only a few kb to the docker daemon instead of hundreds of mb :)

Peter Bengtsson

Thanks! I didn't even know about .dockerignore.

Earle West

Can you explain a bit more why you must start in the current machine environment?
I'd really like to just pull a container from hub.docker.org that has all this built in...that would allow me to move to other machines and have the same environment

Peter Bengtsson

What do you mean by "why you must start in the current machine environment"?

Mark Winterbottom

Hey Peter, great article. I think what Earle is asking is why do we need to run `create-react-app` locally before creating the Dockerfile? This means that creating new applications requires the correct version of NPM and create-react-app locally before developing it in docker.

KK

I have the exact same question. Creating the app in host machine prior to moving it to the container seems to defeat one of the purposes of using docker which is to avoid installing any softwares in the host machine. I am not sure what I am missing here.

Peter Bengtsson

Good question and I'd have to think about that. Once you've created the project once you won't need `create-react-app` installed. Right? So you don't need that bloat.

One solution would be to do something like

> docker container run -it react:app bash
$ npm add create-react-app
$ create-react-app myinitialproject

And when after you exit, only add the created project to the git repo.

rakin

how can I host this with nginx-alpine?

Aubron Wood

For windows, (or just if you're interested in platform agnostic-ness) make sure you're adding the environment variable CHOKIDAR_USEPOLLING=true

Otherwise, the filesystem watching fails. The fs wrapping in Chokidar by environment variable is another fun feature provided by create-react-app.

Jesus Valdez

Great Article, and very updated, thanks a lot! I have two questions:
Do you already have the docker-compose article?
This whole thing is just for development or it can be used in production? I got confused in the last paragraph with the dev word: "Then you can, with one command, start a Django server and the create-react-app DEV server with one command"

Thanks again

Peter Bengtsson

Thanks for pinging. I really should write one about using docker-compose to run all the things.

Krisztián

please do it :)

Jesus Valdez

Hi, thanks for the article what´s on it works perfectly until I want to implement it using docker-compose, hot-reloading stops working, maybe I am doing something wrong, I have this docker-compose file:

version: "3"
services:
  react:
    build:
      context: menumy-react
      dockerfile: Dockerfile
    volumes:
      - '.:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - "3000:3000"
      - "35729:35729"

Marwan EB

Hi Jesus have you found a solution to your problem ? I have the same and I'm stuck too ...

anand kumar

This worked for me. passing CHOKIDAR_USEPOLLING=true as env var.
In compose:
...
environment:
      - CHOKIDAR_USEPOLLING=true

Jolaade Adewale

Great

matt212

how to pass $(pwd) in command line docker -run

Peter Bengtsson

What do you mean? What did you try?

Sebastien Tardif

By executing create-react-app outside docker that's missing the point of running inside docker. I have no node/npm/yarn on my host, and I don't want them. Like this all 'instructions/steps' just need docker, and work on all platform.

Also I get refresh using CHOKIDAR_USEPOLLING=true and without websocket port 35729 open.

The most useful info on the blog is the handling of node_modules.

Anonymous

Docker-compose example?

Anonymous

+1 for a docker-compose example. Having to pass in volumes to mount for each operation defeats the purpose of making this easy.

Cliff

My Docker-Compose file

version: '3'
services:
  app:
    build: .
    image: react:app
    ports:
      - 3000:3000
      - 35729:35729
    volumes:
      - ./:/app

Cliff

I have a question about installing additional modules.

1. If I use yarn to install additional modules outside the container, I have to have yarn installed and it starts installing all the node modules that we're tucking into that subdirectory.
2. If I use yarn within the bash shell, it creates the new modules which leak out into the volume and I'm constantly erasing the node modules file within my volume directory.

Is there a way to have yarn add to the package without actually installing or have yarn within the container install to the new directory instead?

Cliff

Update:

If you use a volume for the project directory add the file .yarnrc in the project root with the single line:
--modules-folder /node_modules
Then you can add modules with yarn by running bash within the container and yarn will use the container's node_modules folder without passing the directory back through the volume.

David

Thx!

Azis

Excellent example. very helpful, nice explanation. thank you very much.

AkaTenshi

Amazing work! You covered pretty much everything I intended to scrap together today in a single, easy to understand post with perfectly working, optimized instructions. I tip my fedora, sir :)

David

Very helpful!

I'm encountering an issue with a .cache directory in node_modules. I've got a Dockerfile similar to yours:

FROM node:12.18.2

COPY yarn.lock /yarn.lock
COPY package.json /package.json

ENV NODE_PATH=/node_modules
ENV PATH=$PATH:/node_modules/.bin
RUN yarn

WORKDIR /app
COPY . /app


And in my docker-compose i only have one volume for /app. It looks like when doing docker-compose up (yarn start) a compilation step is creating a local node_modules/.cache directory with subdirectories for babel-loader and eslint-loader. Any idea what is causing that? How can I just have it use the /node_modules location instead?

Peter Bengtsson

So, within the container you get `/app/node_modules` and you also get a `./node_modules` folder?
It could be that some bad package doesn't respect `$NODE_PATH` which is sad.
