Alpha Coder

Dockerizing Django in development and production

Setting up Docker can sometimes be confusing. There are many little pieces that need to come together for everything to work as expected. Outlined in this post is a simple Docker setup you can use for your Django projects in development and production environments.

TL;DR: Sample project

You can check out the code on GitHub.

Dockerfile

FROM python:3.6-alpine

RUN apk --update add \
    build-base \
    postgresql \
    postgresql-dev \
    libpq \
    # pillow dependencies
    jpeg-dev \
    zlib-dev

RUN mkdir /www
WORKDIR /www
COPY requirements.txt /www/
RUN pip install -r requirements.txt

ENV PYTHONUNBUFFERED 1

COPY . /www/

This Dockerfile is pretty straightforward. It starts from a Python-Alpine base image, then installs the dependencies Django needs, notably postgresql and postgresql-dev. This Django project is set up to use PostgreSQL. If you want to use another database engine like MySQL or MongoDB, you need to install the corresponding adapters/dependencies instead. Also, if you're dealing with images (ImageField), you need to install jpeg-dev and zlib-dev as well (see Pillow's dependencies).
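For example, a MySQL variant of the apk line might look something like this (a sketch, not from the sample project; mariadb-dev provides the headers the mysqlclient Python package builds against on Alpine):

```
RUN apk --update add \
    build-base \
    mariadb-dev \
    # pillow dependencies
    jpeg-dev \
    zlib-dev
```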

Afterwards, it installs the packages listed in requirements.txt, then copies everything in the project root to /www/, which the image runs from. ENV PYTHONUNBUFFERED 1 causes all output to stdout to be flushed immediately, so that you can easily see what's going on inside your Python app from a terminal.
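The sample project pins its own versions; for the stack described here, a minimal requirements.txt would contain something like the following (package names are assumptions based on this post's setup, not the sample project's exact file):

```
Django
psycopg2
gunicorn
Pillow
```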

docker-compose.yml

I almost always use Docker Compose with Docker. With Compose, you can run multiple containers at once, and you don’t have to memorize long Docker commands.

version: "3"
services:
  web:
    build: .
    restart: on-failure
    env_file:
      - ./.env
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/www
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: "postgres:10.3-alpine"
    restart: unless-stopped
    env_file:
      - ./.env
    ports:
      - "5432:5432"
    volumes:
      - ./postgres/data:/var/lib/postgresql/data

There are two services in this Compose file — web and db. The web service builds the Django app using the Dockerfile in the previous section. A volume is created to map the project directory in the host to the one in the container (- .:/www) so that any changes made on the host are mirrored in the container. The command parameter uses Django’s dev server to run the app (python manage.py runserver 0.0.0.0:8000).

The db service uses a PostgreSQL image and maps PostgreSQL's data directory to ./postgres/data on the host, so that DB data persists even if the container gets destroyed. The official PostgreSQL image uses several environment variables to configure the server, including POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB.

# .env
DEBUG=1
ALLOWED_HOSTS=*
SECRET_KEY=secret123

POSTGRES_HOST=db
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret123
POSTGRES_DB=mypgsqldb

The above variables are made available to the services through the env_file parameter. The POSTGRES_* variables are used by both the web and db services (see the settings.py file in the sample project).
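As a rough sketch of that wiring (names mirror the .env file above; the defaults here are assumptions for local use, not the sample project's exact settings.py):

```python
import os

# Read the same variables the .env file defines.
DEBUG = os.environ.get("DEBUG", "0") == "1"
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "localhost").split(",")
SECRET_KEY = os.environ.get("SECRET_KEY", "insecure-dev-key")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "mypgsqldb"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        # "db" is the Compose service name, resolved by Docker's internal DNS
        "HOST": os.environ.get("POSTGRES_HOST", "db"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```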

docker-compose.prod.yml

My production configs usually differ a bit from their development counterpart. I use Gunicorn to serve the Django app, and NGINX as a reverse proxy and static/media file server.

This setup, while good for production, is not very convenient in development, where the goal is often to break things and move fast. Sometimes, I decide to use a managed database service, so there's no need for a Dockerized database server. All of this is reflected in my docker-compose.prod.yml file.

version: "3"
services:
  web:
    build: .
    restart: on-failure
    env_file:
      - ./.env
    command: gunicorn --bind 0.0.0.0:8080 dockerizeddjango.wsgi
    ports:
      - "8080:8080"
  nginx:
    image: "nginx"
    restart: always
    depends_on:
      - web
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./staticfiles:/static
      - ./mediafiles:/media
    ports:
      - "80:80"

The web service in the production Compose file looks much like the one in the development file; the major difference is the command parameter. It uses Gunicorn, which is better suited to production than the dev server, to serve the application (gunicorn --bind 0.0.0.0:8080 dockerizeddjango.wsgi).

There’s also an NGINX service. Notice the volume definitions in this service. The first volume maps the container’s /etc/nginx/conf.d to the ./nginx/conf.d folder on the host, which contains dockerizeddjango.conf.

# dockerizeddjango.conf
server {
  listen 80;
  server_name localhost;

  # serve static files
  location /static/ {
    alias /static/;
  }

  # serve media files
  location /media/ {
    alias /media/;
  }

  # pass requests for dynamic content to gunicorn
  location / {
    proxy_pass http://web:8080;
  }
}

This is a very basic NGINX config file, and it’s pretty self-explanatory. Requests under /static/ (e.g. example.com/static/virus.js) and /media/ (e.g. example.com/media/anonymous.jpg) are served directly from the /static and /media directories in the nginx container. These directories are mapped to the ./staticfiles and ./mediafiles directories on the host, where static files are collected and media files uploaded. All other requests are proxied to Gunicorn.
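For collectstatic and uploads to land in the directories NGINX mounts, the Django settings need to point at them. A minimal sketch (BASE_DIR here is an assumption about the project layout; the sample project's settings.py may differ):

```python
import os

# Assume settings.py sits in the project root for this sketch.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# collectstatic gathers static files into ./staticfiles,
# which the Compose file mounts into the nginx container as /static.
STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")

# Uploaded media lands in ./mediafiles, mounted as /media in nginx.
MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, "mediafiles")
```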

Did you notice the use of service names in some files, e.g. POSTGRES_HOST=db and proxy_pass http://web:8080;? db and web resolve to the IP addresses of their containers. Docker Compose handles this, so we don’t have to look up a container’s IP address or hardcode addresses that might change later.

Use the following command to run the Compose file:

$ docker-compose -f docker-compose.prod.yml up --build -d

The -f parameter specifies the production Compose file, docker-compose.prod.yml (Docker Compose defaults to docker-compose.yml). --build tells Compose to rebuild the images each time the command is run, and the -d flag runs the containers in detached mode so that they keep running in the background even when your terminal is closed.

You can run migrations and collect static files with the following commands:

$ docker-compose -f docker-compose.prod.yml run web python manage.py migrate
$ docker-compose -f docker-compose.prod.yml run web python manage.py collectstatic --noinput
Subscribe to my newsletter for updates on new articles, courses and more. Enjoy the content on Alpha Coder? Please buy me a coffee.
