In this tutorial I will show you how to bootstrap a base django project that you can use to quickly configure future applications.
To become an expert, we have to practice. These types of projects make practicing significantly easier, because when we want to stand up more complex applications we already have our base scaffolding in place.
Let’s get started.
First things first, create a new repository on GitHub called base-django-project.
Be sure to include a license, for instance the MIT license.
Next, on your local machine, create a directory for your projects:
$ mkdir -p ~/Projects/personal
Now clone your repo down:
$ cd ~/Projects/personal
$ git clone <url-to-your-repo>
$ cd base-django-project
Let’s start by configuring our Pipfile and environment:
$ pipenv --python python3
$ pipenv shell
We are now in our pipenv virtual environment. Let’s start by installing the dependencies we will need to get started:
$ pipenv install compose-flow --dev
Commit your changes.
We add the --dev flag here because we will only use compose-flow in a development context. Broadly speaking, you should include as little as reasonably possible in production deployments.
Now let’s install django and django rest framework:
$ pipenv install django djangorestframework
Commit your changes.
Now let’s create a django project:
$ mkdir src
$ cd src
$ django-admin startproject webapp .
Now you have a directory structure that should look like this (run from the project root, so hop back up a level first):
$ cd ..
$ tree .
.
├── LICENSE
├── Pipfile
├── Pipfile.lock
└── src
    ├── manage.py
    └── webapp
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py
I like to start things off with a pristine commit of generated django code so that we can always revert to it in future if things go south.
With that said, django has the annoying habit of writing a secret to your settings.py on generation.
Specifically, it writes SECRET_KEY, which django uses for cryptographic signing (sessions, password reset tokens, CSRF protection, and so on):
$ grep SECRET src/webapp/settings.py
SECRET_KEY = <secret key>
You should never commit secrets to code.
Personally, I also try to avoid committing non-sensitive information like domain names to code if I expect it will ever end up in a public repo.
compose-flow comes in handy here because it allows us to inject secrets into our environment at runtime. With that said, let’s open up src/webapp/settings.py and set SECRET_KEY from an environment variable:
SECRET_KEY = os.environ['SECRET_KEY']
The os.environ object behaves like a Python dictionary. Here we use a direct key lookup rather than os.environ.get('SECRET_KEY') because we want our code to explode and complain loudly if no value is present.
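As a quick illustration of the difference between the two lookup styles (using a hypothetical EXAMPLE_SECRET variable rather than our real SECRET_KEY):

```python
import os

# When the variable is set, both styles behave the same.
os.environ["EXAMPLE_SECRET"] = "sekrit"
print(os.environ["EXAMPLE_SECRET"])      # sekrit
print(os.environ.get("EXAMPLE_SECRET"))  # sekrit

# When it is missing, .get() silently returns None...
del os.environ["EXAMPLE_SECRET"]
print(os.environ.get("EXAMPLE_SECRET"))  # None

# ...while a direct key lookup raises KeyError, failing loudly.
try:
    os.environ["EXAMPLE_SECRET"]
except KeyError:
    print("KeyError: missing variable")
```

That KeyError at startup is exactly what we want: a misconfigured deployment crashes immediately instead of running with a missing secret.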
Now that we’ve removed the secret from our code, we can finally make a pristine commit of our environment.
From our project root (~/Projects/personal/base-django-project if you’ve been following exactly), we can run:
$ git add src/
$ git commit -v
Now enter a commit message along the lines of "Pristine django project commit" and save. If you are using vim, this is :wq or :x.
Let’s add in some goodies that support functionality we would not get out of the box.
First of all, we can add rest framework to our INSTALLED_APPS:
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
]
Commit.
Now let’s add a few more great packages:
$ pipenv install django-extensions django-fsm django-simple-history dj-database-url psycopg2
These packages provide the following functionality:
- django-extensions: a beefed-up django shell along with many other bells and whistles you’ll be happy to have
- django-fsm: a state machine for models
- django-simple-history: tracks and stores all changes to a given model; excellent for auditing purposes
- dj-database-url: easily configures a database connection from a full connection string
- psycopg2: Python’s driver for interacting with a postgres database
Add the django apps these packages provide to INSTALLED_APPS in src/webapp/settings.py (dj-database-url and psycopg2 are plain Python libraries, not django apps, so they are not listed):
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_extensions',
'django_fsm',
'simple_history',
]
Add and commit to git:
$ git add src/webapp/settings.py
$ git commit -v
Finally, configure the database connection by importing the dj_database_url package at the top of the file:
import dj_database_url
and then replace:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
with:
DATABASES = {}
DATABASES['default'] = dj_database_url.config(conn_max_age=0)
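dj_database_url.config() reads the DATABASE_URL environment variable and splits a connection string like postgres://user:password@host:5432/dbname into the pieces django’s DATABASES dict expects. A rough standard-library sketch of that idea (illustrative only, not the actual implementation; parse_db_url is a made-up name):

```python
from urllib.parse import urlparse

def parse_db_url(url):
    """Rough sketch of what dj-database-url does (not the real code)."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port or 5432,
    }

config = parse_db_url("postgres://test:test@postgres:5432/test")
print(config["HOST"])  # postgres
print(config["NAME"])  # test
```

The conn_max_age=0 argument we pass to the real library closes database connections at the end of each request rather than pooling them.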
Add and commit your changes.
Now that we have the basics of our application set up, let’s configure compose-flow to run our application with docker-compose.
First, let’s make sure we are in the correct directory:
$ cd ~/Projects/personal/<your-project-name>
Now we can add and edit a simple Dockerfile:
$ touch Dockerfile
FROM python:3.6-stretch
LABEL maintainer="Jonathan Meier <jonathan.w.meier@gmail.com>"
ARG CF_PROJECT
ENV SRC_DIR /usr/local/src/$CF_PROJECT
WORKDIR ${SRC_DIR}
RUN pip3 install pipenv
COPY Pipfile Pipfile.lock ${SRC_DIR}/
RUN pipenv install --system --dev && \
rm -rf /root/.cache/pip
COPY ./ ${SRC_DIR}/
To test that the Dockerfile builds appropriately, run:
$ docker build . -t base-django-project:local --build-arg CF_PROJECT=base-django-project
Test that we can run the docker container and it contains our project files:
$ docker run --rm -it base-django-project:local /bin/bash
You should see a new shell prompt along the lines of root@7761eec4ef32:/usr/local/src/base-django-project#. This means you are inside the docker container.
From that shell, run ls to make sure your files were copied in:
$ ls
Dockerfile LICENSE Pipfile Pipfile.lock compose src
Our docker build worked!
Configuring compose-flow
By the end of this section, your project structure should look like this:
├── Dockerfile
├── LICENSE
├── Pipfile
├── Pipfile.lock
├── compose
│   ├── compose-flow-local.yml
│   ├── compose-flow.yml
│   ├── docker-compose.env.yml
│   ├── docker-compose.local.yml
│   ├── docker-compose.postgres.yml
│   └── docker-compose.yml
└── src
    ├── manage.py
    └── webapp
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py
Now, we will make sure that we do not accidentally commit any of our compose-flow secrets to git by adding the generated compose-flow yaml filename to our .gitignore.
First, make sure it exists:
$ touch .gitignore
Now add the following line:
compose-flow-*
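That pattern matches the environment files compose-flow generates (such as compose/compose-flow-local.yml) in any directory, while leaving the compose-flow.yml config we commit ourselves untracked by the pattern. For a bare file-name pattern like this, git’s ignore globbing behaves much like Python’s fnmatch, which is a handy way to sanity-check a pattern:

```python
from fnmatch import fnmatch

# The ignore pattern from .gitignore, checked against bare file names.
pattern = "compose-flow-*"
print(fnmatch("compose-flow-local.yml", pattern))  # True  (generated file: ignored)
print(fnmatch("compose-flow.yml", pattern))        # False (our committed config: kept)
print(fnmatch("docker-compose.yml", pattern))      # False
```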
Save and commit the changes:
$ git add .gitignore
$ git commit -v
Create the compose directory and add a basic config:
$ mkdir compose
$ touch compose/compose-flow.yml
Edit compose/compose-flow.yml
to add the following profiles:
profiles:
  local:
    - docker-compose.yml
    - env
    - local
    - postgres
Create our base docker-compose.yml:
$ touch compose/docker-compose.yml
And edit it as follows:
version: '3.3'
services:
  app:
    build:
      args:
        - CF_PROJECT=${CF_PROJECT}
      context: ..
    image: ${DOCKER_IMAGE}
Notice that here we replace the earlier --build-arg command line argument with an item under the args yaml key.
Now we can add our compose/docker-compose.postgres.yml file:
services:
  app:
    links:
      - postgres
  postgres:
    image: postgres
    restart: always
    environment:
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
Off-the-shelf postgres docker images will automatically create a database and a user with a password if you provide the above environment variables.
In addition to configuring the database itself, we link the postgres service to our app service, allowing them to route traffic to each other over Docker’s own networking. This way we do not need to expose the database on the 0.0.0.0 interface with a published port.
Now that we have our database and base docker-compose file, we can add an environment component. Edit compose/docker-compose.env.yml:
services:
  app:
    environment:
      - DATABASE_URL
      - SECRET_KEY
Here we need a SECRET_KEY environment variable because we removed the default secret key value before making our pristine project commit.
The DATABASE_URL value is required for our django application to communicate with our postgres database.
Finally, we can add compose/docker-compose.local.yml:
services:
  app:
    command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
    ports:
      - 8000:8000
    volumes:
      - ../:/opt/app
    working_dir: /opt/app/src
The command key here configures the default command for this service.
Let’s try to run our project with:
$ compose-flow -e local compose run --rm app
compose-flow throws us an error, complaining that we are missing the following environment variables:
DATABASE_URL not found in environment for service=app
SECRET_KEY not found in environment for service=app
POSTGRES_DB not found in environment for service=postgres
POSTGRES_USER not found in environment for service=postgres
POSTGRES_PASSWORD not found in environment for service=postgres
Let’s add those.
$ compose-flow -e local env edit -f
If you are not familiar with vi or vim, that’s ok. For now all you need to know is that i puts you in insert mode and esc returns you to normal mode.
To edit the file, press i.
Then paste the following:
DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres/${POSTGRES_DB}
SECRET_KEY=<INSERT_RANDOM_CHARACTERS_HERE>
POSTGRES_DB=test
POSTGRES_USER=test
POSTGRES_PASSWORD=test
Change the SECRET_KEY value to something arbitrary. For local development I normally use an arbitrary word, but in real deployments this should be a long random string.
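If you want a properly random value even for local development, Python’s standard secrets module will generate one (django also ships django.core.management.utils.get_random_secret_key, if you prefer):

```python
import secrets

# A URL-safe random string with plenty of entropy for a SECRET_KEY.
key = secrets.token_urlsafe(50)
print(key)
```

Paste the output in as your SECRET_KEY value.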
To save, first leave insert mode by pressing esc. Then save and exit by typing :wq and hitting enter.
Now we can start our database with:
$ compose-flow -e local compose up -d postgres
And then we can start our application with:
$ compose-flow -e local compose up -d app
The only piece missing now is that our application will not wait for the database to finish launching before trying to contact it, so we cannot yet use the standard compose-flow -e local compose up -d to bring everything up at once.
To fix this, we can add a wait-for-it.sh script to our image.
$ mkdir scripts
$ touch scripts/wait-for-it.sh
Then grab the script at https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh and copy it into scripts/wait-for-it.sh.
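wait-for-it.sh is essentially a loop that polls a TCP host:port until it accepts connections. For the curious, a minimal Python sketch of the same idea (illustrative only, not a replacement for the script; wait_for is a made-up name):

```python
import socket
import time

def wait_for(host, port, timeout=15.0):
    """Poll host:port until a TCP connection succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # the port is accepting connections
        except OSError:
            time.sleep(0.5)  # not up yet; retry shortly
    return False

# e.g. wait_for("postgres", 5432) before starting the django server
```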
Now edit the Dockerfile by adding the following lines at the bottom:
COPY ./scripts/wait-for-it.sh /usr/local/bin
RUN chmod u+x /usr/local/bin/*
And finally, in compose/docker-compose.local.yml, change the command key to:
command: ["wait-for-it.sh", "postgres:5432", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
Spin down the previously started application with:
$ compose-flow -e local compose down
Now you can spin it up again with:
$ compose-flow -e local compose up -d
Now the app container will wait on the postgres container to finish bootstrapping before launching our django debug server process.
To test that your application is up, try curl’ing the port it listens on:
$ curl -iv http://localhost:8000/
And you should receive a bunch of html back from django. 🙂
And that’s it! You have now created your own base django project that you can use to quickly create new projects in the future!