Configure Pipenv and Pre-Commit
init project repo
mkdir django-project-configuration
cd !$
git init
Create a project directory and initialize a git repository in it.
**/__pycache__/**
*pyc
*env
**/.vscode/**
Create a project .gitignore
file. This excludes python cache files and directories, as well as any .env
files and the .vscode directory.
instantiate pipenv virtualenv and install dev packages
pipenv install --python 3.8
pipenv install --dev pre-commit pytest-django
Create a pipenv virtual environment with a python 3.8 interpreter.
Install the pre-commit and pytest-django packages as dev dependencies.
The pre-commit
package allows us to easily install third party git hooks. The pytest-django
package allows us to use pytest with django.
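After these two commands, the generated Pipfile should look roughly like the following sketch (the exact source block and pinned versions may differ on your machine):

```toml
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]

[dev-packages]
pre-commit = "*"
pytest-django = "*"

[requires]
python_version = "3.8"
```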
create a pre-commit config for your project: .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 19.3b0
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v1.2.3
    hooks:
      - id: flake8
        args: [--max-line-length=120]
This pre-commit config file will run hooks on every commit.
The first hook applies black code formatting to the staged files. If there is a diff after the hook runs, meaning the code was not formatted according to black's guidelines, the hook fails and we will need to stage the changes it made.
The second hook lints our staged code with flake8, a popular python code linter. This helps us catch things like typos before committing them.
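To give a feel for what the black hook does, here is a hand-made before/after illustration; the exact output is up to black, so treat this as a sketch rather than actual tool output:

```python
# Before black: inconsistent spacing and quoting.
x={'a':1,'b':2}
def f(a,b ,c): return a+b+c

# After black: normalized spacing, double quotes, and blank lines
# between top-level definitions.
x = {"a": 1, "b": 2}


def f(a, b, c):
    return a + b + c
```

Both versions behave identically; only the formatting changes, which is why a failed black hook just means re-staging the rewritten files.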
pipenv shell
pre-commit install
pre-commit run
Enter the pipenv virtual environment and install the pre-commit
hooks to your local git repo.
You can confirm things are configured correctly by running pre-commit run.
commit initial pre-commit config
git add .pre-commit-config.yaml
git commit -v
Commit your config. I always commit with the verbose -v flag so I can see what is staged for commit before actually making the commit.
This requires you to be comfortable with a text-based editor like vim, though, so if you aren't, skip the -v flag for now and simply use:
git commit --message "Add pre-commit config"
Install Django, Create Project, and Configure Settings
install django and start project
pipenv install django
mkdir src
cd src
django-admin startproject webapp .
Install django and use the django-admin command to start a project called webapp
in a src directory.
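Because of the trailing dot, startproject lays the project out directly inside src instead of creating an extra nesting level. The result should look roughly like this (the file list varies slightly by django version):

```
src/
├── manage.py
└── webapp/
    ├── __init__.py
    ├── asgi.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py
```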
install additional packages
pipenv install dj-database-url conversion psycopg2-binary django_command_overrides
Let’s install a few additional packages we will need for our next steps.
- We will use dj-database-url to configure our database.
- conversion is a really helpful package written by Roberto Aguilar that converts environment variables into python types.
- psycopg2-binary is the driver that will allow us to work with a postgres database.
- django_command_overrides implements a custom template when we eventually run our startapp command.
src/webapp/settings.py
import os

import conversion
import dj_database_url
...
INSTALLED_APPS.append("django_command_overrides")
...
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
DEBUG = conversion.convert_bool(os.environ.get("DJANGO_DEBUG", "False"))
...
# Database
DATABASES = {"default": {}}
database_info = dj_database_url.config()
if database_info:
DATABASES["default"] = database_info
Now open your settings.py file and add some configuration info.
- Derive your SECRET_KEY from an environment variable. This aligns with a best practice of never committing secrets to a git repository.
- Key django's DEBUG mode off an environment variable as well. Set the default to False. Notice how we're using the conversion package to convert the string False into a python bool object.
- Add django_command_overrides to INSTALLED_APPS so we can take advantage of a custom startapp command later on.
- Use dj_database_url to define your database connection. This package derives an entire django database configuration from a single DATABASE_URL environment variable. I'll show you what this environment variable should look like in a moment.
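To make these two helpers feel less magical, here is a rough standard-library sketch of what they do. Both the set of truthy strings accepted by convert_bool and the exact fields returned by dj_database_url are assumptions for illustration; consult each package for its real behavior:

```python
from urllib.parse import urlparse

# Assumed truthy values; the real conversion package may accept others.
TRUTHY = {"1", "true", "yes", "on"}


def convert_bool(value):
    # Hypothetical stand-in for conversion.convert_bool: map common
    # truthy strings to True, everything else to False.
    return str(value).strip().lower() in TRUTHY


def parse_database_url(url):
    # Rough approximation of what dj-database-url does: split a
    # DATABASE_URL into the fields django's DATABASES setting needs.
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username or "",
        "PASSWORD": parts.password or "",
        "HOST": parts.hostname or "",
        "PORT": parts.port or "",
    }


config = parse_database_url("postgres://test:test@postgres/test")
```

With the DATABASE_URL we will use later, this yields a connection to database "test" on host "postgres" with user and password "test".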
commit a pristine copy of your django project
cd ..
git add src/
git commit -v
Now that we have added some basic configuration to our settings module, let's make a pristine commit of our project.
Note that when we commit this, our black pre-commit hook will apply black code formatting to our staged files and fail, because there is now a diff between what is staged and what the files actually contain.
To fix this we need to re-stage our files, now correctly formatted by black, for commit.
Dockerize Application For Development and Utilize Dockerized Postgres
Dockerfile
FROM python:3.8 as base
RUN pip install pipenv
ENV PROJECT_DIR /usr/local/src/webapp
ENV SRC_DIR ${PROJECT_DIR}/src
COPY Pipfile Pipfile.lock ${PROJECT_DIR}/
WORKDIR ${PROJECT_DIR}
ENV PYTHONUNBUFFERED=1
FROM base as dev
# this is a dev image build, so install dev packages
RUN pipenv install --system --dev
COPY ./src ${SRC_DIR}/
WORKDIR ${SRC_DIR}
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Create a Dockerfile. Here we use a multi-stage build that generates a dev
image which we can use for local development.
.env
DJANGO_SECRET_KEY=local
DATABASE_URL=postgres://test:test@postgres/test
POSTGRES_DB=test
POSTGRES_USER=test
POSTGRES_PASSWORD=test
Create a .env
file and populate it. Our docker-compose file will take advantage of these variables in a moment.
Note the DATABASE_URL
environment variable which the dj_database_url
package will look for to build a database connection.
docker-compose.local.yml
version: "3.4"
services:
  app:
    build:
      context: .
      target: dev
    working_dir: /opt/project/src
    ports:
      - 8000:8000
    environment:
      - DJANGO_SECRET_KEY
      - DATABASE_URL
      - PYTHONPATH=/opt/project
    volumes:
      - .:/opt/project
  postgres:
    image: postgres:9.6
    volumes:
      - pg-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
volumes:
  pg-data:
    driver: local
Create a docker-compose.local.yml
file that defines two services: a postgres service, and an app service.
The postgres service will run a containerized postgres instance for us that we can use like any other database with our django application.
The app service will build the image defined in our Dockerfile and inject the DJANGO_SECRET_KEY
and DATABASE_URL
environment variables.
Rather than using the application files in the docker image, we mount our project root into the container and set its src
as our working directory.
This allows us to edit and run files on the fly without rebuilding our image every time.
We add our new working directory to our python path via the PYTHONPATH
environment variable.
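The PYTHONPATH piece is easy to gloss over, so here is a small standalone demonstration of why adding a directory to PYTHONPATH makes its packages importable. The directory and package names here are made up for the example:

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway "project root" containing a package, then import
# that package from a child interpreter whose PYTHONPATH includes the
# project root, just as our docker-compose file does for /opt/project.
with tempfile.TemporaryDirectory() as project_root:
    pkg_dir = os.path.join(project_root, "webapp")
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
        f.write("NAME = 'webapp'\n")

    env = {**os.environ, "PYTHONPATH": project_root}
    result = subprocess.run(
        [sys.executable, "-c", "import webapp; print(webapp.NAME)"],
        env=env,
        capture_output=True,
        text=True,
    )

print(result.stdout.strip())  # the child process found the package
```

Without the PYTHONPATH entry, the same import would raise a ModuleNotFoundError in the child process.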
run local docker compose bash
docker-compose -f docker-compose.local.yml run --rm app bash
python manage.py startapp ebook_store
Now test that we can run everything as expected by getting a shell inside our container and creating our first app.
exit container
exit
confirm app was created
ls
ls src
Because we mounted our local directory into the docker container, we can see the files were created on our local machine.
commit Dockerfile, docker-compose.local.yml, and Pipfiles
git add Pipfile*
git commit -v
git add Dockerfile docker-compose.local.yml
git commit -v
To wrap up, commit your Pipfiles, Dockerfile, and docker compose configuration.
In summary, you now have a django project that is itself containerized for local development, talks to a containerized instance of postgres, and runs git pre-commit hooks to enforce code formatting and linting on every commit.
Nice work!