This is an example implementation of a Python application applying concepts from Clean Architecture and SOLID principles:
- The repository classes are isolated behind interfaces, enforcing the Interface Segregation principle and the Inversion of Control design pattern
- The application frameworks are decoupled from the domain logic
- The storage layer is decoupled from the domain logic
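The layering above can be sketched with a minimal example. The `Book` entity, service, and repository names below are illustrative only, not part of this template; the point is that the domain depends on a narrow interface, while concrete storage (SQLAlchemy-backed or in-memory) stays swappable:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Book:
    """A domain entity: plain data, no framework or ORM imports."""
    id: int
    title: str


class BookRepositoryInterface(Protocol):
    """Narrow interface the domain depends on (Interface Segregation)."""

    def get(self, book_id: int) -> Book: ...
    def save(self, book: Book) -> None: ...


class BookService:
    """Domain logic receives the repository through its interface,
    so the storage implementation can be replaced without touching it."""

    def __init__(self, repository: BookRepositoryInterface) -> None:
        self._repository = repository

    def rename(self, book_id: int, new_title: str) -> Book:
        book = self._repository.get(book_id)
        renamed = Book(id=book.id, title=new_title)
        self._repository.save(renamed)
        return renamed


class InMemoryBookRepository:
    """A test double satisfying the same Protocol a SQLAlchemy-backed
    repository would implement."""

    def __init__(self) -> None:
        self._books: dict[int, Book] = {}

    def get(self, book_id: int) -> Book:
        return self._books[book_id]

    def save(self, book: Book) -> None:
        self._books[book.id] = book
```

Because `BookService` only knows the Protocol, tests can inject `InMemoryBookRepository` while production wires in a database-backed one.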
Out of the box, this template provides some commonly used functionality:
- API Documentation using FastAPI
- Async tasks execution using Celery
- Repository pattern for databases using SQLAlchemy and SQLAlchemy bind manager
- Database migrations using Alembic (configured supporting both sync and async SQLAlchemy engines)
- [TODO] Producer and consumer to emit and consume events using CloudEvents format on Confluent Kafka
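Since the Kafka producer and consumer are still a TODO, the snippet below only illustrates the CloudEvents 1.0 envelope such events would carry (the required context attributes are `id`, `source`, `specversion`, and `type`); the event type and source values are made up for the example:

```python
import json
import uuid
from datetime import datetime, timezone


def make_cloudevent(event_type: str, source: str, data: dict) -> dict:
    """Build a dict with the required CloudEvents 1.0 context attributes
    (id, source, specversion, type) plus optional time and data fields."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": source,
        "type": event_type,
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }


# The JSON payload that would be published to a Kafka topic:
event = make_cloudevent(
    event_type="com.example.book.created",  # illustrative type
    source="/bookservice",                  # illustrative source
    data={"book_id": 1, "title": "Clean Architecture"},
)
payload = json.dumps(event)
```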
The detailed documentation is available:
- Online on GitHub pages
- Offline by running `make docs` after installing dependencies with `make dev-dependencies`
Create your GitHub repository using this template (the big green "Use this template" button).
Optionally tweak the name and authors in the `pyproject.toml` file; however, the metadata
is not used when building the application, nor referenced anywhere in the code.
Using Docker:
- `make containers`: Build containers
- `docker compose run --rm dev make migrate`: Run database migrations
- `docker compose up dev`: Run the HTTP application with hot reload
- `docker compose up celery-worker`: Run the Celery worker
- `docker compose run --rm test`: Run the test suite
Locally:
- `make install-dependencies`: Install requirements
- `make dev-dependencies`: Install dev requirements
- `make update-dependencies`: Update requirements
- `make migrate`: Run database migrations
- `make dev`: Run the HTTP application with hot reload
- `make test`: Run the test suite
- `make check`: Run tests, code style and lint checks
- `make fix`: Run tests, code style and lint checks with automatic fixes (where possible)
Python Docker images tend to become large after installing the application requirements (the slim base is ~150 MB uncompressed), so it is worth the effort to minimise the image size, even if it requires a slightly more complex multistage Dockerfile.
The following setup keeps the production image to a minimal size ("only" ~390 MB):
- 150 MB base image
- 165 MB installed Python dependencies
- 73 MB Poetry + updated pip
With the same pipeline the "test" image is instead ~850 MB: more than 400 MB that would otherwise end up as a traffic cost on each image pull.