WARNING: This is forked from njl/progcom by @njl and customized for PyCon Canada 2018.
The goal of this app is to provide a useful tool for asynchronous review of PyCon talk submissions. The program committee's job is extensive and daunting, and I'm trying to knock together a simple web app that lets the work proceed as efficiently and effectively as possible. Requiring large groups of busy professionals to come together at the same time to chat on IRC is hard, and doesn't feel very scalable. This is my first step toward understanding how to scale the whole thing.
The setup relies heavily on Docker as the tool of choice.
- Install Docker
- Clone this repository.
- Run the migration by executing `make dev-db migration-pause migration`. (The DB password is `test`.)
- Run `docker-compose -f docker/dev.yml up -d app` to start the service.

At this point, you can access the app at http://localhost:4000/.

To reset everything (all data), run `make dev-db-reset`.
- Disable any lines that trigger email delivery via SendGrid.
- For the daily report, we use a Slack notification (webhook) instead; see the sketch after this list.
- The user approval process can be done manually by running `update users set approved_on = CURRENT_TIMESTAMP where email = :email` with `psql`, or semi-automatically by executing a script.
- Update `pull_updates.py` to decouple it from the us.pycon.org APIs, since we are integrating with Papercall (using djangocon/papercall-api-import from @djangocon) for the ETL. For the ETL process, we wrote a script named `etl_papercall_to_db.py` that borrows parts of both djangocon/papercall-api-import and `pull_updates.py`.
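For the Slack daily report, posting to an incoming webhook is enough. The sketch below is illustrative only: the `SLACK_WEBHOOK_URL` variable name and the report text are assumptions, not part of the original code.

```python
# daily_report_slack.py -- minimal sketch of posting the daily report to Slack.
# Assumes a Slack incoming webhook URL is exposed (e.g. via envdir) as
# SLACK_WEBHOOK_URL; that variable name is hypothetical.
import os
import requests


def post_daily_report(text):
    """Send the daily report text to the configured Slack webhook."""
    webhook_url = os.environ['SLACK_WEBHOOK_URL']
    resp = requests.post(webhook_url, json={'text': text})
    resp.raise_for_status()


if __name__ == '__main__':
    post_daily_report('Daily report: reviews and new proposals summary goes here.')
```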
- Log into the server. (For dev, run `docker-compose -f docker/dev.yml exec app bash`, or SSH to the server directly.)
- Run `envdir docker-config python etl_papercall_to_db.py PAPERCALL_API_KEY`.

You will need an API key (`PAPERCALL_API_KEY`) generated by Papercall for your own account (apparently available for any account with any level of access).
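For reference, here is a rough sketch of the shape of `etl_papercall_to_db.py`, assuming the Papercall v1 submissions endpoint and a simplified `proposals` table. The auth parameter, pagination, column names, and connection string below are assumptions; the real script (and the Papercall API docs) are authoritative.

```python
# etl_papercall_to_db.py -- rough sketch of the Papercall -> database ETL.
# Endpoint, auth parameter, and table layout are illustrative assumptions.
import sys

import psycopg2
import requests

PAPERCALL_URL = 'https://www.papercall.io/api/v1/submissions'


def fetch_submissions(api_key):
    """Fetch submissions from Papercall (auth mechanism assumed here)."""
    resp = requests.get(PAPERCALL_URL, params={'_token': api_key, 'per_page': 1000})
    resp.raise_for_status()
    return resp.json()


def load_into_db(submissions, dsn='dbname=test user=test password=test host=localhost'):
    """Insert each submission into a simplified, hypothetical proposals table."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for sub in submissions:
            talk = sub.get('talk', {})
            cur.execute(
                'INSERT INTO proposals (title, abstract) VALUES (%s, %s)',
                (talk.get('title'), talk.get('abstract')),
            )


if __name__ == '__main__':
    load_into_db(fetch_submissions(sys.argv[1]))
```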
The application picks up configuration from environment variables. I like to
use the envdir tool, but you can set them however you like. A complete set of
configuration values, reasonable for testing, is available in `dev-config/`.

In production, you should add a `SENTRY_DSN`.

You can install envdir via `brew install daemontools` on OS X, and
`apt-get install daemontools` on Ubuntu and Debian.
As configured by the values in `dev-config`, the application connects to a local
PostgreSQL database, with username, password, and database name 'test'.
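From the application side, env-var configuration looks roughly like the snippet below. The variable names here are hypothetical; the files in `dev-config/` define the real ones.

```python
# config_sketch.py -- illustration of reading settings from the environment.
# With envdir, each file in dev-config/ becomes an environment variable whose
# value is the file's contents. These variable names are hypothetical.
import os

DB_NAME = os.environ.get('PSQL_DATABASE', 'test')
DB_USER = os.environ.get('PSQL_USER', 'test')
DB_PASSWORD = os.environ.get('PSQL_PASSWORD', 'test')
SENTRY_DSN = os.environ.get('SENTRY_DSN')  # optional outside production
```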
The application uses a PostgreSQL database. If you're not familiar with setting
up PostgreSQL, I've included `setup_db.sql` for you. Getting to the point where
you're able to execute those commands is going to depend on your system. If
you're on an Ubuntu-like system, and you've installed PostgreSQL via something
like `apt-get install postgresql`, you can probably run the `psql` command via
something like `sudo -u postgres psql`. On OS X, if you've installed PostgreSQL
via brew, with something like `brew install postgresql`, you can probably just
type `psql`.
You can create the test database and test user via `psql template1 < setup_db.sql`.
The unit tests will create the tables for you, or you can do something like
`psql -U test test < tables.sql` to create empty tables from scratch.
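To sanity-check the database setup, a connection attempt with the test credentials should succeed and list any tables created so far (a quick check, assuming the defaults from `setup_db.sql`):

```python
# check_db.py -- quick sanity check that the test database is reachable.
import psycopg2

conn = psycopg2.connect(dbname='test', user='test', password='test', host='localhost')
with conn.cursor() as cur:
    cur.execute("SELECT tablename FROM pg_tables WHERE schemaname = 'public'")
    print([row[0] for row in cur.fetchall()])
conn.close()
```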
Make a virtualenv and `pip install -r requirements.pip`. Run the application
locally via `envdir dev-config ./app.py`, and run the tests via
`envdir dev-config py.test`.
You can fill the database up with lots of lorem ipsum nonsense by running the
script `envdir dev-config ./fill_db_with_fakes.py`. You can then log in with
an email from the sequence `user{0-24}@example.com`, and a password of `abc123`.
`[email protected]` is an administrator.
You'll need `deploy-config` in your root directory, which should have all the
appropriate secrets. From the application's root directory, you can run
`ansible-playbook -i hosts deploy.yaml`.
The process runs in two rounds; the first is called "screening", and is basically about winnowing out talks. Talks which aren't relevant for PyCon, have poorly prepared proposals, or otherwise won't make the cut, get eliminated from consideration early. Talks aren't compared to one another; a low-ish bar is set, and talks that don't make it over the bar are removed.
The second part of the process is "batch". In batch, talks are moved into groups, and those groups are then reviewed one at a time, with a winner or two picked from every group. Some groups feel weak enough that no winners are picked.
To turn on Batch, `echo 1 > dev-config/THIS_IS_BATCH`.

To disable feedback in screening, `echo 1 > dev-config/CUTOFF_FEEDBACK`.