Polaris Observations API

Code style: black

The Observations API is part of the Polaris platform (formerly DHOS). This service stores information about patient vital sign observations, such as blood pressure, heart rate, temperature, oxygen saturation, oxygen therapy, ACVPU and respiratory rate.

Maintainers

The Polaris platform was created by Sensyne Health Ltd., and has now been made open-source. As a result, some of the instructions, setup and configuration will no longer be relevant to third-party contributors. For example, some of the libraries used may not be publicly available, or docker images may not be accessible externally. In addition, CI/CD pipelines may no longer function.

For now, Sensyne Health Ltd. and its employees are the maintainers of this repository.

Setup

These setup instructions assume you are using out-of-the-box installations of pyenv and poetry.

You can run the following commands locally:

make install  # Creates a virtual environment using pyenv and installs the dependencies using poetry
make lint  # Runs linting/quality tools including black, isort and mypy
make test  # Runs unit tests

You can also run the service locally using the script run_local.sh, or in dockerized form by running:

docker build . -t <tag>
docker run <tag>

Documentation

This service originally formed part of the dhos-services-api but was split out into its own service as part of ADR016.

| Endpoint | Method | Auth? | Description |
| --- | --- | --- | --- |
| /running | GET | No | Verifies that the service is running. Used for monitoring in Kubernetes. |
| /version | GET | No | Get the version number, CircleCI build number, and git hash. |
| /dhos/v2/observation_set | POST | Yes | Create a new observation set, which has been scored. This endpoint may trigger the generation of an ORU HL7 message and a BCP PDF for SEND. |
| /dhos/v2/observation_set | GET | Yes | Get a list of observation sets associated with one or more encounters (specified by UUID). An invalid encounter UUID will return an empty list. |
| /dhos/v2/observation_set/{observation_set_id} | PATCH | Yes | Update an existing observation set with new or changed details. |
| /dhos/v2/observation_set/{observation_set_id} | GET | Yes | Get an observation set by UUID. |
| /dhos/v2/observation_set/latest | GET | Yes | Get the most recently recorded observation set associated with a given encounter (specified by UUID). Multiple encounters may be given, but only a single observation set is returned. |
| /dhos/v2/observation_set/latest | POST | Yes | Get the most recently recorded observation set associated with each encounter. Encounter UUIDs are passed in the request body. |
| /dhos/v2/observation_set_search | GET | Yes | Get a list of observation sets, filtered by location UUIDs and by date range. Only observation sets recorded against the provided locations, and during the provided date range, will be returned. |
| /dhos/v2/observation_set_search | POST | Yes | Get a list of observation sets, filtered by location UUIDs and by date range. Only observation sets recorded against the provided locations, and during the provided date range, will be returned. |
| /dhos/v2/patient/{patient_id}/observation_set | GET | Yes | Get a list of observation sets associated with a given patient (specified by UUID). Note: this will only work for observation sets created with a patient UUID, which doesn't happen for SEND. |
| /dhos/v2/observation_set/count | POST | Yes | Return the number of observation sets associated with each of a list of encounter UUIDs. |
| /dhos/v2/observation_sets | GET | Yes | Get a list of observation sets which have been modified after the specified date and time, e.g. modified_since=2020-12-30 will include an observation set modified at 2020-12-30 00:00:00.000001 or later. |
| /dhos/v2/aggregate_obs | POST | Yes | Refresh the data in the aggregate observation sets view. |
| /dhos/v2/on_time_obs_stats | POST | Yes | Get aggregate data for the number and percentage of observation sets recorded on time, by location and by risk. |
| /dhos/v2/missing_obs_stats | POST | Yes | Get aggregate data for the number and percentage of observation sets which have missing observations, by location. |
| /dhos/v2/on_time_intervals | POST | Yes | Get aggregate data for the number of observation sets grouped by the interval of time taken relative to the expected time, by risk category. |
| /dhos/v2/observation_sets_by_month | POST | Yes | Get aggregate data for the number of observation sets grouped by month. |
| /dhos/v2/observation_sets_by_month | GET | Yes | Get aggregate data for the number of observation sets grouped by location and month. |
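
As an illustration of how these endpoints are consumed, the snippet below sketches two requests against a locally running instance: the unauthenticated /running health check and an authenticated fetch of a single observation set by UUID. The base URL, the bearer-token auth scheme and the placeholder UUID are assumptions for the example, not something defined by this repository.

```python
import requests

# Assumed base URL for a locally running instance; adjust for your deployment.
BASE_URL = "http://localhost:5000"
# Assumed bearer-token auth; supply a valid JWT from your own auth setup.
TOKEN = "<jwt>"

# Unauthenticated health check (GET /running).
requests.get(f"{BASE_URL}/running").raise_for_status()

# Fetch a single observation set by UUID (placeholder UUID shown).
observation_set_id = "11111111-2222-3333-4444-555555555555"
response = requests.get(
    f"{BASE_URL}/dhos/v2/observation_set/{observation_set_id}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
print(response.json())
```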

Requirements

At a minimum you require a system with Python 3.9. Tox 3.20 is required to run the unit tests, and Docker with docker-compose is required to run the integration tests. See Development environment setup for a more detailed list of tools that should be installed.

Deployment

All development is done on a branch tagged with the relevant ticket identifier. Code may not be merged into develop unless it passes all CircleCI tests. ⛅ After merging to develop, the tests run again and, if successful, the code is built into a docker container and uploaded to our Azure container registry. It is then deployed to test environments controlled by Kubernetes.

Testing

Unit tests

🔬 Either use make or run tox directly.

tox : Running make test or tox with no arguments runs tox -e lint,default

make clean : Remove tox and pyenv virtual environments.

tox -e debug : Runs last failed unit tests only, with the debugger invoked on failure. Additional py.test command line arguments may be given preceded by --, e.g. tox -e debug -- -k sometestname -vv

make default (or tox -e default) : Installs all dependencies, verifies that lint tools would not change the code, runs security check programs then runs unit tests with coverage. Running tox -e py39 does the same but without starting a database container.

tox -e flask : Runs flask within the tox environment. Pass arguments after --. e.g. tox -e flask -- --help for a list of commands. Use this to create database migrations.

make help : Show this help.

make lint (or tox -e lint) : Run black, isort, and mypy to clean up source files.

make openapi (or tox -e openapi) : Recreate API specification (openapi.yaml) from Flask blueprint

make pyenv : Create pyenv and install required packages (optional).

make readme (or tox -e readme) : Updates the README file with database diagram and commands. (Requires graphviz dot to be installed.)

make test : Test using tox

make update (or tox -e update) : Updates the poetry.lock file from pyproject.toml

Integration tests

🔩 Integration tests are located in the integration-tests sub-directory. After changing into this directory you can run the following commands:

Issue tracker

🐛 Bugs related to this microservice should be raised on Jira as PLAT-### tickets with the component set to Locations.

Database migrations

Any changes affecting the database schema should be reflected in a database migration. Simple migrations may be created automatically:

$ tox -e flask -- db migrate -m "some description"

More complex migrations may be handled by creating a migration file as above and editing it by hand. Don't forget to include the reverse migration needed to downgrade the database.
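
A hand-edited migration follows the standard Alembic pattern used by Flask-Migrate: an upgrade() function applying the change and a downgrade() reversing it. The sketch below is purely illustrative; the table name, column name and revision identifiers are assumptions, not part of this service's schema.

```python
"""Illustrative migration: add a hypothetical column.

The table, column and revision identifiers here are examples only.
"""
import sqlalchemy as sa
from alembic import op

revision = "abc123example"
down_revision = "def456example"


def upgrade() -> None:
    # Apply the schema change.
    op.add_column(
        "observation_set",
        sa.Column("example_flag", sa.Boolean(), nullable=True),
    )


def downgrade() -> None:
    # Reverse the schema change so the database can be downgraded.
    op.drop_column("observation_set", "example_flag")
```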

Configuration

  • DATABASE_USER, DATABASE_PASSWORD, DATABASE_NAME, DATABASE_HOST, DATABASE_PORT configure the database connection (a minimal sketch of combining them follows this list).
  • LOG_LEVEL=ERROR|WARN|INFO|DEBUG sets the log level.
  • LOG_FORMAT=colour|plain|json configures the logging format. JSON is used for the running system, but the others may be more useful during development.
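
For local experimentation, the database variables can be combined into a SQLAlchemy-style Postgres connection string as sketched below. The defaults and the URI format are assumptions for illustration; the service's own configuration code may assemble them differently.

```python
import os

# Read the database settings from the environment, with illustrative defaults
# for local development; the defaults are assumptions, not the service's own.
db_user = os.environ.get("DATABASE_USER", "postgres")
db_password = os.environ.get("DATABASE_PASSWORD", "postgres")
db_host = os.environ.get("DATABASE_HOST", "localhost")
db_port = os.environ.get("DATABASE_PORT", "5432")
db_name = os.environ.get("DATABASE_NAME", "observations")

# SQLAlchemy-style Postgres URI built from the variables above.
database_uri = f"postgresql://{db_user}:{db_password}@{db_host}:{db_port}/{db_name}"
print(database_uri)
```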

Database

Observations are stored in a Postgres database.

Database schema diagram
