Additional information and better logic for the acceptance test feature #1085
Labels
- epic: Used for bigger pieces of work. Usually split between several issues.
- status: Ready: An issue that is ready to be worked on.
- tests: Anything related to our tests.
Milestone
The "acceptance test" has been created to inform the developer of the impact a PR on the datasets available on the Mobility Database, in order to:
The first version of this feature (built in PR #848 and included in the 3.0.0 release) provides our users with the following:
- acceptance_test_report, containing the list of IDs of datasets that contain an additional error, the error type responsible, and the number of occurrences of that error in each dataset (a minimal parsing sketch follows this list)
- corrupted_sources_report, containing the list of corrupted dataset IDs (meaning we don't know how the PR impacts them)
- [acceptance test skip], which can be used to prevent this test from running (e.g., when fixing a typo or working on documentation)
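As an illustration, here is a minimal sketch of how the acceptance_test_report could be parsed, assuming it is published as a CSV file with dataset_id, error_type, and count columns (the file name, format, and column names are assumptions, not the actual schema):

```python
# Hypothetical sketch: the actual report format and column names may differ.
import csv
from collections import defaultdict


def summarize_report(path: str) -> dict:
    """Group error counts by dataset ID from an acceptance test report CSV."""
    summary = defaultdict(dict)
    with open(path, newline="", encoding="utf-8") as report_file:
        for row in csv.DictReader(report_file):
            # Each row is assumed to describe one error type for one dataset.
            summary[row["dataset_id"]][row["error_type"]] = int(row["count"])
    return dict(summary)


if __name__ == "__main__":
    for dataset_id, errors in summarize_report("acceptance_test_report.csv").items():
        print(dataset_id, errors)
```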
Outcome
We have determined that, in order to provide developers with all the information they need, the following changes should be added to this feature. This list will be updated as we gather more feedback.
How will this work?
This additional information could be shared either by providing a Google Colab notebook with code snippets that produce these analytics, or by changing the acceptance test report itself.
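For example, a Colab-style snippet (a sketch only; the report path and the dataset_id, error_type, and count column names are assumptions) could aggregate the report to show which error types a PR introduces most often:

```python
# Hypothetical Colab snippet: file name and column names are assumptions.
import pandas as pd

report = pd.read_csv("acceptance_test_report.csv")

# How many datasets each error type affects, and how often it occurs in total.
by_error = (
    report.groupby("error_type")
    .agg(
        datasets_affected=("dataset_id", "nunique"),
        total_occurrences=("count", "sum"),
    )
    .sort_values("total_occurrences", ascending=False)
)
print(by_error)
```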
Note
The architecture of the Mobility Database is currently under reconsideration, and since this feature depends on it, we will wait until a decision has been made (February 2022) before moving forward with these changes.