
PIXL pipeline: make integration tests more realistic #205

Closed
4 of 5 tasks
skeating opened this issue Nov 11, 2023 · 9 comments


skeating commented Nov 11, 2023

Definition of Done / Acceptance Criteria

Testing

New integration test should cover the whole pipeline.

Documentation

No response

Dependencies

Access to GAE05 UCLH-Foundry/the-rolling-skeleton#86

Details and Comments

From UCLH-Foundry/the-rolling-skeleton#54

The current integration tests are in test/, where data/test.dcm is only checked to have reached orthanc-anon; the image is never persisted because it doesn't have the correct DICOM tags for it to be successfully processed.

We should replace the shell scripts with pytest tests so that we have a consistent testing framework and can more easily start testing other aspects end-to-end.
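
As a rough sketch of what such an end-to-end test could look like (this is illustrative only, not existing PIXL code; the orthanc-anon URL, credentials, and the `send_study_to_fake_vna` fixture are placeholders):

```python
import time

import requests

# Assumed values: adjust to whatever the docker-compose setup actually exposes.
ORTHANC_ANON_URL = "http://localhost:8043"
ORTHANC_ANON_AUTH = ("orthanc", "orthanc")


def wait_for_anon_instances(timeout: float = 120.0, interval: float = 5.0) -> list:
    """Poll orthanc-anon's REST API until at least one instance is stored."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = requests.get(f"{ORTHANC_ANON_URL}/instances", auth=ORTHANC_ANON_AUTH)
        response.raise_for_status()
        if response.json():
            return response.json()
        time.sleep(interval)
    raise TimeoutError("No instance reached orthanc-anon within the timeout")


def test_study_is_anonymised_end_to_end(send_study_to_fake_vna):
    # `send_study_to_fake_vna` is a hypothetical fixture that uploads a study
    # with realistic DICOM tags to the fake VNA and queues it for the pipeline.
    assert wait_for_anon_instances(), "expected the anonymised image to be persisted"
```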

@jeremyestein
Contributor

Regarding replacing the shell scripts, I quite liked how we did it for Emap hoover:
A docker-compose.fake_services.yml file defines the fake services required for the tests, which you can rebuild and re-run as needed, independently of running the actual tests; the tests themselves then run from a separate test runner in your IDE. This makes it much quicker to run and re-run the tests (seconds) without bringing the required services up and down unnecessarily (which can take many minutes).
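
To illustrate the idea (not the Emap hoover code itself), a session-scoped pytest fixture along these lines would let the tests assume the fake-services stack is already up rather than starting and stopping it; the service names and ports here are assumptions:

```python
import pytest
import requests

# Assumed service URLs; match these to what the fake-services compose file exposes.
FAKE_SERVICES = {
    "orthanc-raw": "http://localhost:8042",
    "orthanc-anon": "http://localhost:8043",
}


@pytest.fixture(scope="session", autouse=True)
def fake_services_reachable():
    """Fail fast if the fake-services stack isn't already running."""
    for name, url in FAKE_SERVICES.items():
        try:
            requests.get(url, timeout=2)
        except requests.ConnectionError:
            pytest.exit(f"{name} is not reachable at {url}; "
                        f"start the fake-services compose stack first")
```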

I think we already have the fake-services compose file in the form of pixl_ehr/tests/docker-compose.yml and others.

I'd like to have a more general discussion on how our tests should be structured and when they should be run (full system tests for each PR commit seems kind of excessive).

@jstutters
Contributor

test.dcm refers to an image added to the fake VNA by test_processing.add_image_to_fake_vna(). add_image_to_fake_vna gets CT_small.dcm from pydicom's test data, and CT_small.dcm has the CT modality.

test/resources/Dicom{1,2}.dcm have the DX modality, so they are potentially suitable for use in full system tests.
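
For reference, the modalities can be checked quickly with pydicom (paths assumed relative to the repo root):

```python
from pydicom import dcmread
from pydicom.data import get_testdata_file

# pydicom's bundled test image: Modality is "CT", so it won't pass a DX-only pipeline.
ct = dcmread(get_testdata_file("CT_small.dcm"))
print(ct.Modality)  # CT

# The repo's own test resources should report "DX".
dx = dcmread("test/resources/Dicom1.dcm")
print(dx.Modality)  # DX
```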

@jstutters
Contributor

test.dcm removed in commit ad83c16.

@jstutters
Contributor

Testing orthanc-raw storage management would appear to require fake DICOM data (unless anyone has a better idea?), so I'm back to working on that today.

@milanmlft
Member

> Testing orthanc-raw storage management would appear to require fake DICOM data (unless anyone has a better idea?), so I'm back to working on that today.

Yeah, I think it's worth actually checking that the recycling works as we expect, i.e. the oldest files being removed in favour of newer ones. So ideally we would have a few DICOM datasets (3?) to test the recycling, with the storage limit set so that only 2 can be retained, and then check that the oldest one gets removed, the middle one is retained, and the newest one is added.
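
Something like the sketch below could capture that check (purely illustrative; `make_fake_dicoms`, `upload_to_orthanc_raw`, and `stored_sop_uids` are hypothetical helpers, and the storage limit is assumed to be configured to hold only two instances):

```python
def test_oldest_study_is_recycled_when_storage_limit_reached():
    # Three fake DICOM datasets, uploaded oldest first; the configured storage
    # limit is assumed to have room for only two of them.
    oldest, middle, newest = make_fake_dicoms(count=3)  # hypothetical helper

    for dataset in (oldest, middle, newest):
        upload_to_orthanc_raw(dataset)  # hypothetical helper wrapping the REST API

    stored = stored_sop_uids()  # hypothetical helper: SOPInstanceUIDs currently stored
    assert oldest.SOPInstanceUID not in stored  # recycled to make room
    assert middle.SOPInstanceUID in stored      # retained
    assert newest.SOPInstanceUID in stored      # added last
```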

@stefpiatek
Contributor

> Yeah, I think it's worth actually checking that the recycling works as we expect, i.e. the oldest files being removed in favour of newer ones. So ideally we would have a few DICOM datasets (3?) to test the recycling, with the storage limit set so that only 2 can be retained, and then check that the oldest one gets removed, the middle one is retained, and the newest one is added.

We're veering into testing-a-library's-own-code territory here. Though we have tested in production with a 100 MB limit and it's doing what we'd expect!

@jstutters
Contributor

#278 tests the recycling and adds some hopefully useful functionality for generating test DICOMs that we can make use of elsewhere.
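
For anyone wanting a starting point, a minimal DX-flavoured test DICOM can be generated with pydicom along these lines (a sketch only, not the helper added in #278; the tags shown are a minimal set and the pipeline may require more):

```python
import datetime

from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

# File meta information: Digital X-Ray Image Storage SOP class.
file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.1.1"
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = file_meta
ds.SOPClassUID = file_meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.Modality = "DX"
ds.PatientID = "TEST0001"
ds.AccessionNumber = "ACC0001"
ds.StudyInstanceUID = generate_uid()
ds.SeriesInstanceUID = generate_uid()
ds.StudyDate = datetime.date.today().strftime("%Y%m%d")

# pydicom 2.x-style write; newer pydicom infers the encoding from file_meta.
ds.is_little_endian = True
ds.is_implicit_VR = False
ds.save_as("fake_dx.dcm", write_like_original=False)
```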

@milanmlft
Member

Think we can close this one, no?

@stefpiatek
Contributor

Agreed!
