
Welcome to PathFlowAI


A Convenient High-Throughput Workflow for Preprocessing, Deep Learning Analytics and Interpretation in Digital Pathology

Published in the Proceedings of the Pacific Symposium on Biocomputing 2020. Manuscript: https://psb.stanford.edu/psb-online/proceedings/psb20/Levy.pdf

Install

First, install openslide. Note: you may need to install libiconv and shapely using conda. More installation instructions will be added over time; please submit issues if you run into problems.

pip install pathflowai
install_apex
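
As a rough sketch, setting everything up inside a conda environment might look like the commands below; the conda-forge channel and the comments are assumptions rather than project requirements, so adapt them to your environment.

conda install -c conda-forge openslide libiconv shapely  # native dependencies noted above
pip install pathflowai
install_apex  # assumed to be the Apex setup helper shipped with pathflowai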

Usage

pathflowai-preprocess -h
pathflowai-train_model -h
pathflowai-monitor -h
pathflowai-visualize -h
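
For orientation, a first preprocessing run might look something like the sketch below; the preprocess_pipeline subcommand and its flags are assumptions for illustration only, so rely on each command's -h output and the Wiki for the actual interface.

pathflowai-preprocess preprocess_pipeline --basename SLIDE_01 --input_dir ./inputs --patch_size 256  # hypothetical subcommand and flags; verify with pathflowai-preprocess -h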

See the Wiki for more information on setting up and running the workflow. Please submit feedback as issues; if you have any trouble with installation, I am more than happy to provide advice and fixes.

Author

👤 Joshua Levy

🤝 Contributing

Contributions, issues and feature requests are welcome!
Feel free to check the issues page.

Figures from the Paper


Fig. 1. PathFlowAI Framework: a) Annotations and whole slide images are preprocessed in parallel using Dask; b) Deep learning prediction model is trained on the preprocessed data; c) Results are visualized; d) UMAP embeddings provide diagnostics; e) SHAP framework is used to find important regions for the prediction


Fig. 2. Comparison of PathFlowAI to preprocessing WSI in series for: a) preprocessing time, b) storage space, c) impact on the filesystem. The PathFlowAI method of parallel processing followed by centralized storage saves both time and storage space


Fig. 3. Segmentation: (a) Original annotations compared to (b) predicted annotations; (c) pathologist annotations guided by the classification model


Fig. 4. Portal Classification Results: a) Darker tiles indicate a higher assigned probability of portal classification, b) AUC-ROC curves for the test images that estimate overall accuracy given different sensitivity cutoffs, c) H&E patches (left) with corresponding SHAP interpretations (right) for four patches; the predicted probability of portal classification is shown, and on the SHAP value scale, red indicates regions that the model attributes to the portal prediction, d) UMAP embeddings of patches from the trained model, colored by original portal coverage (area of each patch covered by portal) as judged by a pathologist, with visualization of individual patches