Crop dataset by optic disc? #7
@SamuelMarks, you are welcome.
@seva100 Hmm, maybe it's easier if I explain the pipeline I'm envisioning:
So my question is, using your network + tools, how do I do the first two steps?
@SamuelMarks this is, of course, a good approach. You can take the validation indices for e.g. RIM-ONE v.3 that I used (generated as …). Also note that the provided pretrained networks can perform much worse on images from different / biased domains, as a small number of training samples was used for all networks. If you'd like to try predicting the diagnosis, I can recommend this paper as a very decent one about CNNs for glaucoma diagnosis; they extract more relevant features for glaucoma than pure CDR.
@seva100 Actually I am still a little confused about how to set this up. Can you provide a worked example? I.e.: given a dataset of fundus images in …
PS: Will have the Python 3 compatibility PR to you sometime today.
@SamuelMarks If you use the hdf5 datasets I provide and they reside in `folder_dir`, you can crop and save the images like this:

```python
import os
import h5py
from skimage.transform import resize
from imageio import imsave

h5f = h5py.File(os.path.join("folder_dir", "RIM-ONE v3.hdf5"), 'r')
images = h5f['RIM-ONE v3/512 px/images']
disc_locations = h5f['RIM-ONE v3/512 px/disc_locations']
# disc_locations[i] contains (min_i, min_j, max_i, max_j)

output_res = (256, 256, 3)
for i in range(len(images)):
    min_i, min_j, max_i, max_j = disc_locations[i]
    cropped = images[i, min_i:max_i, min_j:max_j]
    resized = resize(cropped, output_res)
    imsave(os.path.join('new_folder_dir', '{}.png'.format(i)), resized)
```

If you are only provided with the ground-truth masks, or with masks predicted by some network, you can for example use the following function to get the bounding boxes from OD masks:

```python
import numpy as np
from operator import attrgetter
from skimage import measure
from keras import backend as K

def bboxes_from_disc_masks(Y, gap=1.0 / 25.0):
    """Accepts:
    * Y -- numpy array of shape (N, H, W, C) if TF ordering is enabled,
      or (N, C, H, W) if Theano ordering is enabled, with binary values --- masks of OD
    * gap -- float; fraction of the image side length left as a margin on each side.
    Returns:
    * disc_locations -- numpy array of shape (Y.shape[0], 4) --- inferred locations of OD,
      where disc_locations[i] contains (min_i, min_j, max_i, max_j)
    """
    disc_locations = np.empty((Y.shape[0], 4), dtype=np.float64)
    for i in range(Y.shape[0]):
        if K.image_dim_ordering() == 'th':
            disc = Y[i, 0]
            h, w = Y.shape[2:4]
        else:
            disc = Y[i, ..., 0]
            h, w = Y.shape[1:3]
        labeled = measure.label(disc)
        region_props = measure.regionprops(labeled)
        if len(region_props) == 0:
            # no component found (this can theoretically happen due to the data augmentation)
            disc_locations[i] = (0, 0, 1, 1)
        else:
            # there should be only 1 component, so taking the largest is a safety measure
            component = max(region_props, key=attrgetter('area'))
            gap_i = int(gap * h)
            gap_j = int(gap * w)
            disc_locations[i][0] = max(component.bbox[0] - gap_i, 0) / float(h)
            disc_locations[i][1] = max(component.bbox[1] - gap_j, 0) / float(w)
            disc_locations[i][2] = min(component.bbox[2] + gap_i, h - 1) / float(h)
            disc_locations[i][3] = min(component.bbox[3] + gap_j, w - 1) / float(w)
    return disc_locations
```

I hope this answers your question.
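Since `bboxes_from_disc_masks` returns coordinates normalized by the image side lengths, they have to be scaled back to pixel indices before slicing. A minimal sketch of that conversion (the function name and example sizes are illustrative, not from the repo):

```python
import numpy as np

def crop_with_fractional_bbox(image, bbox):
    """Crop an (H, W, C) image using a (min_i, min_j, max_i, max_j) bbox
    expressed as fractions of the image height/width."""
    h, w = image.shape[:2]
    min_i = int(bbox[0] * h)
    min_j = int(bbox[1] * w)
    max_i = int(bbox[2] * h)
    max_j = int(bbox[3] * w)
    return image[min_i:max_i, min_j:max_j]

# usage with synthetic data: the central half of a 512x512 image
image = np.zeros((512, 512, 3))
bbox = np.array([0.25, 0.25, 0.75, 0.75])
cropped = crop_with_fractional_bbox(image, bbox)
```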
Thanks, so @seva100 if I had no ground truths, would this still [nominally] work for cropping out the optic nerve? (e.g. only using the ground truths that you provided in RIM-ONE) For example, taking this fundus image from Wikipedia and cropping such that only the optic disc + 20 px remain.
@SamuelMarks if you don't have the ground truth information, you either need to crop the images by hand or extract bounding boxes from the output of a segmentation model.
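A segmentation model typically outputs per-pixel probabilities, so its predictions need to be binarised before being fed into a bbox routine like `bboxes_from_disc_masks` above. A minimal sketch (the 0.5 threshold is an assumption, not a value from the repo):

```python
import numpy as np

def binarize_predictions(pred, threshold=0.5):
    """Convert a (N, H, W, 1) array of per-pixel OD probabilities
    into a binary mask array of the same shape."""
    return (pred >= threshold).astype(np.uint8)

# usage with synthetic predictions for 2 images
pred = np.random.rand(2, 128, 128, 1)
masks = binarize_predictions(pred)
```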
So this isn't a segmentation model then?
This repository contains segmentation models, if that's what you're asking.
Let me rephrase: can I use this network to automatically crop out the optic nerve (+20 px) from my fundus photo dataset? [Given that my fundus photo dataset is wholly unannotated in this respect]
Yes, you can, sure.
@SamuelMarks Hello, I am also trying to crop the fundus image around the optic disc area. I tried to use the pre-trained RIM-ONE v3 model, but I could not load it successfully. How did you do the cropping? Could you please provide some code for reference?
@xqhuang123 You'll want to tag @seva100 |
Thanks for open-sourcing this.
Noticed you had this line in one of your notebooks:
How about automating the crop of images by optic disc? I was thinking to experiment with showing slightly more (like 15% extra), then doing another pass through all the images to make them equal in pixel height and width.
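The two-pass idea above (crop with some extra margin around the disc, then bring everything to a common square size) could be sketched as follows; the 15% margin and 256 px output side are the experimental values mentioned, and the function name is my own:

```python
import numpy as np
from skimage.transform import resize

def crop_and_standardize(image, bbox_px, margin=0.15, out_side=256):
    """Crop `image` around a pixel bbox (min_i, min_j, max_i, max_j),
    expanded by `margin` of the bbox size on each side, then resize
    the crop to (out_side, out_side)."""
    h, w = image.shape[:2]
    min_i, min_j, max_i, max_j = bbox_px
    # expand the bbox by a fraction of its own size, clipped to the image
    di = int((max_i - min_i) * margin)
    dj = int((max_j - min_j) * margin)
    min_i = max(min_i - di, 0)
    min_j = max(min_j - dj, 0)
    max_i = min(max_i + di, h)
    max_j = min(max_j + dj, w)
    cropped = image[min_i:max_i, min_j:max_j]
    return resize(cropped, (out_side, out_side) + image.shape[2:])

# usage with synthetic data
image = np.zeros((512, 512, 3))
out = crop_and_standardize(image, (100, 100, 300, 300))
```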