Question regarding dataset #22
Comments
Hi, thanks for your interest in our work! I think the issue comes from data loading. It seems that, for some reason, your data has been loaded as 90 x 90 x 1 x 15 rather than the 90 x 90 x 3 x 5 you specified. The padding fails when there is only one slice in each volume. Once the data is loaded in the correct shape, I think the code should work. However, I do have concerns about the final denoising quality, given that the number of slices is so small (only 3). The examples used in the paper usually have more than 30 slices per volume.
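One way to double-check what shape actually reaches the loader is to inspect the nii.gz file with nibabel before training. A minimal sketch (the filename is a placeholder, and the assumption that the last axis holds 5 volumes of 3 slices each is something to verify against how the file was exported):

```python
import nibabel as nib
import numpy as np

# Load the volume and print the shape the dataset class will actually see.
img = nib.load("my_scan.nii.gz")  # placeholder path
data = np.asarray(img.dataobj, dtype=np.float32)
print("loaded shape:", data.shape)  # expected: (90, 90, 3, 5)

# If the file was exported as (90, 90, 1, 15), i.e. slices and volumes fused
# into the last axis, it can be put back into (90, 90, 3, 5) before saving:
if data.shape == (90, 90, 1, 15):
    # assumes the last axis is ordered as 5 volumes x 3 slices (slice fastest)
    data = data.reshape(90, 90, 5, 3).transpose(0, 1, 3, 2)
    print("reshaped to:", data.shape)  # (90, 90, 3, 5)
    nib.save(nib.Nifti1Image(data, img.affine), "my_scan_fixed.nii.gz")
```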
I appreciate the prompt response. I think my original data is 90 x 90 x 5 x 3 (evolution time x magnetic field). I understand that my data is not the same, so I was wondering whether you have any suggestions for tailoring the method to it. Should I preprocess my data into the layout torch expects, i.e. (batch, channel, height, width), or is there anything else I could look into? (See the sketch after the snippet below.) I know the slice count is only 5 and that this might hinder the results, but I am trying to adapt this ML method and compare it to Patch2Self, which worked very well after some initial preprocessing! :)

One question I would like to ask: how many images are in the HARDI sample? In my case it is a single image, but I want to go through multiple Gaussian-noise-level images in nii.gz format.

This is the start of my dataset class (the paste is truncated):

from curses import raw
import matplotlib
from torch.utils.data import Dataset


class MRIDataset(Dataset):
    ...


if __name__ == "__main__":
    ...
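On the (batch, channel, height, width) question mentioned above: here is a minimal sketch of that kind of rearrangement, assuming the data is already a 90 x 90 x 3 x 5 array. Treating the 5 volumes as the batch and the 3 slices as channels is just one illustrative choice, not something taken from the DDM2 code:

```python
import numpy as np
import torch

# Illustrative 4D array: (height, width, slices, volumes) = (90, 90, 3, 5)
vol = np.random.rand(90, 90, 3, 5).astype(np.float32)

# Rearrange to the (batch, channel, height, width) layout torch convolutions
# expect: volumes become the batch dimension, slices become channels.
tensor = torch.from_numpy(vol).permute(3, 2, 0, 1).contiguous()
print(tensor.shape)  # torch.Size([5, 3, 90, 90])
```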
Has this been applied to another type of MRI dataset?
The MRI data I am working with has dimensions 90, 90, 3, 5 (magnetic field x evolution time).
I tried to combine the images as I did for Patch2Self, which seemed to work.
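A sketch of the kind of combination meant here, assuming one 3D nii.gz file per Gaussian-noise level and stacking them along a new last axis (the file names are placeholders):

```python
import nibabel as nib
import numpy as np

# Placeholder file names: one 3D nii.gz image per Gaussian-noise level.
files = ["noise_level_0.nii.gz", "noise_level_1.nii.gz", "noise_level_2.nii.gz"]

# Load each 3D volume and stack along a new last axis -> (H, W, slices, N).
volumes = [np.asarray(nib.load(f).dataobj, dtype=np.float32) for f in files]
combined = np.stack(volumes, axis=-1)
print("combined shape:", combined.shape)

# Save the combined 4D image, reusing the affine of the first file.
nib.save(nib.Nifti1Image(combined, nib.load(files[0]).affine), "combined_4d.nii.gz")
```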
I get this error:
export CUDA_VISIBLE_DEVICES=0
24-03-10 19:49:07.449 - INFO: [Phase 1] Training noise model!
Loaded data of size: (90, 90, 1, 15)
Traceback (most recent call last):
File "/Users/dolorious/Desktop/MLMethods/DDM2/train_noise_model.py", line 42, in
train_set = Data.create_dataset(dataset_opt, phase)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dolorious/Desktop/MLMethods/DDM2/data/init.py", line 30, in create_dataset
dataset = MRIDataset(dataroot=dataset_opt['dataroot'],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dolorious/Desktop/MLMethods/DDM2/data/mri_dataset.py", line 35, in init
self.raw_data = np.pad(raw_data.astype(np.float32), ((0,0), (0,0), (in_channel//2, in_channel//2), (self.padding, self.padding)), mode='wrap').astype(np.float32)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dolorious/.pyenv/versions/3.12.0/lib/python3.12/site-packages/numpy/lib/arraypad.py", line 819, in pad
raise ValueError(
ValueError: can't extend empty axis 3 using modes other than 'constant' or 'empty'
(ddm2) dolorious@bigDPotter DDM2 %
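For what it is worth, the ValueError at the end comes straight from numpy: np.pad with mode='wrap' cannot extend an axis of length zero, which suggests axis 3 had length 0 by the time np.pad was called (presumably a downstream effect of the data being loaded in the wrong shape). A minimal sketch reproducing just the numpy behavior, with illustrative shapes rather than the exact ones inside DDM2:

```python
import numpy as np

# Minimal reproduction of the ValueError from the traceback above:
# 'wrap' padding cannot extend an axis whose length is 0.
empty_axis = np.zeros((90, 90, 1, 0), dtype=np.float32)
try:
    np.pad(empty_axis, ((0, 0), (0, 0), (1, 1), (2, 2)), mode="wrap")
except ValueError as err:
    print(err)  # can't extend empty axis 3 using modes other than 'constant' or 'empty'
```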