Fold "all" vs 5-folds in nnUnet #599
-
Hello, I was wondering what the difference is between running 5-fold cross-validation and running "all" as your fold (https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/common_questions.md#do-i-need-to-always-run-all-u-net-configurations) in nnUNet, in terms of reporting scores. Are the scores generated by running "all" basically just training scores, since it uses all patients as training data? In general, I have 3 classes with around 30 patients, but one of the classes is present in only 15 patients. So when I run the 5-fold cross-validation models, the minority class is not well represented in the validation set of each model. Would it be better to have a single "all"-fold model in that case? Thanks.
-
When running the 5-fold CV, you get scores for the respective validation sets.
If you set fold='all', then you train and validate on all training cases, so there are no scores you can report. In that case you need a held-out test set or held-out validation set to report scores.
My recommendation is to always use the 5-fold cross-validation and have a held-out test set. Use the 5 models from the cross-validation as an ensemble to predict the test set.
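To make the recommended setup concrete, here is a minimal, hypothetical sketch (plain Python, not nnU-Net code; the case names, placeholder "model", and prediction function are all illustrative assumptions): carve off a held-out test set first, split the remaining cases into 5 folds, report per-fold validation scores, and average the 5 fold models' predictions on the test set.

```python
# Illustrative sketch only -- NOT nnU-Net's implementation. Case names,
# train_model, and predict are placeholder assumptions for demonstration.
import random

random.seed(0)
cases = [f"case_{i:02d}" for i in range(30)]  # ~30 patients, as in the question
random.shuffle(cases)
test_set, train_set = cases[:6], cases[6:]    # held-out test set, never trained on

# Split the remaining training cases into 5 folds.
folds = [train_set[i::5] for i in range(5)]

def train_model(train_cases):
    # Placeholder "model": just records how many cases it saw.
    return {"n_train": len(train_cases)}

models = []
for k in range(5):
    val_cases = folds[k]
    trn_cases = [c for f in folds if f is not folds[k] for c in f]
    models.append(train_model(trn_cases))
    # Here you would evaluate on val_cases -- these per-fold validation
    # scores are what the 5-fold CV lets you report. With fold='all'
    # there is no such held-out fold, hence no scores.

# Ensemble: average the 5 fold models' (dummy) predictions on the test set.
def predict(model, case):
    return 1.0  # placeholder prediction

ensemble_preds = {
    c: sum(predict(m, c) for m in models) / len(models) for c in test_set
}
```

Note the design point this illustrates: because every training case appears in exactly one validation fold, the 5-fold CV gives you honest scores on all training data, while the final prediction quality comes from ensembling all 5 models on the untouched test set.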