-
I have the same question. In other, non-self-configuring models I have addressed this need with a weighted cross-entropy term in the loss function. During its self-configuration, does nnUNet count the class pixels in the labels? If not, is there another way to supply cross-entropy weights? Thank you, Bill
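For reference, the standard recipe behind a weighted cross-entropy is inverse class frequency computed from the label pixel counts. nnU-Net does not do this for you by default (its default loss is Dice + unweighted cross-entropy), but the weights themselves are easy to derive. A minimal sketch, with hypothetical pixel counts:

```python
import numpy as np

def inverse_frequency_weights(pixel_counts):
    """Per-class cross-entropy weights proportional to inverse class
    frequency, rescaled so the weights average to 1."""
    counts = np.asarray(pixel_counts, dtype=np.float64)
    inv = 1.0 / counts
    return inv / inv.mean()

# Hypothetical counts: background, two common classes, one rare class.
counts = [1_000_000, 50_000, 40_000, 1_000]
w = inverse_frequency_weights(counts)
# The rare class receives by far the largest weight; the array could then
# be passed to e.g. torch.nn.CrossEntropyLoss(weight=torch.as_tensor(w)).
```

Wiring such weights into nnU-Net would still require a custom trainer subclass that overrides the loss; the snippet only shows how the weights are computed.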
-
Hello.
My dataset is class-imbalanced: one of my foreground classes is severely under-represented compared to the other two. [class histogram image attached]
After training several configurations and ensembling, the confusion matrix of the model's predictions reflects the imbalance: the model performs well on the better-represented classes but poorly on the rare one.
I've looked at the trainer variant nnUNetTrainer_probabilisticOversampling, but since my batch size is only 2 I'm not sure it can help. Is there a way to make the trainer oversample the under-represented class more aggressively?
Thanks,
Daryl.
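On why the probabilistic variant can still matter at batch size 2: a deterministic scheme that forces a fixed number of forced-foreground samples per batch can only realize fractions of 0, 0.5, or 1 with two samples, whereas a per-sample random draw reaches any target fraction in expectation. A minimal sketch of that idea (the function name and parameters are illustrative, not nnU-Net's actual API):

```python
import random

def batch_foreground_flags(batch_size, oversample_prob, rng):
    """Probabilistic oversampling sketch: each sample in the batch
    independently gets a forced-foreground patch with probability
    `oversample_prob`, instead of forcing a fixed count per batch."""
    return [rng.random() < oversample_prob for _ in range(batch_size)]

rng = random.Random(0)
n_batches = 10_000
draws = [batch_foreground_flags(2, 0.7, rng) for _ in range(n_batches)]
rate = sum(sum(d) for d in draws) / (2 * n_batches)
# rate converges to the 0.7 target despite the batch size of 2.
```

Note this only controls how often a patch contains *some* foreground; biasing sampling toward one specific rare class would require additional logic over the per-class location maps.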