Re-implementation of U-Net [Ronneberger, O. (MICCAI 2015)] in PyTorch.

In `unet.py`, `UNetVannila` follows the paper's original architecture (i.e. its convolution layers use no padding), while `UNet` uses padded convolutions. This repo mainly uses `UNet`.
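
A minimal sketch of the difference between the two variants, assuming a standard double-conv block; the helper name `double_conv` is illustrative, not the exact code in `unet.py`:

```python
import torch.nn as nn

def double_conv(in_ch, out_ch, padded=True):
    # padded=False reproduces the paper's "valid" convolutions (each 3x3 conv
    # shrinks the feature map by 1 px per side); padded=True keeps the spatial
    # size, as in the UNet variant used in this repo.
    pad = 1 if padded else 0
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=pad),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=pad),
        nn.ReLU(inplace=True),
    )
```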
- Tackle the "shaded white" removal problem
  - The model can't identify shaded white parts of items.
  - Stronger brightness augmentation?
  - Dilated convolutions? (see the sketch below)
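
A minimal sketch of the dilated-convolution idea, assuming it would replace a 3x3 conv in the bottleneck to enlarge the receptive field; this is purely illustrative and not implemented in the repo:

```python
import torch.nn as nn

# A 3x3 conv with dilation=2 covers a 5x5 receptive field with the same number
# of parameters; padding=2 keeps the spatial size unchanged.
dilated_conv = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)
```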
- `test.py` (see the layout sketch below)
  - `get_dataset`
  - `get_dataloader`
  - `get_model`
  - `get_optimizer`
  - `get_scheduler`
  - `Trainer`
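
A rough sketch of how a few of these helpers could look; the function names come from the list above, but the bodies, arguments, and default values are assumptions:

```python
import torch
from torch.utils.data import DataLoader

def get_dataloader(dataset, cfg):
    return DataLoader(dataset, batch_size=cfg.get("batch_size", 8),
                      shuffle=cfg.get("shuffle", True), num_workers=2)

def get_optimizer(model, cfg):
    return torch.optim.Adam(model.parameters(), lr=cfg.get("lr", 1e-3))

def get_scheduler(optimizer, cfg):
    return torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```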
- MSELoss
- BCE+Dice loss (from Kaggle; see the sketch below)
  - Loss values got messed up.
  - BCE-only works well, so the Dice term seems to degrade performance.
- Lessen the number of parameters (the 3rd-place solution uses an 8M-parameter U-Net)
  - Works well (but not better).
- Double-check for duplication between train and test data.
  - Train and test data use different ids.
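
A minimal sketch of a combined BCE+Dice loss in the style commonly shared on Kaggle; the smoothing constant and equal weighting are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BCEDiceLoss(nn.Module):
    def __init__(self, smooth=1.0):
        super().__init__()
        self.smooth = smooth

    def forward(self, logits, targets):
        # BCE on logits plus a soft Dice term on sigmoid probabilities.
        bce = F.binary_cross_entropy_with_logits(logits, targets)
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum()
        dice = (2.0 * intersection + self.smooth) / (
            probs.sum() + targets.sum() + self.smooth)
        return bce + (1.0 - dice)
```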
- BCELoss
  - In the literature, BCE is used rather than MSE, so we stick with BCE.
- Add more data augmentations (see the sketch below)
  - Rotation (45°)
  - Color jitter (brightness/contrast/saturation/hue)
  - Gaussian blur?
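
A minimal sketch of these augmentations with torchvision transforms; the parameter values are assumptions:

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomRotation(45),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.1),
    transforms.GaussianBlur(kernel_size=3),  # optional, still under consideration
    transforms.ToTensor(),
])
# Note: for segmentation, geometric transforms (e.g. rotation) must be applied
# to the mask with the same parameters as the image.
```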
- From ConvBlock to ResBlock (in the bottleneck)
  - Ref: DeepResUNet
- Use pre-activation ResBlocks for all blocks, not only the bottleneck (see the sketch below)
  - Ref: DeepResUNet
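
A minimal sketch of a pre-activation residual block (BN → ReLU → Conv ordering); the channel handling and 1x1 shortcut are assumptions, not the repo's exact implementation:

```python
import torch.nn as nn

class PreActResBlock(nn.Module):
    """BN -> ReLU -> Conv, twice, with an identity or 1x1 shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1))

    def forward(self, x):
        return self.body(x) + self.shortcut(x)
```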
- Add RandomGrayscale to help the model catch object shapes
  - Doesn't seem to help.
- Quantitative evaluation (Dice; see the sketch below)
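
A minimal sketch of a Dice-score metric for binary masks; the threshold and epsilon values are assumptions:

```python
import torch

def dice_score(logits, targets, threshold=0.5, eps=1e-7):
    # Dice coefficient between a thresholded prediction and the ground-truth mask.
    preds = (torch.sigmoid(logits) > threshold).float()
    intersection = (preds * targets).sum()
    return (2.0 * intersection + eps) / (preds.sum() + targets.sum() + eps)
```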