Dear MONAI team, @Nic-Ma @yiheng-wang-nv, I don't know whether this issue has already been discussed, but the same model with the same parameters runs about 7 times slower as a Python script than in a Jupyter Notebook. In my case, one epoch takes 0.4 minutes in the Jupyter Notebook, but around 2.8 minutes when the Python script is run from the Command Prompt. Note that the Python scripts for evaluation and inference run with exactly the same runtime as the Jupyter Notebook, so the issue only affects the training script. Could you please take a look at my Python training script and let me know what is going wrong?

I also saw that MONAI has an EarlyStopHandler, which stops training when there is no improvement in the Dice score. For your 3D UNet model, what is a good "patience" value so that training stops at the right epoch? It could be anywhere between 10 and 50, but I assume you know better which value is optimal. Looking forward to hearing from you!
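For reference, a minimal sketch of how the EarlyStopHandler can be wired up, assuming an ignite-based MONAI workflow with a trainer and an evaluator that reports its Dice metric under the key "val_mean_dice" (the engine names, metric key, and patience value are illustrative assumptions, not taken from the script in question):

from monai.handlers import EarlyStopHandler

# Stop the trainer when the validation Dice score has not improved
# for `patience` consecutive validation runs.
early_stopper = EarlyStopHandler(
    patience=20,
    score_function=lambda engine: engine.state.metrics["val_mean_dice"],
    trainer=trainer,  # hypothetical SupervisedTrainer instance
)
early_stopper.attach(evaluator)  # hypothetical SupervisedEvaluator instance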
Replies: 1 comment
The issue is fixed by modifying the data loading part, changing from Dataset to CacheDataset.
from monai.data import CacheDataset, DataLoader

# Cache the outputs of the deterministic transforms in memory (cache_rate=1.0)
# so they are not recomputed on every epoch.
train_ds = CacheDataset(data=train_files, transform=train_transforms, cache_rate=1.0, num_workers=0)
train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=0)
val_ds = CacheDataset(data=val_files, transform=val_transforms, cache_rate=1.0, num_workers=0)
val_loader = DataLoader(val_ds, batch_size=1, num_workers=0)
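For comparison, the slower pipeline this replaces builds the datasets with the plain Dataset class, which reruns every transform on every sample in every epoch; CacheDataset instead applies the deterministic transforms once up front and keeps the results in memory. A minimal sketch of that original variant, assuming the same file lists and transform chains as above:

from monai.data import Dataset, DataLoader

# Slower: loading, resampling, normalization, etc. are recomputed each epoch.
train_ds = Dataset(data=train_files, transform=train_transforms)
train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=0)
val_ds = Dataset(data=val_files, transform=val_transforms)
val_loader = DataLoader(val_ds, batch_size=1, num_workers=0)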