Hi, it’s me again,

I created a diachronic landcover map using the RGB model, and it works perfectly for my needs. Thanks again for that tool!
Now, I’d like to fine-tune the base model to make it recognize parking lots. The output I’m aiming for is a binary map (parking/not-parking).
I’m guessing I need training, validation, and test datasets to reuse the base model. How can I create these datasets easily? Should I use the Odeon tool, which seems to have this functionality? If so, should my ground truth include the original classes, for example by reusing the original datasets and adding parking data on top of the existing landcover (which I assume would be “road”)? Or should the ground truth consist only of binary maps? (Yes, I’m a bit lost on this point.)
Additionally, if I use the fine-tuning functionality of your tool, will the output be a binary map (parking-lot/not-parking-lot), or will it add a new class to the 19 existing ones?
Thanks in advance for any hints on this!
Thomas
How to build the new supervision dataset is up to you. The current code for fine-tuning initializes the weights from a pre-trained model; if the number of classes does not match the pre-trained model, it replaces only the segmentation head so that it outputs the number of classes specified in the config file.
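On the dataset side, a common approach is to rasterize parking-lot polygons onto the same grid as each image patch to get binary masks. Below is only a rough sketch of that idea, outside the tool itself: the rasterio/geopandas calls are standard, but the file names, folders and class values are assumptions you would adapt to your own data layout.

# Rough sketch: rasterize parking-lot polygons into binary masks aligned with image patches.
# File names, folders and class values are hypothetical; adapt to your own data layout.
import glob
import geopandas as gpd
import rasterio
from rasterio.features import rasterize

parkings = gpd.read_file("parking_lots.gpkg")          # your parking-lot polygons

for img_path in glob.glob("patches/*.tif"):            # the image patches used for training
    with rasterio.open(img_path) as src:
        shapes = ((geom, 1) for geom in parkings.to_crs(src.crs).geometry)
        # value 1 = parking_lots, value 2 = other, matching the class indices in the config below
        mask = rasterize(shapes,
                         out_shape=(src.height, src.width),
                         transform=src.transform,
                         fill=2,
                         dtype="uint8")
        meta = src.meta.copy()
    meta.update(count=1, dtype="uint8", nodata=None)
    with rasterio.open(img_path.replace("patches/", "masks/"), "w", **meta) as dst:  # masks/ must exist
        dst.write(mask, 1)

Splitting the resulting image/mask pairs into train, validation and test sets is then up to you as well (for instance by geographic zone, to limit spatial leakage).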
For example, if you have supervision patches with binary information, you can use the following configuration:
paths:
  ckpt_model_path: path/to/the/pretrained/model
tasks:
  train: True
train_tasks:
  init_weights_only_from_ckpt: True
  resume_training_from_ckpt: False
use_weights: True
classes:  # k = value in MSK : v = [weight, name]
  1: [1, 'parking_lots']
  2: [1, 'other']
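To make the head-replacement behaviour more concrete, here is a generic PyTorch sketch of the idea. It is not the tool's actual code: the smp.Unet model, the checkpoint layout and the key names are assumptions for illustration only.

# Generic illustration: reuse pretrained backbone weights, re-initialize the segmentation head.
# The smp.Unet model, checkpoint path and key prefix are assumptions, not the tool's internals.
import torch
import segmentation_models_pytorch as smp

num_classes = 2                                    # parking_lots / other, as in the config above
model = smp.Unet(encoder_name="resnet34", classes=num_classes)

ckpt = torch.load("path/to/the/pretrained/model", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)          # handle both raw and Lightning-style checkpoints

# Drop the pretrained head: its output size (the original landcover classes) no longer matches
# the new 2-class head, so only the encoder/decoder weights are reused.
backbone_only = {k: v for k, v in state_dict.items() if not k.startswith("segmentation_head")}
missing, unexpected = model.load_state_dict(backbone_only, strict=False)
print("layers left at random init:", missing)

Because the head is rebuilt for the classes listed in the config, the prediction with the two-class config above should directly be the binary parking / not-parking map, rather than a 20th class added to the original 19.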
Hope this helps, let me know if you need further assistance!