I want to help here, and I'm ready to start playing with custom losses for Segformers. We simply need to make our own version of TFSegformerForSemanticSegmentation with a new loss (or loss options passed as arguments), correct?
I'm looking at our existing Dice loss function and checking whether it is consistent with the existing class and loss-function call. Key differences:

- The built-in loss uses `from_logits=True, reduction="none"`. Our Dice loss comes "already reduced", via `(tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)`, but that's OK: the mask reduction occurs over the batch.
- The loss is called as `loss = self.hf_compute_loss(logits=logits, labels=labels)`, so we need a function that can take those inputs.
- Our function requires `nclasses`, which would need to be passed to the new class.
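Putting those constraints together, a Dice loss with the `(logits, labels)` signature might look like this sketch (function and argument names are assumptions; the `smooth`-term reduction mirrors the one quoted above):

```python
import tensorflow as tf


def dice_loss(logits, labels, num_classes, smooth=1e-6):
    """Dice loss over softmax probabilities (sketch).

    logits: (batch, h, w, num_classes) raw model outputs (from_logits-style).
    labels: (batch, h, w) integer class ids.
    """
    probs = tf.nn.softmax(logits, axis=-1)
    # One-hot encode the integer labels so shapes match the logits.
    onehot = tf.one_hot(tf.cast(labels, tf.int32), depth=num_classes)
    y_true_f = tf.reshape(onehot, [-1])
    y_pred_f = tf.reshape(probs, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    # "Already reduced": one scalar over the whole batch, not per-pixel.
    return 1.0 - (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth
    )
```

`num_classes` would be bound at construction time (e.g. via `functools.partial` or a closure) so the call site can stay `self.hf_compute_loss(logits=logits, labels=labels)`.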
See this discussion re: specifying a loss (and/or metric) for the Segformer model:
huggingface/transformers#22092
These lines of code provide an upsampling template: https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/segformer/modeling_tf_segformer.py#L793-L811
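The core of that template is a bilinear resize of the logits up to the label resolution before the loss is computed. A minimal sketch (assuming channels-last logits here; the actual TF Segformer code transposes from channels-first before resizing):

```python
import tensorflow as tf


def upsample_logits(logits, labels):
    """Resize low-resolution logits to the labels' spatial size (sketch).

    logits: (batch, h/4, w/4, num_classes), channels-last (assumption).
    labels: (batch, H, W) integer masks defining the target size.
    """
    target_hw = tf.shape(labels)[1:3]
    # Bilinear interpolation, as in the linked modeling_tf_segformer.py lines.
    return tf.image.resize(logits, size=target_hw, method="bilinear")
```

The resized logits can then be fed straight into a `(logits, labels)`-style loss.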