How to evaluate distribution based metrics after the training is finished? #319
-
Hi, Thank you
-
Hi, @Ghaleb-alnakhlani.

Your first question is not quite clear to me. DB metrics are used to compute the distance between two distributions of data. In your case, these two distributions are presumably defined by two sets of images: generated and real ones. To compute a DB metric score (e.g. FID), you first need to compute features from these images, either with your own feature extractor or with one of the extractors provided in the library. For the latter, each DB metric has a compute_feats method designed to help you do that. After that, you can pass these features to the metric class to get the resulting score, as shown in the example script. This can be done regardless of training status.

Regarding your second question, we do not provide any visualisations and we do not have any plans to add this functionality in the foreseeable future.
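To make that workflow concrete, here is a minimal sketch. It assumes the library in question is piq, that its DB metrics (such as FID) expose a compute_feats method taking a DataLoader whose batches are dicts with an 'images' key, and it uses random tensors as stand-ins for your real and generated images:

```python
import torch
from torch.utils.data import DataLoader, Dataset
import piq


class ImageDictDataset(Dataset):
    """Hypothetical wrapper: yields batches shaped as dicts with an
    'images' key holding (N, 3, H, W) float tensors in [0, 1]."""

    def __init__(self, images: torch.Tensor) -> None:
        self.images = images

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, idx: int) -> dict:
        return {'images': self.images[idx]}


# Stand-in tensors; replace with your real and generated images.
real_images = torch.rand(64, 3, 299, 299)
fake_images = torch.rand(64, 3, 299, 299)

real_loader = DataLoader(ImageDictDataset(real_images), batch_size=16)
fake_loader = DataLoader(ImageDictDataset(fake_images), batch_size=16)

fid_metric = piq.FID()

# Step 1: extract features with the library's built-in extractor
# (this runs independently of any training loop).
real_feats = fid_metric.compute_feats(real_loader, device='cpu')
fake_feats = fid_metric.compute_feats(fake_loader, device='cpu')

# Step 2: pass the features to the metric class to obtain the score.
fid_score = fid_metric(real_feats, fake_feats)
print(f'FID: {fid_score.item():.4f}')
```

The same two-step pattern (compute_feats on each loader, then call the metric on the two feature tensors) applies to the other DB metrics; only the metric class changes.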