Why does testing the same abnormal image with a trained model give different results (AUC) each time? How can I solve this problem? Also, when a normal sample is placed into the abnormal folder for testing, the test results are bad as well.
I found that in the testing code (lib/model.py, line 175), the network is not switched to evaluation mode: the calls to self.netg.eval() and self.netd.eval() are missing. As a result, the model keeps updating the running mean and variance of its BatchNorm2d layers during testing.
I think that is why the test results change on every run.
Additionally, the missing net.eval() inflates the reported performance, because the model effectively adapts its normalization statistics to the test set. So this bug can lead to unconvincing results.
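To make this concrete, here is a minimal, self-contained sketch (not code from this repository) of why a forgotten eval() makes test scores unstable: in training mode, BatchNorm2d normalizes each sample with the statistics of its current batch and keeps updating the running mean/var from test data, so the same image produces different outputs depending on its batch-mates.

```python
import torch
import torch.nn as nn

# Illustration only: a single BatchNorm2d layer standing in for the ones
# inside netg/netd. The behaviour shown here is generic PyTorch, not the
# repository's actual test loop.
torch.manual_seed(0)
bn = nn.BatchNorm2d(3)
sample = torch.randn(1, 3, 8, 8)                    # the "same abnormal image"
batch_a = torch.cat([sample, torch.randn(3, 3, 8, 8)])
batch_b = torch.cat([sample, torch.randn(3, 3, 8, 8)])

bn.train()                                          # eval() forgotten
out_a = bn(batch_a)[0]
out_b = bn(batch_b)[0]
print(torch.allclose(out_a, out_b))                 # False: same image, different output

bn.eval()                                           # the fix: freeze the statistics
out_a = bn(batch_a)[0]
out_b = bn(batch_b)[0]
print(torch.allclose(out_a, out_b))                 # True: deterministic test output
```

So the fix would be to call self.netg.eval() and self.netd.eval() (and ideally wrap the loop in torch.no_grad()) before iterating over the test set.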