
Evaluation results are inconsistent during training and after saving the trained model #405

Open
YUXIN-commit opened this issue Dec 18, 2024 · 1 comment

Comments

@YUXIN-commit

Hi MONAI Team,
First of all, thumbs up for your great work. I am facing inconsistent results from the SwinUNETR model between the training and testing phases.
During training I get good results, but after saving the trained SwinUNETR model and evaluating it, the results are very poor. I didn't change anything: I just downloaded your GitHub repo and trained the SwinUNETR model, yet the results are inconsistent. I am attaching screenshots from training the model for 100 epochs. Please help me figure out the issue. Waiting for your response, and thank you in advance.

Training results:
[screenshot: training metrics]

Results after saving and reloading the trained SwinUNETR model are very poor:
[screenshot: post-reload evaluation metrics]
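A common cause of a gap like this (not necessarily the cause here) is evaluating the reloaded network without switching it to inference mode, so dropout and batch normalization keep behaving as they do during training. A minimal sketch in plain PyTorch, using a hypothetical stand-in network rather than the actual SwinUNETR pipeline, shows that a reloaded `state_dict` reproduces the original outputs once both models are in `eval()` mode:

```python
# Minimal sketch of the save/reload/eval round trip (plain PyTorch,
# stand-in network, NOT the actual SwinUNETR training script).
import os
import tempfile

import torch
import torch.nn as nn

torch.manual_seed(0)


def make_net() -> nn.Sequential:
    # Layers whose behavior differs between train and eval mode.
    return nn.Sequential(
        nn.Conv2d(1, 4, kernel_size=3, padding=1),
        nn.BatchNorm2d(4),
        nn.ReLU(),
        nn.Dropout(0.5),
    )


model = make_net()
x = torch.randn(2, 1, 8, 8)

# Save only the state_dict, as the MONAI tutorials do.
ckpt = os.path.join(tempfile.mkdtemp(), "model.pt")
torch.save(model.state_dict(), ckpt)

# Reload into a freshly constructed network of the same architecture.
reloaded = make_net()
reloaded.load_state_dict(torch.load(ckpt))

# Both models must be in eval mode for inference; otherwise dropout stays
# active and batch norm uses per-batch statistics, degrading the metrics.
model.eval()
reloaded.eval()

with torch.no_grad():
    same = torch.allclose(model(x), reloaded(x))
print("reloaded model matches in eval mode:", same)
```

If the reloaded model only misbehaves in your full pipeline, it is also worth checking that `load_state_dict` is called with `strict=True` (the default) so that silently missing or mismatched keys raise an error instead of leaving layers randomly initialized.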

@YUXIN-commit
Author

I encountered another issue: too many false positives on the test data when using test.py. Has anyone encountered this problem before? Could you please let me know how to resolve it?

As shown in the image:
[screenshot: test predictions with false positives]
