Hi authors,

I have tested the performance of CutBlur once, using:

python main.py --model CARN --augs cutblur --alpha 0.7 --dataset RealSR --scale 4 --camera all --dataset_root ./input/RealSR/ --ckpt_root ./pt/RealSR/cutblur/ --save_result --save_root ./output/RealSR/cutblur/

The result I obtain is 28.89, which is lower than the 29.00 reported in the paper. Is 29.00 the average over multiple trained models?
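For context, the 28.89 vs. 29.00 numbers are PSNR values in dB. A minimal sketch of the standard PSNR definition (10·log10(MAX²/MSE)); the function name and the 255 peak value are my assumptions, not taken from this repo's evaluation code:

```python
import numpy as np

def psnr(ref, out, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an output image."""
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Note that a 0.11 dB gap corresponds to only a small relative difference in MSE, which is why run-to-run variance can plausibly explain it.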
On the RealSR dataset, we have observed that early stopping is crucial for the best performance. Overfitting appears to worsen when the network is bigger (e.g., EDSR, RCAN) and, unsurprisingly, when no data augmentation is used.
Is the 28.89 PSNR from the "best" checkpoint or the "last" one? By the way, all reported performance numbers are measured from a single run.
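For readers following along, the `--augs cutblur --alpha 0.7` flags refer to the CutBlur augmentation: a rectangular patch is cut and pasted between the HR image and the bicubically upsampled LR image (which share the same shape). A minimal NumPy sketch under those assumptions; the function signature and the fixed (rather than randomly jittered) cut ratio are simplifications, not this repo's exact implementation:

```python
import numpy as np

def cutblur(hr, lr_up, alpha=0.7, rng=None):
    """Sketch of CutBlur: swap a random rectangle between the HR image and the
    upsampled LR image, in a randomly chosen direction. `alpha` sets the
    side-length ratio of the cut region (simplified to a fixed value here)."""
    rng = rng or np.random.default_rng()
    h, w = hr.shape[:2]
    ch, cw = int(h * alpha), int(w * alpha)
    cy = int(rng.integers(0, h - ch + 1))
    cx = int(rng.integers(0, w - cw + 1))
    if rng.random() < 0.5:
        # paste an HR patch into the LR input ("LR -> HR" direction)
        inp = lr_up.copy()
        inp[cy:cy + ch, cx:cx + cw] = hr[cy:cy + ch, cx:cx + cw]
    else:
        # paste an LR patch into the HR image and use that as the input
        inp = hr.copy()
        inp[cy:cy + ch, cx:cx + cw] = lr_up[cy:cy + ch, cx:cx + cw]
    return inp
```

Either way, every pixel of the augmented input comes from one of the two source images, so the network sees mixed-resolution content within a single sample.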