As discussed in Slack, having an option to ignore the cache can be useful sometimes.
In particular, if I run a model someone has already run, the result is returned from the cache. However, the runtime isn't fetched (or was it never cached?), yet the result is still shown in the leaderboard. If the original run is in the leaderboard, the entry is duplicated; otherwise it appears only once, but without a runtime. Both outcomes seem undesirable to me.
The cache should probably be attached to a repository, and there should be a way to either clear it or ignore it for a single run, for example with a "Re-run without cache" button.
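The proposal above could look roughly like the following sketch: a cache keyed per repository and model, with an explicit bypass flag for a "Re-run without cache" action and repository-scoped clearing. All names here (`ResultCache`, `use_cache`, etc.) are illustrative assumptions, not the actual sotabench API.

```python
# Hypothetical sketch of a per-repository result cache with an explicit
# bypass flag; names are illustrative, not the real sotabench internals.
from typing import Optional


class ResultCache:
    def __init__(self) -> None:
        # Keyed by (repository, model) so clearing can be repo-scoped.
        self._store: dict[tuple[str, str], dict] = {}

    def get(self, repo: str, model: str, use_cache: bool = True) -> Optional[dict]:
        # A "Re-run without cache" button would map to use_cache=False,
        # forcing a cache miss and therefore a fresh evaluation run.
        if not use_cache:
            return None
        return self._store.get((repo, model))

    def put(self, repo: str, model: str, result: dict) -> None:
        self._store[(repo, model)] = result

    def clear(self, repo: str) -> None:
        # Clearing is scoped to a single repository, as proposed above.
        for key in [k for k in self._store if k[0] == repo]:
            del self._store[key]
```

With this shape, a normal run hits the cache, while a forced re-run both bypasses the stale entry and overwrites it with fresh results (including the runtime).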
Randl added a commit to Randl/TResNet that referenced this issue on May 4, 2020.
The same problem occurs if a build fails halfway; see for example https://sotabench.com/user/EvgeniiZh/repos/Randl/ECANet#latest-results
The models measured in the first build have no runtime associated with them, because the runtime was measured in a build that later failed.
A simple band-aid would be to cache the runtime too (though that would probably require invalidating the existing caches).
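The band-aid could be sketched as storing the runtime inside each cache entry and tagging entries with a schema version, so that pre-existing entries (which lack a runtime) read as misses and trigger a fresh run. The entry layout and version constant below are assumptions for illustration only.

```python
# Illustrative sketch: cache entries carry the runtime plus a schema
# version; bumping the version invalidates entries cached before the
# runtime field existed, forcing those models to be re-evaluated.
CACHE_VERSION = 2  # hypothetical: bumped when the runtime field was added


def make_entry(metrics: dict, runtime_s: float) -> dict:
    return {"version": CACHE_VERSION, "metrics": metrics, "runtime_s": runtime_s}


def lookup(cache: dict, key: str):
    entry = cache.get(key)
    if entry is None or entry.get("version") != CACHE_VERSION:
        return None  # missing or stale schema: treat as a miss, re-run
    return entry
```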