Thanks to PyCharm's unused-variable highlighting, I saw that tensor2tensor's revnet model has a pretty serious problem in the `bottleneck = False` code path. Notice that the value of `net` on line 115 is unused. (Line 117 stomps on the prior value of `net`.)

So this is an insidious bug. Due to how subtle the problem is, no one's noticed it for ~3.5 years. The code correctly created all of the variables, but then incorrectly used `x` as input in the middle of the block.
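Here is a minimal sketch of the dataflow mistake (not the actual tensor2tensor source; `norm_relu_conv` is a hypothetical stand-in for the batch-norm → relu → conv sequence in the `bottleneck = False` branch):

```python
# Hypothetical stand-in for the batch-norm -> relu -> conv sequence; it just
# tags the value so the dataflow is visible when printed.
def norm_relu_conv(t):
    return f"conv({t})"

def residual_fn_buggy(x):
    net = norm_relu_conv(x)    # first conv's output is assigned to `net`...
    net = norm_relu_conv(x)    # ...but the next step reads `x` again, so the
    return net                 # first result is silently dropped

def residual_fn_fixed(x):
    net = norm_relu_conv(x)    # first conv
    net = norm_relu_conv(net)  # second conv now consumes the first's output
    return net

print(residual_fn_buggy("x"))  # conv(x)        -- only one conv is applied
print(residual_fn_fixed("x"))  # conv(conv(x))  -- both convs are applied
```

In the real model the first conv's variables are still created, so nothing crashes and checkpoints look normal, but its output never flows into the rest of the block, which is why the mistake was so easy to miss.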
tensor2tensor's `revnet_38_cifar` and `revnet_110_cifar` models use `hparams.bottleneck = False`, which means they've been affected by this bug. If tensor2tensor released official versions of those models, you'll need to retrain them from scratch or remove them. Merging this PR will break those trained models, since they were trained with the incorrect code (and thus rely on the incorrect behavior at inference time).