Softmax and log-softmax no longer applied in models. #239
Softmax and log-softmax are no longer applied in the models; they are now applied in the evaluation and training scripts. This was done by using `nn.CrossEntropyLoss` rather than `nn.NLLLoss`. Class predictions are still valid when taking the max/argmax of logits rather than of probabilities, so we can use logits to compute evaluation accuracy and IoU.
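For reference, a minimal sketch (illustrative, not code from this PR) of why the two losses agree on raw logits and why argmax-based predictions are unaffected:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(2, 19, 64, 64)          # (batch, classes, H, W) segmentation logits
labels = torch.randint(0, 19, (2, 64, 64))   # per-pixel ground-truth class indices

# Before: the model applied log-softmax and the script used nn.NLLLoss.
loss_old = nn.NLLLoss()(F.log_softmax(logits, dim=1), labels)
# After: the model outputs logits and the script uses nn.CrossEntropyLoss,
# which fuses log-softmax and NLL internally.
loss_new = nn.CrossEntropyLoss()(logits, labels)
assert torch.allclose(loss_old, loss_new)

# Softmax is monotonic along the class dimension, so predictions are unchanged.
assert torch.equal(logits.argmax(dim=1), F.softmax(logits, dim=1).argmax(dim=1))
```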
Furthermore, I've changed the decoders so that rather than using the `use_softmax` flag to determine whether we are in inference mode, we apply the interpolation when the `segSize` parameter is provided; in your code this is only done at inference. Also, the decoders now return a dict, with the `'logits'` key giving the predicted logits and, when using deep-supervision decoders, the `'deepsup_logits'` key giving the logits for deep supervision.

The motivation for this is that some uses of semantic segmentation models require losses other than the softmax/log-softmax used in supervised training. Moving this out of the model classes makes them useful in a wider variety of circumstances. Specifically, I want to test a PSPNet in my semi-supervised work here: https://github.com/Britefury/cutmix-semisup-seg. I use a variety of unsupervised loss functions, hence my preference for models that output logits which can be processed in a variety of ways.
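For illustration, here is a rough sketch of how a training and an inference step might consume the new decoder interface. The names `encoder`, `decoder`, `images`, and `labels` are hypothetical, and the deep-supervision weight and `ignore_index` value are assumptions rather than values taken from this PR:

```python
import torch.nn as nn

criterion = nn.CrossEntropyLoss(ignore_index=-1)  # assumed ignore label

def training_step(encoder, decoder, images, labels, deepsup_scale=0.4):
    # Training: segSize is not passed, so the decoder skips the
    # inference-time interpolation and returns logits at training resolution.
    out = decoder(encoder(images))
    loss = criterion(out['logits'], labels)
    if 'deepsup_logits' in out:  # present only for deep-supervision decoders
        loss = loss + deepsup_scale * criterion(out['deepsup_logits'], labels)
    return loss

def inference_step(encoder, decoder, images, seg_size):
    # Inference: passing segSize triggers interpolation to the target size;
    # argmax over the logits gives per-pixel class predictions directly.
    out = decoder(encoder(images), segSize=seg_size)
    return out['logits'].argmax(dim=1)
```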