Adding capability to choose device other than cpu and fixing/generalize #96
Reference Issues/PRs
What does this implementation fix?
Run on GPU: Modern architectures are so deep and computationally intensive that running them on CPU alone can be prohibitively slow or run out of memory. In this implementation, I have changed grad-cam, guided-backprop, integrated-gradients, layer-activation-with-guided-backprop, score-cam, and vanilla-backprop so they can be initialized with a desired device id. The model's forward pass and backward gradient computations can then run on GPU. If no device is provided at initialization, the code simply follows the existing CPU behaviour.
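A minimal sketch of the idea, using vanilla-backprop as the example. The class and method names here are illustrative, not the exact API of this PR; the point is the optional `device` argument that defaults to CPU:

```python
import torch
import torch.nn as nn

class VanillaBackprop:
    """Minimal vanilla-backprop saliency with an optional `device` argument.

    If no device is given, everything stays on CPU, matching the
    pre-existing behaviour.
    """

    def __init__(self, model, device=None):
        self.device = torch.device(device) if device is not None else torch.device("cpu")
        self.model = model.to(self.device).eval()

    def generate_gradients(self, input_image, target_class):
        # Move the input to the same device as the model before the forward pass
        input_image = input_image.to(self.device).requires_grad_(True)
        output = self.model(input_image)
        self.model.zero_grad()
        one_hot = torch.zeros_like(output)
        one_hot[0, target_class] = 1
        output.backward(gradient=one_hot)
        # Return gradients on CPU so downstream code stays device-agnostic
        return input_image.grad.detach().cpu()
```

Passing e.g. `device="cuda:0"` at initialization moves both the model and each input to that GPU; omitting it keeps the current CPU-only flow.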
Introduction of forward hooks: In PyTorch, the modules a model class declares and the operations its forward function performs need not match. For example, designers often apply flatten and concat only inside forward(), rather than registering them as modules of the class. So in the CAM extractors, the forward_pass_on_convolutions function, which loops over the model's layers until the desired layer is reached, may not reproduce the model's actual forward behaviour. Instead, we register a forward hook on the desired conv layer and let the model complete its own forward pass without intervention. This yields both the conv layer's output and the correct output of the entire model in a far more general way, with no dependency on the structure of the forward function.
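The hook approach above can be sketched as follows. This is a simplified illustration (the function name and the toy model are mine, not the PR's code); it shows why the hook works even when forward() contains an op, such as flatten, that is not a module of the class:

```python
import torch
import torch.nn as nn

def conv_output_and_logits(model, target_layer, x):
    """Run the model's own forward() untouched and capture `target_layer`'s
    output with a forward hook, instead of manually looping over layers."""
    captured = {}

    def hook(module, inputs, output):
        captured["activation"] = output

    handle = target_layer.register_forward_hook(hook)
    try:
        logits = model(x)  # full, unmodified forward pass
    finally:
        handle.remove()  # always detach the hook afterwards
    return captured["activation"], logits

class TinyNet(nn.Module):
    """Toy model whose flatten lives only in forward(), not in the class."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 2, 3)
        self.fc = nn.Linear(2 * 6 * 6, 5)

    def forward(self, x):
        x = self.conv(x)
        # flatten is performed here, so looping over child modules
        # would never see it -- the hook sidesteps the problem
        return self.fc(torch.flatten(x, 1))
```

Calling `conv_output_and_logits(net, net.conv, x)` returns the conv activation and the final logits from a single, ordinary forward pass.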
Other comments