These changes will make it easier to port over pretrained weights for the models from PyTorch.
Right now, GoogLeNet matches the implementation in the paper, which does not use batch normalisation. However, torchvision uses batch normalisation layers after the convolution layers in both the inception block and the stem. It also uses `bias = false` for the convolution layer and `eps = 0.001` for the batch normalisation layer. We could decide whether, similar to VGG, to have a toggle for the batch normalisation in the model. (closed by Tweak GoogLeNet to match the torchvision implementations #205)
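The toggle described above could be sketched roughly as follows in Flux. This is a hypothetical illustration, not Metalhead's actual API: the function name `conv_block` and the `batchnorm` keyword are assumptions; only the `bias = false` and `eps = 0.001` settings come from the torchvision behaviour described in this issue.

```julia
using Flux

# Hypothetical sketch: a convolution block with an optional batch-norm toggle,
# mirroring torchvision's BasicConv2d (Conv with bias = false followed by
# BatchNorm with eps = 0.001), versus the paper variant without batch norm.
function conv_block(kernel_size, inplanes, outplanes; batchnorm = true, kwargs...)
    if batchnorm
        # torchvision-style: no conv bias, BatchNorm with eps = 1e-3, then ReLU
        return Chain(Conv(kernel_size, inplanes => outplanes; bias = false, kwargs...),
                     BatchNorm(outplanes, relu; ϵ = 1.0f-3))
    else
        # Paper variant: plain biased convolution with ReLU, no batch norm
        return Conv(kernel_size, inplanes => outplanes, relu; kwargs...)
    end
end
```

With `batchnorm = true` the block matches torchvision's layout, so pretrained PyTorch weights could be copied over layer by layer; with `batchnorm = false` it falls back to the paper variant.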
A toggle would be a good idea so that the paper variant is still easily available.
theabhirath changed the title from "Tweak GoogLeNet and InceptionV3 to match the torchvision implementations" to "Tweak GoogLeNet and Inception family to match the torchvision implementations" on Aug 11, 2022.
Hi, this issue seems like a good place to start contributing.
```julia
function convolution(kernel_size, inplanes, outplanes, batchNorm; kwargs...)
    if batchNorm
        return basic_conv_bn(kernel_size, inplanes, outplanes; kwargs...)
    else
        return Conv(kernel_size, inplanes => outplanes; kwargs...)
    end
end
```
Would implementing such a function and then calling it like `convolution((7, 7), inchannels, 64, batchNorm; stride = 2, pad = 3, bias = bias)` be a good way to refactor the code for this particular issue?
Or do you have something else in mind?
We already have support for turning off the batch norm here. The remaining task on this issue is to update the code for GoogLeNet to use that functionality.
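A minimal sketch of what updating the GoogLeNet stem might look like, reusing the `convolution` helper proposed earlier in this thread. The function name `googlenet_stem` and its exact layer layout are assumptions for illustration, not the actual Metalhead code:

```julia
using Flux

# Hypothetical sketch: the first stage of the GoogLeNet stem, routed through the
# batch-norm-aware convolution helper proposed above. When batchnorm is true the
# helper is assumed to drop the conv bias (torchvision behaviour); when false it
# keeps it (paper behaviour).
function googlenet_stem(inchannels; batchnorm = true)
    return Chain(convolution((7, 7), inchannels, 64, batchnorm;
                             stride = 2, pad = 3, bias = !batchnorm),
                 MaxPool((3, 3); stride = 2, pad = 1))
end
```

The same pattern would then be repeated for the convolutions inside the inception blocks, so a single `batchnorm` flag on the model constructor controls both the stem and the blocks.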
`bias = false` for the convolution layer and `eps = 0.001` for the batch normalisation layer. (should be closed by Implementation of EfficientNetv2 and MNASNet #198)