Thanks to the authors for this great project. I have a question about how the absolute position embedding is applied, as shown in this line:
self.pos_embed = nn.Parameter(torch.zeros(1, channels, 14, 14))
https://github.com/huawei-noah/Efficient-AI-Backbones/blob/master/vig_pytorch/vig.py#L110
It is initialized as zeros, and it does not seem to be updated during training.
It is used in Efficient-AI-Backbones/vig_pytorch/vig.py, line 140 (commit 9172762). It will be updated during training.
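For reference, here is a minimal sketch (not the repository's exact code; the `TinyStem` module and its layer sizes are hypothetical, assuming a 14×14 patch grid from a 224×224 input) showing why the zero initialization is not a problem: a zero-initialized `nn.Parameter` is still registered as a trainable parameter, receives gradients, and is applied by broadcast addition to the feature map.

```python
import torch
import torch.nn as nn

class TinyStem(nn.Module):
    """Hypothetical stand-in for the patch stem, to illustrate the mechanism."""
    def __init__(self, channels=192):
        super().__init__()
        # 224x224 input -> 14x14 feature map
        self.proj = nn.Conv2d(3, channels, kernel_size=16, stride=16)
        # Zero init only sets the starting values; the tensor is still a leaf
        # parameter with requires_grad=True, so the optimizer will update it.
        self.pos_embed = nn.Parameter(torch.zeros(1, channels, 14, 14))

    def forward(self, x):
        x = self.proj(x)
        # Absolute position embedding applied by broadcast addition over the batch.
        return x + self.pos_embed

model = TinyStem()
# The embedding is registered and will be passed to the optimizer:
print(any(p is model.pos_embed for p in model.parameters()))  # True

# It receives non-zero gradients, so it is updated during training:
model(torch.randn(2, 3, 224, 224)).sum().backward()
print(model.pos_embed.grad.abs().sum() > 0)  # True
```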