hi, what 'convq_layer' means in net_pruner.py and net_skipper.py? #22
Comments
Hello, those scripts are deprecated and no longer serve any purpose. The LOWERED_CCNMM …
@wenwei202 thanks for your help. cifar10_full_ssl_200000.caffemodel sparsity
inference times (batch_size=32)
Why is the inference time much higher when conv_mode: LOWERED_CCNMM, and why don't I see the inference time drop when using cifar10_full_ssl_200000.caffemodel?
To duplicate the results, please refer here for how I measured speed. I only counted the time of the matrix-matrix multiplication and excluded everything else. For example, in CPU mode, the lowering process …
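A minimal sketch of that measurement approach, using NumPy as a stand-in for the lowered convolution (the shapes, the `im2col` helper, and all names here are illustrative, not the repo's actual code): the input is first lowered into a matrix, and only the subsequent matrix-matrix multiplication is timed.

```python
import time
import numpy as np

def im2col(x, k):
    # Lower a (C, H, W) input into a (C*k*k, out_h*out_w) matrix
    # so that convolution becomes a single matrix-matrix multiply.
    C, H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((C * k * k, out_h * out_w))
    idx = 0
    for c in range(C):
        for i in range(k):
            for j in range(k):
                cols[idx] = x[c, i:i + out_h, j:j + out_w].reshape(-1)
                idx += 1
    return cols

# Illustrative shapes: 32 filters, 16 input channels, 3x3 kernel, 32x32 input
W_mat = np.random.randn(32, 16 * 3 * 3)
x = np.random.randn(16, 32, 32)

cols = im2col(x, 3)          # lowering: excluded from the timing
t0 = time.perf_counter()
y = W_mat @ cols             # only the matrix-matrix multiplication is timed
t1 = time.perf_counter()
print(f"GEMM time: {(t1 - t0) * 1e3:.3f} ms, output shape {y.shape}")
```

This mirrors the point above: in CPU mode the lowering itself can dominate, so timing only the GEMM isolates the speedup that sparsity can actually deliver.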
@wenwei202 thanks for your help. I get it now.
hi,
In my opinion there should be some Python scripts that remove all-zero-weight filters (row sparsity) directly, to accelerate GPU inference without any CPU subroutines. Are net_pruner.py and net_skipper.py meant for that? Or can you give me some advice?
Also, I cannot figure out what 'convq_layer' and 'convq_param_key' mean in net_pruner.py and net_skipper.py; for example, there is obviously no 'conv1q' key in src_net.params.
Thanks a lot for your help!
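Those exact scripts aside, the pruning idea in the question can be sketched with NumPy. This is a hypothetical illustration, not the repo's implementation: the function names and the tolerance are my own, and a real pruner would also have to remove the matching input channels of the next layer.

```python
import numpy as np

def find_zero_filters(weights, tol=1e-8):
    """Return indices of output filters whose weights are entirely (near) zero.

    `weights` is a Caffe-style conv blob of shape (num_output, channels, kh, kw);
    row sparsity means a whole filter (one output channel) is zero.
    """
    flat = weights.reshape(weights.shape[0], -1)
    return np.where(np.abs(flat).max(axis=1) <= tol)[0]

def prune_filters(weights, bias=None, tol=1e-8):
    """Drop all-zero filters and return the pruned blob plus surviving indices."""
    zero = find_zero_filters(weights, tol)
    keep = np.setdiff1d(np.arange(weights.shape[0]), zero)
    pruned_w = weights[keep]
    pruned_b = bias[keep] if bias is not None else None
    return pruned_w, pruned_b, keep

# Example: 4 filters, two of them all-zero
w = np.random.randn(4, 3, 3, 3)
w[1] = 0.0
w[3] = 0.0
pw, pb, keep = prune_filters(w)
print(keep)        # indices of surviving filters
print(pw.shape)    # pruned weight blob shape
```

Because the pruned model is just a smaller dense network, it would speed up GPU inference directly, with no CPU-side sparse subroutines, which is exactly what the question asks about.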