
using any regularization causes ever increasing loss #34

Open
aradar opened this issue Nov 17, 2018 · 5 comments

Comments

@aradar

aradar commented Nov 17, 2018

Issue summary

Hi @wenwei202,

I'm currently trying to train a sparse network through SSL, but I have some big issues getting the training to converge. As soon as I add any kind of regularization (L1, L2, your SSL), the loss increases and the training diverges. This happens even if I set the weight_decay to something like 0.0000001.

The following log shows the behavior when trying to train the resnet baseline example from your cifar10 readme.

./examples/cifar10/train_script.sh 0.1 0.00001 0.0 0.0 0.0 0 template_resnet_solver.prototxt

I1117 11:30:51.336390   896 solver.cpp:348] Iteration 0, Testing net (#0)
I1117 11:30:52.452332   896 solver.cpp:415]     Test net output #0: accuracy = 0.1
I1117 11:30:52.452364   896 solver.cpp:415]     Test net output #1: loss = 87.3365 (* 1 = 87.3365 loss)
I1117 11:30:52.624837   896 solver.cpp:231] Iteration 0, loss = 3.50511
I1117 11:30:52.624869   896 solver.cpp:247]     Train net output #0: loss = 3.50511 (* 1 = 3.50511 loss)
I1117 11:30:52.624882   896 sgd_solver.cpp:106] Iteration 0, lr = 0.1
I1117 11:30:52.653563   896 solver.cpp:260]     Total regularization terms: 2504.25 loss+regular. : 2507.76
I1117 11:31:22.397892   896 solver.cpp:231] Iteration 200, loss = 1.52217
I1117 11:31:22.398046   896 solver.cpp:247]     Train net output #0: loss = 1.52217 (* 1 = 1.52217 loss)
I1117 11:31:22.398053   896 sgd_solver.cpp:106] Iteration 200, lr = 0.1
I1117 11:31:22.443342   896 solver.cpp:260]     Total regularization terms: 2.1337e+09 loss+regular. : 2.1337e+09
I1117 11:31:52.203909   896 solver.cpp:231] Iteration 400, loss = 1.31369
I1117 11:31:52.203939   896 solver.cpp:247]     Train net output #0: loss = 1.31369 (* 1 = 1.31369 loss)
I1117 11:31:52.203946   896 sgd_solver.cpp:106] Iteration 400, lr = 0.1
I1117 11:31:52.249099   896 solver.cpp:260]     Total regularization terms: 7.16458e+09 loss+regular. : 7.16458e+09

Do you have any idea what could cause this behavior, or how I could fix it?

Steps to reproduce

Training any net with regularization enabled.

Your system configuration

Operating system: Ubuntu 16.04 or Arch
Compiler: gcc5.4 (Ubuntu) and gcc5.5 (Arch)
CUDA version (if applicable): 8.0
CUDNN version (if applicable): 5
BLAS: Atlas
Python or MATLAB version (for pycaffe and matcaffe respectively): 3.5 (Ubuntu) 3.6 (Arch)

@wenwei202
Owner

This is a little weird! Are you able to train the baseline without any regularization? Caffe is relatively old, and you should consider switching to other frameworks like PyTorch.

@aradar
Author

aradar commented Nov 17, 2018

Yeah, that's really weird. Training without regularization leads to a useful loss and reasonably good accuracy (~90%). But I honestly don't understand why any of the standard regularization methods would cause this behavior. I am currently fine-tuning this baseline with SSL to see where that goes. It also started with a really high loss (e+12) and is currently working its way down (e+10).

I know that Caffe is getting old, but I am currently working on my bachelor thesis, in which I am comparing sparsification methods, and I think SSL is a really interesting approach because you don't need specialized hardware to get an acceleration from it.

@wenwei202
Owner

@aradar Thanks for your interest in SSL. SSL can easily be applied to frameworks that support autograd, such as TensorFlow and PyTorch. You just need to add the group Lasso regularization to the cross entropy, and that's it. We have an RNN code for your reference.
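For anyone landing here later, the objective described above (cross entropy plus a group Lasso term, lambda * sum_g ||W_g||_2, over structured weight groups such as filters) can be sketched framework-agnostically. This is a minimal illustration, not the repo's code: the function name `group_lasso_penalty`, the toy groups, and the `lam` value are all made up for the example.

```python
import math

def group_lasso_penalty(groups, lam):
    """Group Lasso regularizer: lam * sum over groups of the
    L2 norm of each group's weights. Each group could be one
    conv filter's weights flattened into a list."""
    return lam * sum(math.sqrt(sum(w * w for w in g)) for g in groups)

# Toy example: two "filters"; the second has been driven to zero,
# so it contributes nothing to the penalty.
filters = [[3.0, 4.0], [0.0, 0.0]]
penalty = group_lasso_penalty(filters, lam=0.01)  # 0.01 * (5.0 + 0.0) = 0.05

# Total objective = data loss + structured regularization:
cross_entropy = 1.52  # placeholder value
total_loss = cross_entropy + penalty
```

In an autograd framework you would compute the same sum over `model.parameters()` and add it to the loss tensor before calling backward.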

@aradar
Author

aradar commented Nov 18, 2018

Thank you for the reference code! I will look into implementing it myself for tensorflow.

But wouldn't I also have to implement sparse convolution ops for TensorFlow to get the speedup on a normal GPU and CPU?

@wenwei202
Owner

@aradar If you remove whole structures (such as filters and channels), then you won't have to. You just need to create a smaller DNN with the learned structures (such as fewer filters and channels) and initialize it with the learned non-zero weights.
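The step above (keeping only the structures that survived SSL) amounts to selecting the filters whose norm is non-zero and copying their weights into a smaller dense net. A rough sketch, with made-up names and a made-up tolerance:

```python
import math

def surviving_filter_indices(filters, tol=1e-8):
    """Return indices of filters whose L2 norm exceeds tol,
    i.e. the structures that SSL did not zero out."""
    return [i for i, f in enumerate(filters)
            if math.sqrt(sum(w * w for w in f)) > tol]

# Toy layer with three filters; SSL zeroed out the first and
# left the third numerically negligible.
filters = [[0.0, 0.0], [0.5, -0.2], [0.0, 1e-12]]
kept = surviving_filter_indices(filters)      # -> [1]

# Weights for the smaller dense layer (no sparse ops needed):
compact_filters = [filters[i] for i in kept]
```

Because the pruned net is plain dense with fewer filters/channels, standard dense convolution kernels give the speedup directly.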
