Add unpenalized logistic regression #47
@tomMoral, what's our policy here: 3 separate repos (L1, L2, no penalty), or a single one with some solvers skipping configs?
Not quite, the solution is just not unique. A desirable property of the coefficients is then to have minimum L2 norm among all solutions, but that is hardly ever guaranteed by any solver.
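To make the non-uniqueness concrete, here is a minimal numeric sketch on synthetic data (everything in it is an illustrative assumption, not part of the benchmark): duplicating a feature makes the unpenalized argmin a whole line of coefficient vectors, all with the same loss, and only the symmetric split has minimum L2 norm.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=40)
y = np.sign(x)
y[:4] *= -1                      # flip a few labels so the data is not separable
X = np.column_stack([x, x])      # duplicated feature: the argmin is a whole line

def loss(w):
    # Unpenalized logistic loss, mean over samples.
    return np.logaddexp(0.0, -y * (X @ w)).mean()

# The loss only depends on s = w1 + w2, so minimize over s on a coarse grid.
grid = np.linspace(-10.0, 10.0, 4001)
vals = [np.logaddexp(0.0, -y * (x * s)).mean() for s in grid]
c = grid[int(np.argmin(vals))]

# These coefficient vectors all attain the same (minimal) loss...
l1 = loss(np.array([c, 0.0]))
l2 = loss(np.array([0.0, c]))
l3 = loss(np.array([c / 2, c / 2]))
# ...but only the symmetric split has minimum L2 norm.
n1 = np.linalg.norm([c, 0.0])
n3 = np.linalg.norm([c / 2, c / 2])
print(l1, l2, l3, n1, n3)
```

Which point on that line a given solver returns depends on its initialization and trajectory, which is why the minimum-norm property is not something solvers promise.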
Hence there is no minimizer in that case (see the discussion above equation 2 in https://arxiv.org/pdf/1710.10345.pdf). Edit: it's still a very interesting problem.
Thanks for the linked literature.
A good implementation will reach a loss very close to zero, e.g. I got about n_samples * 1e-10 for the 20 news dataset.
Yes, the objective converges to 0, but the iterates are diverging (there is no minimizer, i.e. the argmin is empty and the min is only an infimum).
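A quick sketch of this behavior, using plain gradient descent on a hypothetical separable toy dataset (the data, step size, and iteration count are all illustrative assumptions): the loss approaches its infimum of 0 while the norm of the iterates keeps growing, so there is no point the iterates converge to.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated clusters with labels in {-1, +1}: linearly separable.
X = np.vstack([rng.normal(2.0, 0.3, (20, 2)), rng.normal(-2.0, 0.3, (20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])

w = np.zeros(2)
lr = 0.5
for _ in range(5000):
    z = y * (X @ w)
    # Gradient of mean log(1 + exp(-y x.w)): -mean of y x / (1 + exp(z)).
    grad = -(X * (y / (1.0 + np.exp(z)))[:, None]).mean(axis=0)
    w -= lr * grad

final_loss = np.logaddexp(0.0, -y * (X @ w)).mean()
print(final_loss)             # small: the objective approaches its infimum 0
print(np.linalg.norm(w))      # large and still growing: no minimizer exists
```

Running it longer only makes the norm larger; the direction of w stabilizes (toward the max-margin separator, per the linked paper) but its magnitude diverges.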
I'm transferring the issue to benchmark_logreg_l2 as it seems to be less directly linked to benchopt. |
PR welcome :)
The logistic regression problem sets all include a penalty. It would be very interesting, at least to me, to add the zero-penalty case.
Note: For n_features > n_samples, like the 20 news dataset, this is real fun (from an optimization point of view).
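For a feel of what the zero-penalty case looks like in that regime, here is a hedged sketch on a synthetic stand-in for 20 news (the dataset, sizes, and solver settings are assumptions, not the benchmark's actual config). It approximates zero penalty in scikit-learn with a very large C; recent scikit-learn versions also accept penalty=None directly.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# n_features > n_samples, so the data is (almost surely) separable.
X, y = make_classification(n_samples=60, n_features=300, random_state=0)

# C=1e10 makes the L2 term negligible, approximating the unpenalized problem.
clf = LogisticRegression(C=1e10, solver="lbfgs", max_iter=10_000).fit(X, y)

# Unpenalized logistic loss: sum_i log(1 + exp(-y_i * (x_i @ w + b))),
# with labels mapped from {0, 1} to {-1, +1}.
margins = (2 * y - 1) * (X @ clf.coef_.ravel() + clf.intercept_[0])
total_loss = np.logaddexp(0.0, -margins).sum()
print(total_loss / len(y))    # small per-sample loss, as on separable data
```

As discussed above, the small loss here does not mean the coefficients converged to a minimizer; on separable data the argmin is empty and only the objective value settles.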