Port over rule changes from Flux #38
Also, to be clear, …
This can be closed; the 3rd bullet point was closed by FluxML/Flux.jl#1868 instead of here.
How are these supposed to be used now? Trying the Flux builtin `InvDecay` and `ExpDecay` doesn't seem to work anymore.
Have you seen https://fluxml.ai/Flux.jl/stable/training/optimisers/#Scheduling-Optimisers? Basically, use ParameterSchedulers.jl. The Flux docs are actually just out of date, because we added functionality to ParameterSchedulers that makes this easier. http://fluxml.ai/ParameterSchedulers.jl/dev/tutorials/optimizers/ should cover everything.
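Roughly, a schedule on its own looks like this (a minimal sketch; the positional `Exp(start, decay)` constructor follows the usage elsewhere in this thread, and keyword names may vary between versions):

```julia
using ParameterSchedulers

# Exponential decay: the value at step t is start * decay^(t - 1).
s = Exp(1e-2, 0.8)

s(1)  # 0.01
s(2)  # 0.008

# Schedules also iterate, which is handy for zipping with epochs:
for (eta, epoch) in zip(s, 1:3)
    @show epoch eta
end
```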
Sorry for crossposting, since this probably doesn't belong here, but I don't know which package exactly is responsible.

```julia
optstate = Flux.setup!(Optimisers.OptimisersChain(Optimisers.AdamW(), ParameterSchedulers.Exp(1, 1)))
# so that I can use update!(optstate, model, grad[1])
```

I tried many combinations of these calls without success.
That code doesn't look right in many ways:

- `Flux.setup` (no `!`) takes the model as a second argument: `Flux.setup(rule, model)`.
- The chain type is `Optimisers.OptimiserChain`, not `OptimisersChain`.
- A schedule like `ParameterSchedulers.Exp` is not an optimisation rule, so it can't go inside an `OptimiserChain`; apply the schedule separately, e.g. by adjusting the learning rate each epoch (see the sketch below).
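A minimal corrected sketch of that pattern (the toy `model`, loss, and hyperparameters here are placeholders I've made up; `adjust!` is one of several ways to apply a schedule, and exact signatures may differ across versions):

```julia
using Flux, Optimisers, ParameterSchedulers

model = Dense(2 => 1)                       # hypothetical toy model
sched = ParameterSchedulers.Exp(1e-3, 0.9)  # start = 1e-3, decay = 0.9 per epoch
optstate = Flux.setup(Optimisers.AdamW(1e-3), model)

for epoch in 1:10
    # Set this epoch's learning rate from the schedule.
    Flux.adjust!(optstate, sched(epoch))
    grad = Flux.gradient(m -> sum(m([1f0, 2f0])), model)
    Flux.update!(optstate, model, grad[1])
end
```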
Thank you for pointing me to the complex schedulers; I indeed hadn't seen that before. Edit: moved this problem to FluxML/ParameterSchedulers.jl#61.