Hyperparameter Tuning #228
Great question (thanks for all your high-quality feedback lately). As I understand it, you're essentially using `caret` to tune and then porting the winning hyperparameters into `sl3`. We need to better support building out a set of learners over a grid of hyperparameter values, and it's been on the todo list for far too long (see, e.g., #2). For now, this is still a DIY thing, unfortunately. Having enumerated such a grid for each learner, I think you would be better off just "concatenating" the grids together to form your continuous SL library instead of doing the two-stage SL. If you wanted to enforce sparsity in the set of learners selected by SuperLearner, you could do so by adjusting your metalearner with an appropriate constraint. It's worth having a worked example of this, so I'll plan to add one.
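A minimal sketch of the "concatenated grid" idea described above, assuming `sl3`'s `make_learner` / `Lrnr_sl` interface; the learner choices and hyperparameter values here are illustrative, not a recommended grid:

```r
library(sl3)

# Enumerate a hyperparameter grid for one learner type (values illustrative)
glmnet_grid <- expand.grid(alpha = c(0, 0.5, 1))

# Build one learner per grid row
glmnet_learners <- lapply(seq_len(nrow(glmnet_grid)), function(i) {
  make_learner(Lrnr_glmnet, alpha = glmnet_grid$alpha[i])
})

# Repeat for other learner types, then concatenate the grids into one library
ranger_learners <- list(
  make_learner(Lrnr_ranger, num.trees = 500),
  make_learner(Lrnr_ranger, num.trees = 1000)
)
learner_library <- c(glmnet_learners, ranger_learners)

# A single SuperLearner over the combined library replaces the two-stage SL;
# a sparsity-enforcing metalearner could be supplied here as well
sl <- Lrnr_sl$new(learners = learner_library)
```

The point is that each hyperparameter setting becomes its own learner in one flat library, so the metalearner weighs all settings jointly rather than tuning each learner first and stacking the winners.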
Ah, so I'm actually using
So some distinctions: random search instead of grid search, and training discrete models one at a time, rather than in the sl3 framework. I think I read a paper somewhere indicating that a
Hope that's helpful! I also tuned the
I'm curious -- as I haven't seen built-in hyperparameter optimization functionality in `sl3`, is there a recommended way to go about doing that? Right now I'm essentially using `caret` to tune, then taking the `best_tune` of every model I fit and plopping those arguments into the appropriate `make_learner` call. Any plans to build this into `sl3`, or is the workflow I'm describing essentially the recommended move?
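For concreteness, the workflow described above might look like the following sketch. It assumes a data frame `train_df` with outcome `y` (both hypothetical), and note that `caret` stores the winning settings in the `bestTune` slot; whether a given `caret` parameter maps one-to-one onto an `sl3` learner's arguments should be checked per learner:

```r
library(caret)
library(sl3)

# Tune with caret using random search (settings illustrative)
ctrl <- trainControl(method = "cv", number = 5, search = "random")
fit <- train(y ~ ., data = train_df, method = "glmnet",
             trControl = ctrl, tuneLength = 20)

# caret keeps the winning hyperparameters in fit$bestTune
best <- fit$bestTune

# Plop those arguments into the matching sl3 learner
lrnr <- make_learner(Lrnr_glmnet, alpha = best$alpha, lambda = best$lambda)
```

This gives one tuned learner per model family, which is exactly the two-stage pattern the maintainer's reply suggests replacing with a single concatenated library.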