
Tuning module #109

Open
N-Wouda opened this issue Nov 11, 2022 · 2 comments
Assignees
Labels
enhancement New feature or request

Comments

N-Wouda (Owner) commented Nov 11, 2022

See also #85. Most ALNS heuristics have a ton of (hyper)parameters. This happens more or less naturally, since many things can be tweaked - in operators, callbacks, but also in the core accept/stop/operator selection classes.

We can help users with tuning. The ML community already has a lot of tooling for this, e.g. keras-tuner, ray.tune, etc. Those are used by many people, apparently with some success. We could look at how these work and borrow some of the good ideas for our own.
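As a starting point, even a plain random search over a parameter space gets users quite far. Below is a minimal, self-contained sketch of that idea; the parameter names (start_temperature, decay, reaction_factor) are illustrative only, and the evaluate callable stands in for a full ALNS run on some instance - none of this reflects a decided API for the package.

```python
import random

# Hypothetical search space. Tuples denote continuous ranges, lists
# denote discrete choices. The parameter names are illustrative only.
SPACE = {
    "start_temperature": (1.0, 100.0),
    "decay": (0.90, 0.999),
    "reaction_factor": [0.1, 0.25, 0.5],
}


def sample(space, rng):
    """Draws a single configuration from the search space."""
    cfg = {}
    for name, spec in space.items():
        if isinstance(spec, tuple):  # continuous range
            lo, hi = spec
            cfg[name] = rng.uniform(lo, hi)
        else:  # discrete choices
            cfg[name] = rng.choice(spec)
    return cfg


def random_search(evaluate, space, budget=50, seed=0):
    """
    Evaluates ``budget`` random configurations and returns the best one,
    assuming lower objective values are better. ``evaluate`` would run
    the heuristic once with the given configuration and return its best
    objective value.
    """
    rng = random.Random(seed)
    best_cfg, best_obj = None, float("inf")
    for _ in range(budget):
        cfg = sample(space, rng)
        obj = evaluate(cfg)
        if obj < best_obj:
            best_cfg, best_obj = cfg, obj
    return best_cfg, best_obj
```

The dedicated tuners mentioned above essentially replace the uniform sampling here with something smarter (Bayesian optimisation, successive halving, racing), so an interface like this evaluate-callable is also roughly what we would need to hook into them.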

@N-Wouda N-Wouda added the enhancement New feature or request label Nov 11, 2022
@N-Wouda N-Wouda self-assigned this Nov 11, 2022
N-Wouda (Owner, Author) commented Nov 11, 2022

Or should we just interface with one or more of those existing tuning packages?

N-Wouda (Owner, Author) commented May 12, 2024

For now, at least document how to tune a bit better, and provide some pointers to places that I know have done this reasonably well.
