The -x flag sets a trade-off between "exploration level" (hence computing time) and solution quality one can hope to reach. It actually acts on two different factors:
Spawning more or fewer parallel searches (those currently run totally independently of each other, each based on a different predefined heuristic tuning).
Pushing each of the searches further by adding steps where we incrementally rework bigger parts of the solution in order to escape local minima.
At the end of the day, only the search yielding the best solution is "used", so running 32 searches in parallel (-x 5) is somewhat of a waste of resources. We've been doing it this way so far because of course we don't know beforehand which of the heuristic tunings will lead to the best solution, and it works ™.
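For illustration, here is a minimal C++ sketch of that "run everything, keep the best" scheme. Everything in it (`run_search`, `HeuristicTuning`, `solve`) is made up for this issue and does not reflect the actual code; it just shows the searches running independently with only the cheapest result surviving:

```cpp
// Minimal sketch, not the project's actual code: run_search and
// HeuristicTuning are hypothetical names used for illustration only.
#include <algorithm>
#include <future>
#include <vector>

struct Solution {
  double cost = 0.0;
};

struct HeuristicTuning {
  unsigned id = 0;  // stand-in for real per-search parameters
};

// Stand-in for a full local-search run seeded by one heuristic tuning.
Solution run_search(const HeuristicTuning& tuning) {
  return Solution{static_cast<double>(tuning.id)};  // dummy cost
}

Solution solve(const std::vector<HeuristicTuning>& tunings) {
  // Spawn one independent search per tuning.
  std::vector<std::future<Solution>> futures;
  futures.reserve(tunings.size());
  for (const auto& t : tunings) {
    futures.push_back(std::async(std::launch::async, run_search, t));
  }

  std::vector<Solution> results;
  results.reserve(futures.size());
  for (auto& f : futures) {
    results.push_back(f.get());
  }

  // Only the best of the N solutions is returned; all other work is discarded.
  return *std::min_element(
      results.begin(), results.end(),
      [](const Solution& a, const Solution& b) { return a.cost < b.cost; });
}
```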
It would be interesting to explore how we could narrow down the number of searches during optimization, only keeping the most promising ones depending on the context.
My hope here is to reduce "unproductive" computing time, which could bring faster convergence, or allow intensifying the search on more promising leads without increasing the overall computing effort.
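One way to picture that pruning idea is a successive-halving loop: all searches advance by a fixed budget, the least promising half is dropped, and the freed effort goes to the survivors. The sketch below is purely hypothetical (`SearchState` and `advance` are invented names, and nothing here reflects existing code):

```cpp
// Hypothetical successive-halving sketch of pruning searches mid-run.
#include <algorithm>
#include <vector>

struct SearchState {
  double current_cost = 0.0;  // best cost found so far by this search
  // ... solution data, heuristic tuning, RNG state, etc.
};

// Stand-in for pushing one search further by a fixed amount of work.
void advance(SearchState& s, int steps) {
  s.current_cost -= 0.01 * steps;  // dummy improvement
}

// Run pruning rounds, dropping the worse half each time, and return the
// best surviving search. Assumes a non-empty pool.
SearchState prune_and_intensify(std::vector<SearchState> pool,
                                int steps_per_round, int rounds) {
  for (int r = 0; r < rounds && pool.size() > 1; ++r) {
    for (auto& s : pool) {
      advance(s, steps_per_round);
    }
    // Keep only the most promising half; the freed computing effort could
    // be reinvested in deeper reworking steps for the survivors.
    std::sort(pool.begin(), pool.end(),
              [](const SearchState& a, const SearchState& b) {
                return a.current_cost < b.current_cost;
              });
    pool.resize((pool.size() + 1) / 2);
  }
  return pool.front();
}
```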
Closing as this is somewhat stale. In particular, deciding which searches to run based on heuristic solutions does not really seem to make sense: those contain arbitrary biases, so the first local search descent effectively reshuffles which ones are the "best".
Also we have other more immediate leads for improvements, e.g. #874.