Once we've reached a local minimum, we try jumping around in the solution space by removing n jobs from each route, re-inserting them heuristically, then re-running a local search phase. Here n depends on the exploration level and the current depth stage.
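For reference, a minimal sketch of that ruin-and-recreate step is below. All names (`Solution`, `Route`, `cheapest_insertion`, `local_search`) are hypothetical placeholders, not the actual project code.

```cpp
#include <cstddef>
#include <random>
#include <vector>

struct Job { unsigned id; };
struct Route { std::vector<Job> jobs; };
struct Solution { std::vector<Route> routes; };

void ruin_and_recreate(Solution& sol, unsigned n, std::mt19937& rng) {
  std::vector<Job> removed;

  // Ruin: take n jobs out of every route (picked at random here).
  for (auto& route : sol.routes) {
    for (unsigned i = 0; i < n && !route.jobs.empty(); ++i) {
      std::uniform_int_distribution<std::size_t> pick(0, route.jobs.size() - 1);
      auto it = route.jobs.begin() + static_cast<std::ptrdiff_t>(pick(rng));
      removed.push_back(*it);
      route.jobs.erase(it);
    }
  }

  // Recreate: re-insert the removed jobs with an insertion heuristic, then
  // run another local search phase from the resulting solution.
  // cheapest_insertion(sol, removed);
  // local_search(sol);
}
```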
While the overall principle of ruining and re-creating a solution once in a while has proven its worth, the current implementation has major drawbacks.
n is between 1 and 5. This was deemed enough at implementation time while testing on Solomon 100-job instances, but it does not make sense on much bigger ones. The number of removed tasks should probably be proportional to the instance size.
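As a rough illustration of what "proportional" could look like, here is a hypothetical sizing rule; the 2% ratio is an arbitrary assumption, not a tested value.

```cpp
#include <algorithm>
#include <cstddef>

// Roughly 2% of the jobs per exploration level, but never less than 1.
unsigned removal_count(std::size_t nb_jobs, unsigned exploration_level) {
  auto base = static_cast<unsigned>(0.02 * static_cast<double>(nb_jobs));
  return std::max(1u, base * exploration_level);
}
```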
The current process removes exactly the same number of tasks in all routes. It would probably be much more efficient to remove "overall best candidates" (to be defined). This way some routes may stay virtually untouched while others could be mostly destroyed.
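A possible shape for that global selection: rank jobs across all routes by their removal gain and keep the k best, regardless of which route they belong to. Everything below is a sketch under that assumption; `gains[r][j]` is a made-up input holding the cost saved by removing job j from route r.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Candidate {
  std::size_t route_rank;
  std::size_t job_rank;
  double gain; // cost saved by removing this job from its route
};

std::vector<Candidate> best_candidates(
    const std::vector<std::vector<double>>& gains, std::size_t k) {
  std::vector<Candidate> all;
  for (std::size_t r = 0; r < gains.size(); ++r) {
    for (std::size_t j = 0; j < gains[r].size(); ++j) {
      all.push_back({r, j, gains[r][j]});
    }
  }

  // Keep the k candidates with the highest gain, regardless of their route,
  // so some routes may lose many jobs while others stay untouched.
  std::sort(all.begin(), all.end(),
            [](const Candidate& a, const Candidate& b) { return a.gain > b.gain; });
  if (all.size() > k) {
    all.resize(k);
  }
  return all;
}
```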
The current way of finding the best candidates per route is somewhat expensive, especially when applied several times in a row.
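One way to cheapen those repeated searches could be to cache per-route removal gains and only recompute the routes touched by the previous ruin step. This is just an idea, not existing project code.

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

struct GainCache {
  std::vector<std::vector<double>> gains; // gains[route][job]
  std::unordered_set<std::size_t> dirty;  // routes modified since the last ruin step

  template <typename RecomputeFn>
  void refresh(RecomputeFn recompute) {
    // Only the routes flagged as dirty pay the recomputation cost.
    for (std::size_t r : dirty) {
      gains[r] = recompute(r);
    }
    dirty.clear();
  }
};
```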
Those statements raise more questions than answers, but I really feel we could make better use of the search depth by improving the current logic.