
Directions #85

Closed · N-Wouda opened this issue Jun 20, 2022 · 6 comments

N-Wouda (Owner) commented Jun 20, 2022

[partially based on https://doi.org/10.1016/j.cor.2022.105903, thanks @leonlan]

  • Fewer parameters/less tuning. Can we learn some of these? See also Bandit weight scheme #73.
  • Does adaptivity even make sense here? What about regular LNS?
  • How do we determine which operators work well together, and which operators to keep? Can we use that to guide the heuristic itself, or at least offer it as statistics after iterating?
leonlan (Collaborator) commented Jun 21, 2022

Some random thoughts related to the question: how good is LNS compared to ALNS?

  • Christiaens and Vanden Berghe (2020) show that an LNS using 1) a slack induction by string removals (SISR) destroy operator and 2) a greedy insertion with blinks repair operator obtains state-of-the-art results on many VRP variants.

  • In the scheduling literature, LNS goes by the name of Iterated Greedy (IG); see Stützle and Ruiz (2018). It is currently state-of-the-art for the permutation flow shop problem and parallel machine scheduling.

  • In Stützle and Ruiz (2018), two interesting conclusions are drawn in Chapter 4 where they perform numerical experiments with IG on the permutation flow shop problem:

    As a conclusion from this study, the most significant factor is the local search and the NEH reconstruction. Most other factors have less importance.

    • Here, the NEH reconstruction refers to greedy insertion using a specific ordering of the unassigned jobs (in short, largest total processing time first). Moreover, many iterated greedy papers also show that local search seems to be the most important aspect of IG for obtaining SOTA scheduling results.
    • Local-search post-optimization seems not to play such an important role in VRPs. E.g., Christiaens and Vanden Berghe (2020) do not use local search, and François et al. (2019) mention that the local search procedure yields only tiny improvements. The original ALNS papers by Ropke and Pisinger also don't use local search.
  • There's little to no research on adaptive IG/ALNS in the literature. Even when multiple destroy/repair operators are considered, studies often test all possible pairings of a single destroy and a single repair operator, rather than selecting among them adaptively.

N-Wouda (Owner) commented Oct 18, 2022

There's also this paper about the A in ALNS: Turkes et al. (2021) (not sure if we've linked to it before). I read this as "it's probably not that beneficial in general, since it also adds complexity".

N-Wouda (Owner) commented Nov 8, 2022

We now have SISR as part of the CVRP example. We can add another example doing LNS with $\alpha$-UCB for a job shop problem later on: that ticks the IG box, and shows we're not just a one-trick-ALNS-pony.
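For reference, here's a minimal standalone sketch of the kind of UCB-style selection rule that $\alpha$-UCB refers to, in plain numpy. The $\alpha$ value, number of operators, and reward definition are illustrative assumptions, not the library's actual implementation:

import numpy as np

alpha = 0.05                 # exploration/exploitation trade-off (illustrative value)
num_ops = 4                  # e.g. all destroy/repair pairs
rewards = np.zeros(num_ops)  # cumulative reward per operator
counts = np.zeros(num_ops)   # number of times each operator was selected

def select_operator(iteration):
    # Average reward plus an exploration bonus that shrinks as an operator
    # gets used more often.
    avg = rewards / np.maximum(counts, 1)
    bonus = np.sqrt(alpha * np.log(1 + iteration) / np.maximum(counts, 1))
    return int(np.argmax(avg + bonus))

def update(op, reward):
    counts[op] += 1
    rewards[op] += reward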

Another good direction might be to offer more diagnostics. Can we, for example, help users with tuning parameters, or provide tools to efficiently tune an ALNS instance?

leonlan (Collaborator) commented Nov 9, 2022

There are three "parameter groups" that we might want to tune in ALNS:

  • ALNS itself (parameters such as the destroy rate, and which destroy and repair operators to use)
  • The operator selection scheme
  • The acceptance criterion

It would be nice to have a tune module that does some of the following:

  • Given the space of parameters, return a sampled instance/configuration.
    • E.g., suppose we have ALNS with 2 destroy operators and 2 repair operators. tune.alns should return $n$ configurations of ALNS, each with a sampled combination of those destroy/repair operators.
    • E.g., suppose we use RecordToRecordTravel. tune.accept should return $n$ sampled configurations/instances of RRT.

A simple workflow for tuning the acceptance criterion could look as follows:

alns = make_alns(...)
init = ...
select = ...
stop = ...

data = {}
for idx, accept in tune.accept(RecordToRecordTravel, parameter_space, sampling_method):
    res = alns.iterate(init, select, accept, stop)
    data[idx] = res.best_state.objective()

# Index of the best (lowest objective) configuration
print(min(data, key=data.get))

This could be extended to tuning ALNS and operator selection schemes as well. I don't have much experience with tuning, so I don't know exactly what the tuning interface should look like.
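Purely as a hypothetical extension of the sketch above: tuning the operator selection scheme could follow the same pattern. The tune.select helper and RouletteWheel's parameter space here are assumptions, not existing interfaces:

data = {}
for idx, select in tune.select(RouletteWheel, parameter_space, sampling_method):
    res = alns.iterate(init, select, accept, stop)
    data[idx] = res.best_state.objective()

print(min(data, key=data.get))  # best operator selection configuration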

N-Wouda (Owner) commented Nov 9, 2022

We probably shouldn't invent our own half-baked solution for this. The ML community already has a lot of tooling here, e.g. keras-tuner, ray.tune, etc. Those are used by a lot of people, apparently with some success. At some later point it could pay off to see how they work, and whether we can offer a similar interface for our code.
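For illustration only, here is a rough sketch of what driving the earlier workflow with ray.tune's function API might look like. The search space, the import path for RecordToRecordTravel, and the reuse of alns/init/select/stop from that sketch are all assumptions:

from ray import tune
from alns.accept import RecordToRecordTravel  # assumed import path

def objective(config):
    # alns, init, select and stop are assumed to be set up as in the
    # earlier sketch; only the acceptance criterion is sampled here.
    accept = RecordToRecordTravel(config["start"], config["end"], config["step"])
    res = alns.iterate(init, select, accept, stop)
    tune.report(objective=res.best_state.objective())

analysis = tune.run(
    objective,
    config={
        "start": tune.uniform(1, 10),        # illustrative ranges
        "end": tune.uniform(0, 1),
        "step": tune.loguniform(1e-4, 1e-1),
    },
    num_samples=20,
    metric="objective",
    mode="min",
)
print(analysis.best_config)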

N-Wouda (Owner) commented Nov 19, 2022

I'm closing this issue because tuning is now in #109, and the other ideas from last summer have (for the most part) already been implemented.

N-Wouda closed this as completed Nov 19, 2022