-
Hi, I have been learning Halide for a few weeks, but I do not know how to randomly generate training data for training the cost model.
-
The supplemental material for that paper is the branch "standalone_autoscheduler". The random pipeline generator is here: https://github.com/halide/Halide/tree/standalone_autoscheduler/apps/random_pipeline
-
@abadams Just curious, any plans to merge in standalone_autoscheduler or standalone_autoscheduler_gpu? I would guess that if I can get adams2019 trained on my hardware, it would beat any of my hand-done schedules. (Also, thanks for all of the hard work to create/maintain Halide!) @minjac if you do figure out how to train adams2019, please let me know the details :-) I've got some armv7 and aarch64 hardware that I've been wanting to train on, but couldn't figure out how to get the training working.
-
Adams2019 was merged! It's in src/autoschedulers. If you have specific apps you want to schedule, I'd just use it in autotuning mode rather than worrying about doing a big training job.
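For readers finding this later, a dry-run sketch of what invoking the autotuning loop might look like. The generator name, sample directory, and argument order below are illustrative assumptions, not the script's documented interface; check the usage string at the top of src/autoschedulers/adams2019/autotune_loop.sh for the real arguments:

```shell
#!/bin/sh
# Hypothetical dry-run of autotune_loop.sh for the adams2019 autoscheduler.
# All names below are placeholders for this sketch.
GEN=./bin/my_pipeline.generator   # your compiled generator binary (assumed name)
TARGET=host                       # Halide target string
WEIGHTS=baseline.weights          # starting weights shipped with adams2019
SAMPLES=samples                   # output directory for autotuning samples
CMD="bash autotune_loop.sh ${GEN} my_pipeline ${TARGET} ${WEIGHTS} ${SAMPLES}"
echo "would run: ${CMD}"          # drop the echo to actually run it
```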
-
@abadams thanks. I guess I was a bit confused, since it looks like the random_pipeline stuff didn't get merged in. Unfortunately, my pipeline uses the Python bindings, and I wasn't able to get it to work with autotune_loop.sh (since it expects a generator).
-
@abadams I am revisiting the random_pipeline code and scripts with the goal of obtaining a new set of weights for Cascade Lake machines with AVX-512 extensions. I am also trying to figure out what the best HL_MACHINE_PARAMS triple for these machines would be. For now I am picking 8,36608000,40, since these machines have 8 cores and 36608K of L3 cache, and the balance parameter of 40 is just carried over from the previous settings. It might be worth performing a grid search, similar to the one done in generate_master_autotuned.sh, to find the best value for the balance parameter.

As for the methodology of obtaining a new set of weights, it seems there is no script for that. There is a script (autotune_loop.sh) to tune a single random pipeline and get updated weights that way. There are also scripts (e.g. bench_100.sh) to generate and build random pipelines, as well as to benchmark those pipelines and extract results from those runs. But there is no script to create a set of weights by retraining the model on all the random pipelines.

What would be the right way to approach this? How was the adams2019 baseline.weights set obtained? If I want to recreate it: how many random pipelines should I generate? How deep should those pipelines be? What should the initial set of weights be; would random weights be a good start? What should the retraining procedure be?
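The grid search over the balance term mentioned above could be sketched as a small shell loop. The candidate values and the benchmark step below are placeholders of my own, not settings taken from the Halide repo:

```shell
#!/bin/sh
# Sketch: sweep the balance term of HL_MACHINE_PARAMS
# (cores,last_level_cache_bytes,balance), holding the core count and
# cache size fixed. Candidate values and the benchmark are placeholders.
for balance in 10 20 40 80 160; do
    HL_MACHINE_PARAMS="8,36608000,${balance}"
    export HL_MACHINE_PARAMS
    echo "benchmarking with HL_MACHINE_PARAMS=${HL_MACHINE_PARAMS}"
    # ./bench_100.sh > "results_${balance}.txt"   # run the real benchmark here
done
```

Each iteration's results could then be compared to pick the balance value with the best benchmark times.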