This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

[CI/CD] Run all benchmarks as (unit)tests on every PR #2806

Closed
2 tasks
bkontur opened this issue Jul 2, 2023 · 6 comments · Fixed by #2846

Comments

@bkontur
Contributor

bkontur commented Jul 2, 2023

We only regenerate benchmarks from time to time - for releases, or when something big lands in Cumulus. We don't regenerate them on PRs to master or for every companion; companions just check that the benchmarks compile.

The consequence is that we can have non-working benchmarks in master.
A good example: #2712 (where we want all benchmarks to pass with the new bench bot)
#2727
#2765
#2777
everything was fixed, and in the meantime another PR again broke the previously-working asset-hub-westend: https://gitlab.parity.io/parity/mirrors/cumulus/-/jobs/3099790


Issues/Solutions:

  • add a pipeline to the CI for every PR - run all benchmarks as (unit) tests
    • we don't need performance here, only to verify that the benchmarking functions work and that the new PR does not break anything (--steps=1 and --repeat=1)
    • we don't want to regenerate the weight files
    • we want as little additional CI overhead as possible
      • ideally this runs fast without compiling polkadot-parachain again, or reuses a previous build from CI (with --features=runtime-benchmarks)
  • is there a way to run any/all benchmarking function(s) as unit tests, without compiling the whole binary:
    `cargo build --profile=production --bin=polkadot-parachain --features=runtime-benchmarks`

    • can we reuse the bench bot for that?
    • does the benchmark framework support that?
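A short-run invocation along these lines could serve as such a CI check. This is only a sketch: `benchmark pallet` and the `--steps`/`--repeat`/`--pallet`/`--extrinsic` flags are real `frame-benchmarking-cli` options, but the chain name and wildcard selection below are illustrative assumptions, not a confirmed Cumulus CI setup:

```shell
# Sketch: execute every benchmarking function exactly once, purely as a
# smoke test - no weight files are written, no timing data is kept.
# The chain spec and wildcard selection here are illustrative assumptions.
./target/release/polkadot-parachain benchmark pallet \
    --chain=asset-hub-westend-dev \
    --pallet="*" \
    --extrinsic="*" \
    --steps=1 \
    --repeat=1
```

With `--steps=1 --repeat=1` each benchmark runs with a single component value and a single repetition, so a broken benchmarking function fails loudly while the job stays cheap.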
@joepetrowski
Contributor

add pipeline to the CI for every PR - run all benchmarks as (unit)tests

👍

@bkontur
Contributor Author

bkontur commented Jul 3, 2023

@bkchr @ggwpez guys, wdyt? do you have any suggestions?

@bkchr
Member

bkchr commented Jul 3, 2023

Benchmarks in general already support running themselves as part of cargo test, as shown here.

In Polkadot we also run all benchmarks in some "short setup": https://github.com/paritytech/polkadot/blob/04d2c187e46290f6f3002e76a1a5575bca68f5be/scripts/ci/gitlab/pipeline/short-benchmarks.yml#L4-L27
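As a concrete illustration of the `cargo test` route: FRAME pallets can generate one unit test per benchmark with the `impl_benchmark_test_suite!` macro from `frame-benchmarking`. This is a sketch under assumptions - the mock names (`new_test_ext`, `Test`) follow the usual pallet-template convention and are not taken from Cumulus:

```rust
// Sketch of how a FRAME pallet hooks its benchmarks into `cargo test`.
// `impl_benchmark_test_suite!` is part of `frame-benchmarking`; the mock
// module names below are assumptions based on a typical pallet layout.
#[cfg(feature = "runtime-benchmarks")]
mod benchmarking {
    use super::*;
    use frame_benchmarking::{benchmarks, impl_benchmark_test_suite};

    benchmarks! {
        // ... benchmark definitions for this pallet ...
    }

    // Generates a `#[test]` per benchmark, so
    // `cargo test --features runtime-benchmarks -p <pallet>`
    // executes each benchmarking function once against the mock runtime.
    impl_benchmark_test_suite!(
        Pallet,
        crate::mock::new_test_ext(),
        crate::mock::Test,
    );
}
```

Note this runs against the pallet's mock runtime, not the real parachain runtimes - which is exactly the limitation ggwpez points out below.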

@ggwpez
Member

ggwpez commented Jul 3, 2023

We should indeed run the benches quickly, as Basti linked above. The unit tests themselves don't run against the real runtimes.
If you want to make the release process less painful we should indeed run them with --steps=2 --repeat=1.

@joepetrowski
Contributor

If you want to make the release process less painful we should indeed run them with --steps=2 --repeat=1.

You mean in the CI tests? Maybe we mean the same thing, but just in case, what makes the release difficult is benchmarks compiling/passing CI but then failing assertions when actually run at release time. So we should check that they still work, but still do the full steps/repeat for the release.

@bkontur
Contributor Author

bkontur commented Jul 10, 2023

@bkchr thank you, I prepared the same kind of "short-benchmark" pipeline for Cumulus and it is working: #2846

@jsidorenko
and it also caught the problem with asset-hub-westend which I reported on Element: https://gitlab.parity.io/parity/mirrors/cumulus/-/jobs/3157956 (yes, the fix is ready, we just need to update the substrate ref)
