This repository has been archived by the owner on Jul 23, 2021. It is now read-only.

Automated benchmarking in CI #208

Open
Methuselah96 opened this issue Dec 8, 2020 · 3 comments

Comments


Methuselah96 commented Dec 8, 2020

We need to do some performance testing before we release anything, to make sure we haven't regressed in any of the changes we've made. It would also be great if there were a way to do this through CI. In the original repo, it looks like this was done informally, with Lee just noting when something didn't look fast.

Methuselah96 added this to the 4.0 milestone Dec 8, 2020

jdeniau commented Dec 8, 2020

It seems like there are performance tests according to CONTRIBUTING.md, but, as you said, they appear to be triggered manually.


Methuselah96 commented Dec 8, 2020

Yeah, you're right, there are already some manual benchmarking tools.

One option (for the upcoming publish) would be to make sure those benchmarks cover the code we've modified and run them against our changes to verify we haven't regressed. I don't feel comfortable releasing what we have so far without doing this step.

But ideally I'd like the benchmarking to be done in CI so that we can catch any regressions before a PR gets merged.
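
For illustration only, here is a minimal sketch of the kind of regression check described above, written against the benchmark.js library (not necessarily what the repo's existing benchmarks use); the baseline figure and the 10% threshold are assumptions:

```ts
import Benchmark from 'benchmark';
import { List } from 'immutable';

// Assumed baseline recorded from a previous run on the same machine
// (e.g. from master or the last release); purely illustrative.
const BASELINE_OPS_PER_SEC = 1_000_000;
const TOLERANCE = 0.1; // fail on a >10% slowdown

new Benchmark.Suite()
  .add('List.push x100', () => {
    let list = List<number>();
    for (let i = 0; i < 100; i++) {
      list = list.push(i);
    }
  })
  .on('cycle', (event: Benchmark.Event) => {
    const bench = event.target as Benchmark;
    console.log(String(bench));
    if (bench.hz < BASELINE_OPS_PER_SEC * (1 - TOLERANCE)) {
      console.error(`Possible regression in "${bench.name}"`);
      process.exitCode = 1; // make the CI job fail
    }
  })
  .run();
```

Note that an absolute ops/sec baseline is only meaningful on fixed hardware; on shared CI runners the comparison has to be relative, with the baseline and the PR measured in the same job.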

Methuselah96 changed the title from Performance testing to Automated benchmarking in CI Dec 8, 2020
Methuselah96 removed this from the 4.0 milestone Dec 13, 2020

bdurrer commented Feb 10, 2021

We discussed automated performance tests and the problems involved on Slack. Because all CI/CD runners are (to a certain extent) virtualized, it's hard to get timings that are comparable between runs.

So the solution I came up with is to use an unused Raspberry Pi as the test runner. It has fixed hardware and no interfering software, so the results are reproducible.
My idea is to use GitHub API hooks: run a custom check on every PR that compares the benchmark times against a baseline (e.g. the last release or master) and have the runner add the performance diff to the PR (see the sketch after this comment).

So far I am only 50% done. I'll add the runner code as a repo under the immutable-js-oss organization when it is ready.
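
A minimal sketch of the compare-and-comment step bdurrer describes, assuming the runner emits a simple list of results for both the baseline (master) and PR runs; the result shape, the owner/repo names, and the GITHUB_TOKEN variable are assumptions, and the comment is posted with @octokit/rest:

```ts
import { Octokit } from '@octokit/rest';

// Assumed result shape produced by the benchmark runner for both the
// baseline (e.g. master) run and the PR run.
interface BenchResult {
  name: string;
  opsPerSec: number;
}

// Render a markdown table of the performance diff for the PR comment.
function formatDiff(baseline: BenchResult[], current: BenchResult[]): string {
  const header =
    '| Benchmark | baseline (ops/s) | PR (ops/s) | change |\n| --- | --- | --- | --- |';
  const rows = current.map((cur) => {
    const base = baseline.find((b) => b.name === cur.name);
    const change = base
      ? `${((cur.opsPerSec / base.opsPerSec - 1) * 100).toFixed(1)}%`
      : 'n/a';
    return `| ${cur.name} | ${base ? base.opsPerSec : 'n/a'} | ${cur.opsPerSec} | ${change} |`;
  });
  return [header, ...rows].join('\n');
}

async function postDiff(
  prNumber: number,
  baseline: BenchResult[],
  current: BenchResult[],
): Promise<void> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  // PR comments are created through the issues API; the owner/repo
  // names here are placeholders.
  await octokit.issues.createComment({
    owner: 'immutable-js-oss',
    repo: 'immutable-js',
    issue_number: prNumber,
    body: formatDiff(baseline, current),
  });
}
```

The markdown table lands directly on the PR as a comment, so reviewers can see the performance diff without leaving the review.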
