Add automatic benchmarks to Github Actions #103
base: freeze-feat-andreas
Conversation
Hi, can you point your PR to `freeze-feat-andreas`?
Since it was Andreas who introduced me to flamegraph and suggested having a testing suite, let me just ping them to get their opinion on it too. CC: @AndreasBackx
This would be a great direction imo. I agree that running this on CI/CD might not be super accurate, though I would say let's give it a shot and see how stable those measurements are for now. This would definitely be better than nothing.

I haven't got much experience with the benchmarking tools out there, though I would again focus on what's free and simply experiment. We can make this a non-blocking signal in CI/CD or elsewhere. Maybe make a bot automatically post the information on a PR. Though even simply making this easy to run is already hugely impactful.

It would be nice to see benchmarks for a combination of screen resolutions, scalings, and the number of outputs. I'd say a permutation of the following would already be plenty:
This might be too many permutations, though we can look at the most important ones. It would also be interesting to see this for maybe 8K (7680x4320) or 4K super ultrawides (7680x2160). Finally, it would be nice to have this easily visible when making PRs. I don't know if GitHub Actions makes it easy to automatically give some feedback or not.
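One way to cover such a matrix would be to reconfigure the virtual output between runs. This is only a hypothetical sketch, not code from the PR: the output name, resolutions, scales, and the bare `wayshot` invocation are all assumptions.

```shell
# Hypothetical sketch: iterate over a resolution/scale matrix on a headless
# sway session and benchmark each combination with hyperfine.
resolutions="1920x1080 2560x1440 3840x2160"
scales="1 1.5 2"

for res in $resolutions; do
  for scale in $scales; do
    # Reconfigure the virtual output created by the headless backend
    # (output name HEADLESS-1 is an assumption).
    swaymsg output HEADLESS-1 mode --custom "$res"
    swaymsg output HEADLESS-1 scale "$scale"
    # Benchmark a full-output capture; JSON results become a CI artifact.
    hyperfine --warmup 3 \
      --export-json "bench-${res}-scale${scale}.json" \
      "wayshot"
  done
done
```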
Hi again!
Just pushed a workflow to automatically comment the benchmark results on PRs. Currently it will comment on every PR sync (new commit, force push, etc.), so I'm not sure if it would be too spammy, but we can adjust that later if needed. I'm also working on the other test cases you suggested :-)
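A commenting step like the one described could be sketched roughly as follows; this is an assumption about the shape of such a step, not the actual workflow from this PR, and the file names and `PR_NUMBER` variable are made up for illustration.

```shell
# Hypothetical CI step: assemble a Markdown summary and post it on the PR.
# Assumes GH_TOKEN is provided by the workflow and PR_NUMBER comes from the
# event payload; hyperfine-summary.md is an assumed artifact of an earlier step.
{
  echo "## Benchmark results"
  echo
  cat hyperfine-summary.md
} > comment.md

gh pr comment "$PR_NUMBER" --body-file comment.md
```

To reduce spam on every sync, newer `gh` versions also support `gh pr comment --edit-last`, which updates the previous comment instead of posting a new one.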
Also, I spent way longer than I'd like to admit trying to figure out a way to get the bot to post the SVG in the PR comment.
GitHub has a native CDN, right? I believe it's under the usercontent subdomain. Can we just keep the SVG data in memory and pass it to the CDN?
It does, but AFAIK it only works when you manually add a file through the browser, similarly to this (I don't think we can reproduce this in CI/CD since it would at the very least require user authentication). I just had the idea of uploading the SVG to a gist (since it's just code anyway) and then pointing to it in the PR comment. Let's see if this works. :D Also, had to fix a silly permission error that caused the last actions run to fail. |
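The gist idea could look something like this; the file names are assumptions, and the token used in CI would need the gist scope.

```shell
# Hypothetical sketch: upload the flamegraph SVG to a gist and link it from
# the PR comment. gh gist create prints the URL of the new gist.
gist_url=$(gh gist create flamegraph.svg --public \
  --desc "wayshot benchmark flamegraph")

# Link the gist from the comment body assembled in an earlier step.
echo "[flamegraph](${gist_url})" >> comment.md
```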
Feel free to contact me on Discord. I'm willing to buy some hosting to make this easier for you. Don't want to spam GitHub and potentially get blocked.
This is a WIP attempt at implementing automatic benchmarks in the project's CI/CD pipeline (#45). This should prove useful for catching potential performance regressions in the future.
I tried to achieve this by launching a headless `sway` session (using `WLR_BACKENDS=headless`) with a config that creates a virtual, headless output that `wayshot` can capture. We then run `wayshot` to take a screenshot and use flamegraph to plot a flame graph of the functions in `wayshot` that used the most CPU time, and also run a few benchmarks with hyperfine. The results of both are then saved as artifacts for later inspection.

Currently there are three main issues:

- The `flamegraph` plot is currently missing some symbols (probably due to misconfigured kernel options in the runner's VM)

Here is an example of a successful CI/CD run with these changes, as well as the `flamegraph`/`hyperfine` artifacts produced: https://github.com/nukelet/wayshot/actions/runs/8261924814

I'd like to get some feedback on whether this looks like a step in the right direction or if another approach would be better.
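The pipeline described above could be sketched along these lines. This is only an illustration under stated assumptions: the sway config path, the socket name, and the exact `wayshot`, `hyperfine`, and `cargo flamegraph` invocations are guesses, not copied from the workflow in this PR.

```shell
# Rough sketch of the benchmark job described above.

# 1. Start a headless sway session with a config defining a virtual output.
export WLR_BACKENDS=headless
sway -c ci/sway-headless.conf &    # config path is hypothetical
sleep 2                            # crude wait for the compositor to come up
export WAYLAND_DISPLAY=wayland-1   # socket name sway typically creates; may differ

# 2. Benchmark a capture with hyperfine; the summary becomes a CI artifact.
hyperfine --warmup 3 --export-markdown hyperfine-summary.md "wayshot"

# 3. Profile a single capture with cargo-flamegraph (needs perf in the runner;
#    missing symbols suggest perf_event kernel settings may need adjusting).
cargo flamegraph --bin wayshot -o flamegraph.svg
```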