
feature-request: Extend Benchmarking #37

Open
BobGneu opened this issue Sep 9, 2022 · 16 comments
Labels
enhancement New feature or request

Comments

@BobGneu
Contributor

BobGneu commented Sep 9, 2022

Does this project have specific goals around performance that are maintained or monitored over time?

Something like what Deno does, where performance would be tracked and maintained over time.

Recently I found this GitHub Actions-based tool that seems to be in the right vein, though it's tied to gh-pages.

I am definitely down to put together a PR for it if there is interest. I will need some input on what would be most beneficial to benchmark, though.

@Prozi
Owner

Prozi commented Sep 10, 2022

hello, thanks for the suggestions

I will try to connect https://github.com/Prozi/detect-collisions#benchmark, with some changes, with what you linked

@Prozi
Owner

Prozi commented Nov 9, 2022

I tried reading about what you pasted but had no success after the first try - it seems so complicated.

Have you tried using this tool? Maybe you can provide some help if needed?

@BobGneu
Contributor Author

BobGneu commented Nov 14, 2022

I have, and it only works well with GitHub Pages.

The trick is that the build agent is only there to

  1. Pull down the previous run
  2. Append the latest
  3. Truncate the results to the window of results you are interested in
  4. Commit & push the results back into the gh-pages branch
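The append-and-truncate part of the steps above could be sketched as a small Node script (file name, window size, and the result shape are arbitrary choices for illustration; steps 1 and 4 stay in the workflow as git operations):

```typescript
// Sketch of steps 2-3: append the latest run, truncate to the window of interest.
import * as fs from "node:fs";

interface BenchEntry {
  date: string;
  opsPerSec: number;
}

const WINDOW = 50; // keep only the most recent 50 runs (arbitrary)

function appendRun(history: BenchEntry[], latest: BenchEntry): BenchEntry[] {
  return [...history, latest].slice(-WINDOW);
}

// Step 1: the workflow checks gh-pages out first, so previous results are local.
const history: BenchEntry[] = fs.existsSync("bench-history.json")
  ? JSON.parse(fs.readFileSync("bench-history.json", "utf8"))
  : [];

const updated = appendRun(history, {
  date: new Date().toISOString(),
  opsPerSec: 0, // would come from the actual benchmark run
});
fs.writeFileSync("bench-history.json", JSON.stringify(updated, null, 2));
// Step 4: a later workflow step commits bench-history.json back to gh-pages.
```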

In looking at the stress script I think the better route would be to define some boundaries/cases and then build out the suites to support them:

  1. Insert 100 bodies, Non Overlapping
  2. Insert 100 bodies, Overlapping
  3. Update 100 bodies, Non Overlapping
  4. Update 100 bodies, Overlapping
  5. Remove 100 bodies, Non Overlapping
  6. Remove 100 bodies, Overlapping

Repeat the above for each shape type, then mixed
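The two placement regimes for those suites could be generated like this (a sketch; the grid spacing and cluster size are arbitrary values whose only job is to guarantee the overlap property):

```typescript
// Non-overlapping vs overlapping placements for the benchmark cases above.

interface P { x: number; y: number; }

// Non-overlapping: a grid whose spacing exceeds twice the body radius.
function gridPlacement(count: number, spacing: number): P[] {
  const side = Math.ceil(Math.sqrt(count));
  return Array.from({ length: count }, (_, i) => ({
    x: (i % side) * spacing,
    y: Math.floor(i / side) * spacing,
  }));
}

// Overlapping: everything crammed into one small cluster.
function clusterPlacement(count: number, size: number): P[] {
  return Array.from({ length: count }, () => ({
    x: Math.random() * size,
    y: Math.random() * size,
  }));
}

// With bodies of radius 10, spacing 30 guarantees no overlaps,
// while a 5x5 cluster guarantees every pair overlaps.
const apart = gridPlacement(100, 30);
const packed = clusterPlacement(100, 5);
```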

  • What are the expectations for the ray casts?
  • What is the upper bound for the system and how many entities it should be supporting collisions with?
  • Are you open to leveraging something like tinybench to help offload the processing/statistical side of this?

I think I have some time tomorrow and may be able to get you a PR to help set the direction for this, assuming my comments above align with the vision for the library. I have sync'd up my fork and will try to get you a PR later in the afternoon, assuming time permits.
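For reference, the processing/statistical side that tinybench would offload amounts to roughly this (a hand-rolled sketch for illustration, not tinybench's actual internals):

```typescript
// Summary stats a harness like tinybench derives from raw per-iteration
// timings (in milliseconds).

interface Stats { mean: number; stdDev: number; opsPerSec: number; }

function summarize(samplesMs: number[]): Stats {
  const n = samplesMs.length;
  const mean = samplesMs.reduce((a, b) => a + b, 0) / n;
  const variance = samplesMs.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  return { mean, stdDev: Math.sqrt(variance), opsPerSec: 1000 / mean };
}

const stats = summarize([2, 2, 2, 2]); // 2 ms per op -> 500 ops/sec
```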

@Prozi
Owner

Prozi commented Nov 15, 2022

> What are the expectations for the ray casts?

TBH the goal was that they should just work, on all types of bodies, which can be seen in the tank demo https://prozi.github.io/detect-collisions/demo/

> What is the upper bound for the system and how many entities it should be supporting collisions with?

I would say, based on the benchmark, roughly 1500 constantly moving bodies while keeping 60 fps updates (i.e. a full update + collision pass in under ~16.7 ms per frame)

> Are you open to leveraging something like tinybench to help offload the processing/statistical side of this?

yes

> I think I have some time tomorrow and may be able to get you a PR to help set the direction for this, assuming my comments above align with the vision for the library. I have sync'd up my fork and will try to get you a PR later in the afternoon, assuming time permits.

I would love a merge request with such changes

@BobGneu
Contributor Author

BobGneu commented Nov 17, 2022

Alrighty!

I am going to take a stab at it in the morrow. Started nosing around already and I think my schedule this week is open enough to get this moving.

@BobGneu
Contributor Author

BobGneu commented Nov 22, 2022

Now that we have the baseline in here, is there a segment you would like to focus our efforts on?

@Prozi
Owner

Prozi commented Nov 22, 2022

> Now that we have the baseline in here, is there a segment you would like to focus our efforts on?

hello

I think the speed of testing for

  • collision between convex and non-convex polygons (we are already testing convex by circle, triangle, and box I believe; we could reuse the non-convex polygon from the tests - the tank demo - or use another one)
  • static and non-static bodies - does adding X static bodies influence collision detection, and how does inserting another X? (creating a body with the isStatic flag option, or setting it later, will make an object static)
  • using zero and non-zero paddings - how does it influence the speed when using N bodies
  • updating bodies - this is a very important one in my view; the slowest part of the system is reinsertion into the rbush tree (the second slowest, I believe, is SAT collision checking - but I don't think we can go much faster there). The reinsertion happens when an object is updateBody'd and its new bbox has moved outside its bbox-with-padding

those are the ones off the top of my head that could use a proper benchmark
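The padding/reinsertion condition described in the last bullet can be illustrated like this (names and shapes are hypothetical sketches of the idea, not the library's actual internals):

```typescript
// An update only triggers an rbush reinsertion when the body's new bbox
// escapes the previously stored bbox-with-padding.

interface BBox { minX: number; minY: number; maxX: number; maxY: number; }

function pad(b: BBox, padding: number): BBox {
  return { minX: b.minX - padding, minY: b.minY - padding,
           maxX: b.maxX + padding, maxY: b.maxY + padding };
}

function needsReinsert(newBBox: BBox, storedPadded: BBox): boolean {
  return newBBox.minX < storedPadded.minX || newBBox.minY < storedPadded.minY ||
         newBBox.maxX > storedPadded.maxX || newBBox.maxY > storedPadded.maxY;
}

// A body that jiggles within its padding stays in the tree; a larger padding
// trades a fatter bbox (more broad-phase candidates) for fewer reinsertions.
const stored = pad({ minX: 0, minY: 0, maxX: 10, maxY: 10 }, 5);
const small = needsReinsert({ minX: 2, minY: 2, maxX: 12, maxY: 12 }, stored); // false
const big = needsReinsert({ minX: 8, minY: 8, maxX: 18, maxY: 18 }, stored);   // true
```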

@Prozi Prozi added the enhancement New feature or request label Nov 22, 2022
@Prozi
Owner

Prozi commented Nov 22, 2022

also thinking of:

  1. moving from circleci to github workflows altogether
  2. merging the stress test benchmark (npm run benchmark) into the workflow (separate workflow? benchmark workflow?)
  3. running tests in separate job as a github workflow

what are your opinions?
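A separate benchmark workflow (point 2) could be sketched roughly like this (a hypothetical file - only `npm run benchmark` is taken from the repo; the rest is an assumption):

```yaml
# .github/workflows/benchmark.yml - illustrative sketch, not the repo's actual config
name: benchmark
on: [push, pull_request]
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run benchmark
```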

@Prozi
Owner

Prozi commented Nov 22, 2022

maybe divide the benchmarks into long-running and fast-running, and put them on different workflows/pipelines

and add the fast-running ones to a pre-commit hook

??

those insertion benchmarks and the collision ones are the fast ones, I guess, and the stress benchmark (which can be improved) is long-running because it has 10x 1000 ms scenarios (we can make them shorter, but even then it will be 10 x N ms, and FPS measurements for ms < 1000 are not precise)

@Prozi Prozi changed the title Benchmarking & Performance Goals feature-request: Extend Benchmarking Jan 30, 2024
@bfelbo

bfelbo commented Apr 10, 2024

This library looks awesome. Appreciate the streamlined focus on doing one thing and doing it well.

Have you done any benchmark comparisons between this library and more general physics libraries like Rapier, Jolt, matter.js, Planck.js?

@Prozi
Owner

Prozi commented May 31, 2024

> This library looks awesome. Appreciate the streamlined focus on doing one thing and doing it well.
>
> Have you done any benchmark comparisons between this library and more general physics libraries like Rapier, Jolt, matter.js, Planck.js?

no, but basically, if they don't use WebAssembly, I doubt something that implements THE SAME THING + physics would be any faster than my library

@Prozi
Owner

Prozi commented May 31, 2024

also I have more features than some because of:

  • padding
  • offset
  • rotation
  • scale
  • polygons
  • raycasting
  • concave and convex polygons both

@bfelbo thanks

@Prozi
Owner

Prozi commented Jul 17, 2024

hey wassup @BobGneu, long time no writing together

do you still like/use the lib?

should we add some benchmarks or close this issue - what do you suggest?

did you see the GitHub Actions/CircleCI benchmarks?

have any remarks or opinions? thanks!!!

@Prozi
Owner

Prozi commented Jul 17, 2024

> This library looks awesome. Appreciate the streamlined focus on doing one thing and doing it well.
>
> Have you done any benchmark comparisons between this library and more general physics libraries like Rapier, Jolt, matter.js, Planck.js?

@bfelbo would you like to open a merge request with an example of such a benchmark?

@bfelbo

bfelbo commented Jul 18, 2024

I wouldn't be knowledgeable enough to do this properly, but I appreciate you still thinking about benchmarking :)
