Async & Performance Chapter 6 "Repetition" section #1283
Comments
I've learned some stuff as a result of you asking this question, so thanks!
It means applying the same technique everywhere, assuming that in every case the net effect at that larger scale will follow directly from the narrow effect observed in the single benchmark.
It certainly can happen, but it's statistically less likely if the test is repeated across a longer span of time. Even those fluctuations get averaged out a bit. And if that fixed amount of time is chosen to establish a statistical likelihood at some confidence level, then you know (mathematically) that the result is more reliable to trust.
The point being made is that simply increasing the number of iterations isn't the most mathematically (statistically) sound way to increase the confidence of a result. Rather, the amount of time the test runs for is the better variable. You should understand that I'm merely recounting perspectives from others' work on this topic, not trying to precisely and mathematically lay out a case. Their authority, and rigorous work, is the authority here, not my (in)ability to prove the case in my writing.
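To make the contrast concrete for other readers, here's a rough sketch of the two approaches being compared; the function names and shape are my own illustration, not code from the book or from Benchmark.js:

```js
// Illustrative sketch only -- not the book's code and not Benchmark.js internals.

// Pattern A: run a fixed number of iterations and measure total elapsed time.
function benchFixedIterations(fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  var elapsed = Date.now() - start;
  return elapsed / iterations; // average ms per call
}

// Pattern B: keep repeating until a minimum amount of time has elapsed,
// then divide by however many iterations actually fit in that window.
function benchFixedTime(fn, minTimeMs) {
  var count = 0;
  var start = Date.now();
  var elapsed;
  do {
    fn();
    count++;
    elapsed = Date.now() - start;
  } while (elapsed < minTimeMs);
  return elapsed / count; // average ms per call
}
```

With the second pattern, lengthening `minTimeMs` both averages out transient fluctuations and shrinks the timer's resolution relative to the measured window, which is the statistical argument above.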
Many of the observations in my text about the math/statistics behind these techniques come from the work @jdalton (and others) did on Benchmark.js. I can't quite find/cite the information I used in my writing -- some blog post somewhere, I'm sure -- but their source code itself has a number of such citations behind the math/statistics. There are several such comments in there. Hopefully those will help if you want to explore the math deeper.
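For anyone who wants to see it in practice, a minimal Benchmark.js suite looks roughly like this (this mirrors the example from the library's README; it assumes Node.js with `npm install benchmark`):

```js
var Benchmark = require('benchmark');

var suite = new Benchmark.Suite();

suite
  .add('String#indexOf', function () {
    'Hello World!'.indexOf('o') > -1;
  })
  .add('RegExp#test', function () {
    /o/.test('Hello World!');
  })
  // each "cycle" reports the mean time plus a relative margin of error
  .on('cycle', function (event) {
    console.log(String(event.target));
  })
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run({ async: true });
```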
I couldn't agree more about an edit to that specific paragraph (or section), both vocabulary edits and perhaps some clarifications.
Yes, I promise I've read the Contributions Guidelines (please feel free to remove this line).
I've been having a lot of trouble with three paragraphs in this section; IMO, the whole section is too dense and too short.
This paragraph:
Is the use of the word "credulity" here a typo? "Credulity" is a synonym of "naivety" and "gullibility"; I think the word is supposed to be "credibility"?
What does "apply that conclusion repeatedly" really mean? Does it mean "when you increase the number of iterations, the skew will increase proportionally"?
The next paragraph:
A) Something could have intervened with the engine or system during that specific test.
B) The engine could have found a way to optimize your isolated case.
C) Perhaps your timer was not precise enough, therefore you got an inaccurate result.
Using the second pattern, I believe problem A is still there: something could intervene with the engine, causing an iteration of the test to take longer and therefore affecting the end result. I also can't see why the engine couldn't find a way to optimize your tests with this pattern if it could do so with the first pattern; how different is it, really?
The second pattern could solve problem C, but only if the time to repeat across is a multiple of the timer's precision (e.g. 30ms for a timer with 15ms precision). Otherwise you would have to increase the time to repeat across to improve accuracy, and couldn't that also be achieved by increasing the number of iterations in pattern A?
And then the next paragraph:
I wonder what the math behind this part is:
> A 15ms timer is pretty bad for accurate benchmarking; to minimize its uncertainty (aka "error rate") to less than 1%, you need to run each cycle of test iterations for 750ms. A 1ms timer only needs a cycle to run for 50ms to get the same confidence.
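Here's my guess at the arithmetic behind those two numbers, based on how I read Benchmark.js's minimum-cycle-time logic, so please correct me if I'm wrong: a timer with resolution `res` can be off by up to about ±`res / 2`, so keeping that uncertainty under 1% of what you measure means each cycle has to run for at least `(res / 2) / 0.01`:

```js
// Assumed derivation (my reading, not an official formula from the book):
// worst-case timer error is ~half its resolution, so to keep the relative
// error below `maxError`, the measured cycle must last at least
// (resolution / 2) / maxError.
function minCycleTime(resolutionMs, maxError) {
  return (resolutionMs / 2) / maxError;
}

console.log(minCycleTime(15, 0.01)); // 750 -- the 750ms figure for a 15ms timer
console.log(minCycleTime(1, 0.01));  // 50  -- the 50ms figure for a 1ms timer
```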
My brain is roaming in uncharted territory, so please excuse me if I'm saying things that don't make any sense, but trust me, I have tried to comprehend this section for quite a while and this is my best so far, and it is all purely theoretical. I understand the basic message of this part of the book: "Use a library to benchmark your JavaScript code; it isn't as easy as it looks."