Hey Tim,

So I got this running on a project that typically scores a Speed Index anywhere from 500 to 1100 (with Dulles Chrome selected in the WPT web interface).
Running it via this task seems to produce widely varying results; after five tries, I've yet to reproduce a Speed Index lower than 1100 (again, I'm using 'Dulles:Chrome' for the location option). I'm not sure if this is a bug or just network fluctuation. Could be nothing at all. Figured I'd note it.
More broadly, since the test fails frequently on an unchanged codebase that sometimes passes our budget (say, a 1000 Speed Index) but often doesn't, I wonder how to include it in our workflow. Running it post-commit seems like an option, maybe with some help from Jenkins to report back the most recent runs (à la https://github.com/scottjehl/picturefill#picturefill)?
Maybe having something like this in our readme.md would be a good way to work this in? "Last time the performance budget was checked, this project was over budget. Details | Run again"
Anyway, we're curious how you envision incorporating this into your workflow.
Great work, once again. This is awesome.
I haven't noticed the varying results from the web interface, but I'll run a few more tests to make sure nothing is amiss.
So workflow—it depends. :)
In its harshest form, I envision it as pass/fail: don't meet the budget, and the site/app doesn't get deployed. Using the public instance of WPT, that would mean you've got a staging environment somewhere that you're testing against before deploying to wherever you need to. With a private instance, you'd have a little more flexibility.
In this use case, I think the best bet is to up the number of runs to help reduce the chance of wildly off-base results. That means the task takes longer, but the improved accuracy would justify it here.
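For example, here's a rough sketch of what that might look like in the Gruntfile. The url and key values are placeholders, and I'm assuming a `runs` option that gets passed straight through to WebPagetest; adjust the option names to whatever the task actually supports:

```js
// Gruntfile.js: a minimal sketch. The url/key values are placeholders,
// and the 'runs' option is assumed to be forwarded to WebPagetest.
module.exports = function (grunt) {
  grunt.initConfig({
    perfbudget: {
      staging: {
        options: {
          url: 'http://staging.example.com/', // hypothetical staging URL
          key: 'YOUR_WPT_API_KEY',            // public-instance API key
          location: 'Dulles:Chrome',
          runs: 5,                            // more runs to smooth out variance
          budget: {
            SpeedIndex: '1000'                // fail the task above this value
          }
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-perfbudget');
  grunt.registerTask('default', ['perfbudget']);
};
```

Then running `grunt` (or `grunt perfbudget`) against the staging environment before deploying gives you the pass/fail gate.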
Pairing with a CI could also work. Of course, reporting results (as in your example) means the enforcement is a little less strict, but I think it still holds people accountable enough.
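One rough way to wire that up, just as a sketch: have Jenkins run a grunt alias task that includes perfbudget, so an over-budget result exits non-zero and fails (or at least flags) the build. The `ci` task name below is hypothetical:

```js
// Hypothetical alias task for the CI build. If perfbudget comes back
// over budget, grunt exits non-zero and Jenkins marks the build
// accordingly, which gives you the reporting hook.
grunt.registerTask('ci', ['perfbudget']);
```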
Sorry I'm not more definitive here. A) I'm still experimenting with it on a few projects to see how to maximize the value we get out of it (yay for eating your own dogfood!) and B) I think the right answer is "what works best for your team/project/situation".
That being said, I totally agree—I should write up a few examples in readme.md to give folks a few ideas for how to incorporate this into their workflow.