Suggestions for how to work this into a dev workflow? #6

Open
scottjehl opened this issue May 22, 2014 · 1 comment

@scottjehl (Collaborator) commented May 22, 2014

Hey Tim,

So I got this running on a project that typically scores a Speed Index anywhere from 500 to 1100 (with Dulles Chrome selected in the WPT web interface).

Running it via this task seems to produce widely varying results, and so far, I've yet to reproduce a speed index lower than 1100, after 5 tries (again, I'm using 'Dulles:Chrome' for the location option). I'm not sure if this is a bug or a network fluctuation issue. Could be nothing at all. Figured I'd note it.
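
For reference, here's roughly what my Gruntfile config looks like. This is just a sketch: the URL and API key are placeholders, and I'm assuming the option names (`url`, `key`, `location`, `budget.SpeedIndex`) match what the task actually expects.

```js
// Gruntfile.js (sketch): option names are assumed, values are placeholders
module.exports = function (grunt) {
  grunt.initConfig({
    perfbudget: {
      staging: {
        options: {
          url: 'http://staging.example.com/', // placeholder: the page under test
          key: 'WPT_API_KEY',                 // placeholder: WebPagetest API key
          location: 'Dulles:Chrome',          // same location I use in the WPT web interface
          budget: {
            SpeedIndex: '1000'                // our current Speed Index budget
          }
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-perfbudget');
};
```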

More broadly, with this test frequently failing on an unchanged codebase that sometimes passes our budget (say, a 1000 Speed Index) and often doesn't, I wonder how to include it in our workflow. Running it post-commit seems like an option, maybe with some help from Jenkins to report back the most recent runs (à la https://github.com/scottjehl/picturefill#picturefill)?

Maybe having something like this in our readme.md would be a good way to work this in? "Last time the performance budget was checked, this project was over budget. Details | Run again"

Anyway, we're curious how you envision including this in your workflow.

Great work, once again. This is awesome.

@tkadlec (Owner) commented May 28, 2014

Mr. Scott!

I haven't noticed the varying results from the web interface, but I'll run a few more tests to make sure nothing is amiss.

So workflow—it depends. :)

In its harshest form, I envision it as pass/fail: if you don't meet the budget, the site/app doesn't get deployed. Using the public instance of WPT, that would mean you've got a staging environment somewhere that you're testing against before deploying to wherever you need to. With a private instance, you would have a little more flexibility.

In that scenario, I think the best bet is to up the number of runs to help reduce the chance of wildly off-base results. The task takes longer, but I think the improved accuracy justifies it.
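
For example, something like this in the perfbudget config (a sketch that drops into grunt.initConfig; I'm assuming a `runs` option is passed through to WPT, and that WPT reports the median run):

```js
// Sketch: bump the run count to smooth out one-off variance. The `runs` option is
// assumed here; more runs means a slower task but fewer off-base failures.
perfbudget: {
  staging: {
    options: {
      url: 'http://staging.example.com/', // placeholder
      key: 'WPT_API_KEY',                 // placeholder
      location: 'Dulles:Chrome',
      runs: 5,                            // median of 5 runs instead of a single run
      budget: {
        SpeedIndex: '1000'
      }
    }
  }
}
```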

Pairing it with CI could also work. Of course, reporting results (as in your example) means the enforcement is a little less strict, but I think it still holds people accountable enough.
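
Either way, the wiring is simple. A rough sketch, assuming the CI server just runs a Grunt target and treats a nonzero exit code as a failed build (the `ci` task name here is made up):

```js
// Sketch: give the CI job a single target to run. Grunt exits with a nonzero code
// when a task fails, which is what Jenkins and friends use to mark the build broken.
grunt.registerTask('ci', ['perfbudget']);
```

The CI job then just runs `grunt ci` against the staging URL after each commit and reports the result back.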

Sorry I'm not more definitive here. A) I'm still experimenting with it on a few projects to see how to maximize the value we get out of it (yay for eating your own dogfood!) and B) I think the right answer is "what works best for your team/project/situation".

That being said, I totally agree—I should write up a few examples in readme.md to give folks a few ideas for how to incorporate this into their workflow.
