Basic reporting #53
Conversation
Co-authored-by: Dave Lockhart <[email protected]>
5d90a5b to b93a9ef
```diff
@@ -114,6 +115,21 @@ function extractTestPartsFromName(name) {
 	};
 }
```
This is the code that saves additional information about each test so that the reporter can ask for it later. As you can see, because there are so many return paths, having to set it from four different places isn't ideal. Hopefully some future refactoring can resolve this!
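A hedged sketch of the refactor hinted at here. The names (`testInfoMap`, `saveInfo`, `compare`) are illustrative, not the actual plugin code: the idea is to funnel every return path through one helper that records the per-test info, instead of setting it in four places.

```javascript
// Illustrative only: a single helper records per-test info for the reporter.
const testInfoMap = new Map();

function saveInfo(name, info) {
	testInfoMap.set(name, info); // the one place reporter data is recorded
	return info;
}

// Each return path calls saveInfo instead of duplicating the bookkeeping.
function compare(name, golden, actual) {
	if (!golden) return saveInfo(name, { passed: false, reason: 'no-golden' });
	if (golden.width !== actual.width || golden.height !== actual.height) {
		return saveInfo(name, { passed: false, reason: 'size-mismatch' });
	}
	return saveInfo(name, { passed: true });
}
```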
```js
}

function flattenResults(session, browserData, fileData) {
```
Test results are nested by "suite", which we don't really care about in the report. This flattens them all (putting " > " in between suites) and converts things into the format that we want.
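A minimal sketch of the flattening described here (the names and shapes are illustrative, not the actual implementation): walk the nested suites, joining suite names with " > " to build a flat test name.

```javascript
// Illustrative flattening: suite nesting collapses into "parent > child" names.
function flattenSuites(suite, prefix = '') {
	const tests = [];
	for (const test of suite.tests ?? []) {
		tests.push({ name: `${prefix}${test.name}`, passed: test.passed });
	}
	for (const child of suite.suites ?? []) {
		tests.push(...flattenSuites(child, `${prefix}${child.name} > `));
	}
	return tests;
}

const sample = {
	suites: [{
		name: 'element-different',
		tests: [{ name: 'default', passed: false }]
	}]
};
flattenSuites(sample); // → [{ name: 'element-different > default', passed: false }]
```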
```js
const json = JSON.stringify(data, (_key, val) => {
	if (val instanceof Map) return [...val.values()].sort((a, b) => a.name.localeCompare(b.name));
	return val;
}, '\t');
```
The resulting data looks like this:
```json
{
	"files": [
		{
			"name": "test/browser/element.vdiff.js",
			"numFailed": 15,
			"tests": [
				{
					"name": "element-different > default",
					"numFailed": 3,
					"results": [
						{
							"name": "Chromium",
							"duration": 81,
							"error": "Image does not match golden. 5061 pixels are different.",
							"passed": false,
							"info": {
								"golden": {
									"path": "test/browser/element-different/golden/chromium/default.png",
									"height": 164,
									"width": 644
								},
								"new": {
									"path": "test/browser/element-different/fail/chromium/default.png",
									"height": 164,
									"width": 644
								},
								"diff": "test/browser/element-different/fail/chromium/default-diff.png"
							}
						}
					]
				}
			]
		}
	],
	"browsers": [
		{
			"name": "Chromium",
			"numFailed": 11,
			"version": "115"
		},
		{
			"name": "Firefox",
			"numFailed": 11,
			"version": "113"
		},
		{
			"name": "Webkit",
			"numFailed": 11,
			"version": "16"
		}
	]
}
```
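A small illustration of how the replacer above produces that shape (the sample data here is made up): any `Map` in the data is serialized as an array of its values, sorted by name, which is how nested maps come out as plain JSON arrays.

```javascript
// Made-up sample data: a Map keyed by file name, as the reporter might hold it.
const data = {
	files: new Map([
		['b.vdiff.js', { name: 'b.vdiff.js', numFailed: 1 }],
		['a.vdiff.js', { name: 'a.vdiff.js', numFailed: 0 }]
	])
};

// Same replacer as above: Maps become name-sorted arrays of their values.
const json = JSON.stringify(data, (_key, val) => {
	if (val instanceof Map) return [...val.values()].sort((a, b) => a.name.localeCompare(b.name));
	return val;
}, '\t');
// JSON.parse(json).files is now a plain array, sorted by name.
```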
```diff
@@ -29,8 +30,7 @@ async function clearDir(updateGoldens, path) {
 	} else {
 		await Promise.all([
 			rm(join(path, PATHS.FAIL), { force: true, recursive: true }),
-			rm(join(path, PATHS.PASS), { force: true, recursive: true }),
-			rm(join(path, 'report.html'), { force: true })
+			rm(join(path, PATHS.PASS), { force: true, recursive: true })
```
The report stuff is going to handle its own cleanup and will no longer sprinkle `report.html` files everywhere anyway.
```js
const inputDir = join(__dirname, 'report');
const reportDir = join(rootDir, PATHS.VDIFF_ROOT, 'report');
const tempDir = join(reportDir, 'temp');
```
I didn't like `data.json` being stored in source control, so I've re-jigged this to create `.vdiff/report/temp`, where everything gets copied to; Rollup runs from there and dumps stuff into `.vdiff/report/*`, and then the `temp` gets removed.
```diff
@@ -6,6 +6,7 @@
 	"repository": "https://github.com/BrightspaceUI/testing.git",
 	"scripts": {
 		"lint": "eslint . --ext .js",
+		"start": "web-dev-server --root-dir ./.vdiff --open ./report/",
```
When we wrap the runner, can we expose something like this so it doesn't need to be added to every `package.json` for the consumers? Or maybe we run this at the end of a test run 🤔?
I was thinking the same thing but forgot to mention it -- yes! We either do it automatically for visual-diff runs that have more than 1 failure (but not in CI), or add an `--open-report` flag or something (or both).
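A sketch of that decision; the `--open-report` flag name and the function are hypothetical here, since neither exists yet. Open the report after a failing local run, or whenever the flag is passed, but never in CI.

```javascript
// Hypothetical helper: decide whether to auto-open the HTML report.
function shouldOpenReport({ numFailed, argv, env }) {
	if (env.CI) return false;                        // never pop a browser in CI
	if (argv.includes('--open-report')) return true; // explicit opt-in always wins
	return numFailed > 0;                            // auto-open on local failures
}
```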
```html
<style>
	@import url('https://fonts.googleapis.com/css2?family=Roboto:wght@400;700&display=swap');
	body {
		font-family: 'Roboto', sans-serif;
```
Haha still using a google font but just not Lato? 😛
Haha, didn't want to think about styling anything at this point. We can do Lato, although since we can't actually use any Daylight things from core, we're not tied to it.
Sorry for the delay in looking at this! Overall the setup looks good to me; I didn't worry too much about the details since it's still in flux and we want to do some refactoring to tidy stuff up. Raised a couple of concerns.
```html
<option value="oneUpOriginal" ?selected="${this._mode === 'oneUpOriginal'}">One-up (original)</option>
<option value="oneUpNew" ?selected="${this._mode === 'oneUpNew'}">One-up (new)</option>
```
Not sure I really understand this terminology; I assume this is what Percy uses?
```css
	max-width: 100%;
}
.diff-container {
	background: repeating-conic-gradient(#cccccc 0% 25%, #ffffff 0% 50%) 50% / 20px 20px;
```
This may be changing, but one thing I noticed poking around was that the images are scaling to fit the width of the whole window, making it really hard to tell what the size was. This background just fills what's left, so it makes it look like the screenshots needed to be resized when they didn't.
I'm assuming we'll eventually size the images to the exact size they are?
Yeah agreed, and I already have this fixed in the branch I'm working in to add filtering.
I think the behaviour we want is for them to scale to fit the width of the window, but grow no larger than the actual size of the images. That way if you have a small viewport (or a huge image), it'll zoom out to fit everything. I think that works well most of the time, but if it ends up being a problem in certain scenarios we could always have a "show actual size" button or something.
Yeah that sounds good, it's kinda hard to tell in this state with these contrived tests. I suspect we'll still be tweaking the reports as we go through and convert `core`, to see what pieces are actually useful and what's missing, like you did with all the helper commands.
```js
start({ config }) {
	rootDir = config.rootDir;
},
stop({ sessions }) {
```
Does this work with `--watch`? I think I expected this in `onTestRunFinished` or `reportTestFileResults`, but looking at those again they seem to run for each test file, which is no good 🤔. But I think ideally in `watch` mode you want a new report after each run.
Ok so I tried this out, and `stop` is not called at all in watch mode. Which kind of jives with their documentation for what it's supposed to do:

> Called once when the test runner stops. This can be used to write a test report to disk for regular test runs.

They call out "regular test runs", which I think implies "not watch runs". I think I'm OK with not writing a report when watching, since you can focus on particular files, go into manual mode, and all kinds of other stuff.
Yeah, when I was dev-ing in watch mode before reports were a thing, I didn't really miss them because you're filtered down to just the tests you're working on and trying to quickly iterate and check out the `resized` or `diff` PNGs.
But looking at the docs again, `onTestRunFinished` might give us what we'd need; it does have `sessions`. Does it run only in watch mode and not in regular mode, or would putting the same functionality there end up running twice in regular mode?
`onTestRunFinished` runs in both, and in watch mode it re-runs each time you change a file or focus on a particular file. I actually originally had all this in there, but moved it up into `stop` to avoid having to think about watch mode and reports with partial sessions. I think if it's something we want to support, it's definitely possible.
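A hedged sketch of the two hook choices discussed in this thread, following the @web/test-runner reporter shape: `stop` fires once at the end of a regular run (and not in watch mode), while `onTestRunFinished` fires after every run, including each watch-mode re-run. The `writeReport` callback is a placeholder for the actual report generation.

```javascript
// Illustrative reporter skeleton; writeReport is a placeholder callback.
function vdiffReporter({ writeReport }) {
	let rootDir;
	return {
		start({ config }) {
			rootDir = config.rootDir; // captured once when the runner starts
		},
		onTestRunFinished() {
			// Fires after every run; would repeat on each watch-mode change.
		},
		stop({ sessions }) {
			// Fires once for regular runs only: safe place to write the report.
			writeReport(rootDir, sessions);
		}
	};
}
```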
Right, you'd only have the last session that ran passed to `onTestRunFinished`, but data for all the sessions that have been running, and you'd probably want to create a full report still to be accurate. Yeah, I don't mind `--watch` clearing the report and not creating a new one, and worrying about supporting it later if we find we want it.
🎉 This PR is included in version 0.14.0 🎉

The release is available on:

Your semantic-release bot 📦🚀
First pass at generating a simple visual-diff HTML report.
Going to be adding a lot more after this, including: filtering, iterating, remembering settings, nicer UI & styles, ability to see all tests for a file, different tabs for the different browsers, and so on.