
Basic reporting #53

Merged: 7 commits into main from dlockhart/basic-reporting, Jul 6, 2023

Conversation

dlockhart (Member):

First pass at generating a simple visual-diff HTML report.

Going to be adding a lot more after this, including: filtering, iterating, remembering settings, nicer UI & styles, ability to see all tests for a file, different tabs for the different browsers, and so on.

.gitignore: outdated review comment (resolved)
dlockhart (Member, Author):

This component renders out the results for a particular browser for a particular test. They can be stacked or iterated or whatever we want later:

[Screenshot, 2023-07-04: the rendered result component]

```diff
@@ -114,6 +115,21 @@ function extractTestPartsFromName(name) {
 	};
 }
```

dlockhart (Member, Author):

This is the code that's saving additional information about each test so that the reporter can ask for it later. As you can see, because there are so many return paths, having to set it from 4 different places isn't ideal. Hopefully some future refactoring can resolve this!
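As a rough sketch of what that refactor might look like (names here are hypothetical, not the PR's actual code), the bookkeeping could be funneled through a single shared store that every return path writes to and the reporter reads from:

```js
// Hypothetical helper: one place to record a test's golden/new/diff info
// so the reporter can look it up later, instead of assembling the object
// separately at each of the 4 return paths.
const testInfo = new Map();

export function recordTestInfo(name, { golden, newImage, diff }) {
	testInfo.set(name, { golden, new: newImage, diff });
}

export function getTestInfo(name) {
	return testInfo.get(name);
}
```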


```js
}

function flattenResults(session, browserData, fileData) {
```
dlockhart (Member, Author):

Test results are nested by "suite", which we don't really care about in the report. This flattens them all (putting " > " in between suites) and converts things into the format that we want.
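A minimal sketch of that flattening idea (illustrative only; the real function also merges in the browser and file data):

```js
// Walk nested suites depth-first, prefixing each test's name with the
// chain of suite names joined by " > ".
function flattenSuite(suite, prefix = '') {
	const tests = (suite.tests ?? []).map(
		(test) => ({ ...test, name: `${prefix}${test.name}` })
	);
	for (const child of suite.suites ?? []) {
		tests.push(...flattenSuite(child, `${prefix}${child.name} > `));
	}
	return tests;
}
```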

```js
// Maps are serialized as arrays of their values, sorted by name, so the
// report data is stable across runs.
const json = JSON.stringify(data, (_key, val) => {
	if (val instanceof Map) return [...val.values()].sort((a, b) => a.name.localeCompare(b.name));
	return val;
}, '\t');
```
dlockhart (Member, Author):

The resulting data looks like this:

"files": [
	{
		"name": "test/browser/element.vdiff.js",
		"numFailed": 15,
		"tests": [
			{
				"name": "element-different > default",
				"numFailed": 3,
				"results": [
					{
						"name": "Chromium",
						"duration": 81,
						"error": "Image does not match golden. 5061 pixels are different.",
						"passed": false,
						"info": {
							"golden": {
								"path": "test/browser/element-different/golden/chromium/default.png",
								"height": 164,
								"width": 644
							},
							"new": {
								"path": "test/browser/element-different/fail/chromium/default.png",
								"height": 164,
								"width": 644
							},
							"diff": "test/browser/element-different/fail/chromium/default-diff.png"
						}
					}
				]
			}
		]
	}
]
"browsers": [
	{
		"name": "Chromium",
		"numFailed": 11,
		"version": "115"
	},
	{
		"name": "Firefox",
		"numFailed": 11,
		"version": "113"
	},
	{
		"name": "Webkit",
		"numFailed": 11,
		"version": "16"
	}
]

dlockhart marked this pull request as ready for review July 4, 2023 17:02
dlockhart requested a review from a team as a code owner July 4, 2023 17:02
```diff
@@ -29,8 +30,7 @@ async function clearDir(updateGoldens, path) {
 	} else {
 		await Promise.all([
 			rm(join(path, PATHS.FAIL), { force: true, recursive: true }),
-			rm(join(path, PATHS.PASS), { force: true, recursive: true }),
-			rm(join(path, 'report.html'), { force: true })
+			rm(join(path, PATHS.PASS), { force: true, recursive: true })
```
dlockhart (Member, Author) commented Jul 4, 2023:

The report stuff is going to handle its own cleanup and will no longer sprinkle report.html files everywhere anyway.

```js
const inputDir = join(__dirname, 'report');
const reportDir = join(rootDir, PATHS.VDIFF_ROOT, 'report');
const tempDir = join(reportDir, 'temp');
```

dlockhart (Member, Author):

I didn't like data.json being stored in source control, so I've re-jigged this to create .vdiff/report/temp, where everything gets copied; Rollup runs from there, dumps its output into .vdiff/report/*, and then the temp directory is removed.
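Roughly, that flow looks like this (a sketch using Node's fs promises API; fs.cp needs Node 16.7+, and the Rollup step is elided):

```js
import { cp, mkdir, rm } from 'node:fs/promises';

// 1. Start from a clean temp directory inside .vdiff/report
await rm(tempDir, { force: true, recursive: true });
await mkdir(tempDir, { recursive: true });

// 2. Copy the report sources (plus the generated data.json) into it
await cp(inputDir, tempDir, { recursive: true });

// 3. ...run Rollup with tempDir as input and reportDir as output...

// 4. Remove the temp directory so nothing lands in source control
await rm(tempDir, { force: true, recursive: true });
```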

```diff
@@ -6,6 +6,7 @@
 	"repository": "https://github.com/BrightspaceUI/testing.git",
 	"scripts": {
 		"lint": "eslint . --ext .js",
+		"start": "web-dev-server --root-dir ./.vdiff --open ./report/",
```
svanherk (Contributor):

When we wrap the runner, can we expose something like this so it doesn't need to be added to every consumer's package.json? Or maybe we run this at the end of a test run 🤔?

dlockhart (Member, Author):

I was thinking the same thing but forgot to mention it -- yes! We either do it automatically for visual-diff runs that have more than 1 failure (but not in CI), or add a --open-report flag or something (or both).

```html
<style>
	@import url('https://fonts.googleapis.com/css2?family=Roboto:wght@400;700&display=swap');
	body {
		font-family: 'Roboto', sans-serif;
```
svanherk (Contributor):

Haha still using a google font but just not Lato? 😛

dlockhart (Member, Author):

Haha, didn't want to think about styling anything at this point. We can do Lato, although since we can't actually use any Daylight things from core we're not tied to it.

svanherk (Contributor) left a review:

Sorry for the delay in looking at this! Overall the setup looks good to me; I didn't worry too much about the details since it's still in flux and we want to do some refactoring to tidy stuff up. Raised a couple of concerns below.

Comment on lines +109 to +110
<option value="oneUpOriginal" ?selected="${this._mode === 'oneUpOriginal'}">One-up (original)</option>
<option value="oneUpNew" ?selected="${this._mode === 'oneUpNew'}">One-up (new)</option>
svanherk (Contributor):

Not sure I really understand this terminology, I assume this is what Percy uses?

```css
	max-width: 100%;
}
.diff-container {
	background: repeating-conic-gradient(#cccccc 0% 25%, #ffffff 0% 50%) 50% / 20px 20px;
```
svanherk (Contributor):

This may be changing, but one thing I noticed poking around was that the images scale to fit the width of the whole window, making it really hard to tell what their actual size was. This background just fills what's left, so something like this makes it look like the screenshots needed to be resized when they didn't:

[screenshot]

I'm assuming we'll eventually size the images to the exact size they are?

dlockhart (Member, Author):

Yeah agreed, and I already have this fixed in the branch I'm working in to add filtering.

I think the behaviour we want is for them to scale to fit the width of the window, but grow no larger than the actual size of the images. That way if you have a small viewport (or a huge image), it'll zoom out to fit everything. I think that works well most of the time, but if it ends up being a problem in certain scenarios we could always have a "show actual size" button or something.
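That "shrink to fit, never upscale" behaviour maps to a couple of lines of CSS. A sketch in Lit (assuming the report components are Lit-based, as the `?selected` bindings above suggest; not the final styles):

```js
import { css } from 'lit';

// max-width lets the image shrink to fit its container, but an <img>
// never renders larger than its natural size unless a width is set,
// so small screenshots stay at actual size.
const screenshotStyles = css`
	img {
		height: auto;
		max-width: 100%;
	}
`;
```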

svanherk (Contributor):

Yeah that sounds good, it's kinda hard to tell in this state with these contrived tests. I suspect we'll still be tweaking the reports as we go through and convert core, to see what pieces are actually useful and what's missing, like you did with all the helper commands.

src/server/visual-diff-reporter.js: review comment (resolved)
```js
start({ config }) {
	rootDir = config.rootDir;
},
stop({ sessions }) {
```
svanherk (Contributor):

Does this work with --watch? I think I expected this in onTestRunFinished or reportTestFileResults, but looking at those again they seem to run for each test file, which is no good 🤔. But I think ideally in watch mode you want a new report after each run.

dlockhart (Member, Author):

Ok so I tried this out, and stop is not called at all in watch mode. Which kind of jibes with their documentation for what it's supposed to do:

> Called once when the test runner stops. This can be used to write a test report to disk for regular test runs.

They call out "regular test runs", which I think implies "not watch runs". I think I'm OK with not writing a report when watching, since you can focus on particular files, go into manual mode, and all kinds of other stuff.

svanherk (Contributor):

Yeah, when I was dev-ing in watch mode before reports were a thing, I didn't really miss them, because you're filtered down to just the tests you're working on and trying to quickly iterate and check out the resized or diff PNGs.

But looking at the docs again, onTestRunFinished might give us what we'd need, it does have sessions. Does it run only in watch mode and not in regular mode, or would putting the same functionality there end up running twice in regular mode?

dlockhart (Member, Author):

onTestRunFinished runs in both, and in watch mode it re-runs each time you change a file or focus on a particular file. I actually originally had all this in there, but moved it up into stop to avoid having to think about watch mode and reports with partial sessions. I think if it's something we want to support, it's definitely possible.
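For reference, a sketch of the reporter shape being discussed (hooks per the @web/test-runner reporter API; writeReport is a hypothetical helper, not the PR's actual code):

```js
// Hypothetical helper that builds and writes the HTML report.
function writeReport(rootDir, sessions) { /* ... */ }

export function visualDiffReporter() {
	let rootDir;
	return {
		start({ config }) {
			rootDir = config.rootDir;
		},
		// Runs after every test run: once in regular mode, and again on each
		// re-run in watch mode (potentially with partial session lists).
		onTestRunFinished({ sessions }) {
			// writing the report here would be needed to support watch mode
		},
		// Only called for regular runs; watch mode never reaches stop().
		stop({ sessions }) {
			writeReport(rootDir, sessions);
		}
	};
}
```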

svanherk (Contributor):

Right, onTestRunFinished would only be passed the session(s) from the last run, but there's data for all the sessions that have been running, and you'd probably still want to create a full report to be accurate. Yeah, I don't mind --watch clearing the report and not creating a new one, and worrying about supporting it later if we find we want it.

dlockhart merged commit ff072d0 into main Jul 6, 2023
1 check passed
dlockhart deleted the dlockhart/basic-reporting branch July 6, 2023 21:43
ghost commented Jul 6, 2023:

🎉 This PR is included in version 0.14.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

ghost added the released label Jul 6, 2023