Because GTFS data consumers and producers rely on the validator, it is important to know whether a pull request introduces a breaking change (i.e. the proposed validator declares existing valid datasets invalid). If this step is skipped, datasets that are newly declared invalid could be rejected by GTFS data consumers (e.g. Transit App, Google Maps). Public transit systems could then disappear from those interfaces, and riders would no longer be able to access the trip information they are used to getting on these platforms.
- The reference validator is defined as the latest version of the validator available on the `master` branch of this repository.
- The proposed validator is defined as the version of the validator that results from the changes introduced in the proposed pull request.
- The acceptance criteria (mentioned in the diagram below) are defined as the impact that a pull request has on datasets: does the pull request disrupt a large quantity of datasets? If yes, the pull request should be flagged as introducing breaking changes or rejected; if no, the pull request can be safely merged to the `master` branch.
For the latest version of every GTFS dataset from the MobilityDatabase, the validation reports from the proposed and the reference validator are compared. An acceptance test report is generated: for each agency/dataset, it quantifies the number of new errors (as defined here) that have been introduced.
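Conceptually, the comparison boils down to something like the following sketch. It assumes each validation report is a JSON file whose notices are listed under a top-level `notices` array with `code`, `severity` and `totalNotices` fields; the actual report schema and the real comparison logic (in the `output-comparator` module) may differ.

```python
# Sketch: find error codes that appear in the proposed validator's report but
# not in the reference report (and vice versa). The report schema assumed here
# ("notices" array with "code", "severity", "totalNotices") is an assumption.
import json

def error_codes(report_path: str) -> dict[str, int]:
    """Return a mapping of ERROR-severity notice codes to their counts."""
    with open(report_path, encoding="utf-8") as f:
        report = json.load(f)
    return {
        n["code"]: n["totalNotices"]
        for n in report.get("notices", [])
        if n.get("severity") == "ERROR"
    }

reference = error_codes("reference.json")  # report from the master-branch validator
latest = error_codes("latest.json")        # report from the proposed validator

new_errors = {code: count for code, count in latest.items() if code not in reference}
dropped_errors = {code: count for code, count in reference.items() if code not in latest}

print(f"new error codes: {new_errors}")
print(f"dropped error codes: {dropped_errors}")
```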
The logic for this process is defined in `acceptance_test.yml`.
This workflow:
- packages the `output-comparator` module;
- packages the proposed version of the validator;
- downloads the version of the reference validator that is on the `master` branch;
- defines a matrix of urls (fetched from the MobilityDatabase) that will be used in the further validation process.
On each of these urls:
- the reference version of the validator is executed and the validation report is output as JSON (under `reference.json`);
- the proposed version of the validator is executed and the validation report is output as JSON (under `latest.json`).
At the end of the execution of the two aforementioned steps for every url in the matrix, all the validation reports are gathered in a single folder (`output`) and compared: the percentage of newly invalid datasets is output to the console. The final acceptance test report is output at `acceptance_report.json`.
It includes a summary of both new error types and dropped error types. It also contains a list of "corrupted" sources: sources that could not be taken into account while generating the acceptance test report because of I/O errors or missing files.
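The corrupted-sources summary and the newly-invalid-dataset percentage could be derived along the lines of the sketch below. The function and variable names are hypothetical; the actual logic lives in the `output-comparator` module.

```python
# Sketch of how the "corruptedSources" block of acceptance_report.json could be
# built: the run is flagged when the share of corrupted sources exceeds the
# allowed maximum percentage.
def corrupted_sources_summary(
    corrupted: list[str], total_sources: int, max_percentage: float
) -> dict:
    """Build a summary shaped like the "corruptedSources" block shown below."""
    percentage = 100 * len(corrupted) / total_sources if total_sources else 0
    return {
        "corruptedSources": corrupted,
        "sourceIdCount": total_sources,
        "aboveThreshold": percentage > max_percentage,
        "corruptedSourcesCount": len(corrupted),
        "maxPercentageCorruptedSources": max_percentage,
    }

summary = corrupted_sources_summary(["source-id-1", "source-id-2"], 1245, 2)
print(summary["aboveThreshold"])  # False: 2/1245 ≈ 0.16% is below the 2% threshold
```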
Finally, a comment that sums up the acceptance test result is posted on the PR.
Sample outputs:
`acceptance_report.json`
```json
{
"newErrors": [
{
"noticeCode": "first_notice_code",
"affectedSourcesCount": 2,
"affectedSources": [
{
"sourceId": "source-id-1",
"sourceUrl": "url to the latest version of the dataset issued by source-id-1",
"noticeCount": 4
},
{
"sourceId": "source-id-2",
"sourceUrl": "url to the latest version of the dataset issued by source-id-2",
"noticeCount": 6
}
]
},
{
"noticeCode": "second_notice_code",
"affectedSourcesCount": 1,
"affectedSources": [
{
"sourceId": "source-id-5",
"sourceUrl": "url to the latest version of the dataset issued by source-id-5",
"noticeCount": 5
}
]
},
{
"noticeCode": "third_notice_code",
"affectedSourcesCount": 1,
"affectedSources": [
{
"sourceId": "source-id-2",
"sourceUrl": "url to the latest version of the dataset issued by source-id-2",
"noticeCount": 40
}
]
},
{
"noticeCode": "fourth_notice_code",
"affectedSourcesCount": 3,
"affectedSources": [
{
"sourceId": "source-id-1",
"sourceUrl": "url to the latest version of the dataset issued by source-id-1",
"noticeCount": 40
},
{
"sourceId": "source-id-3",
"sourceUrl": "url to the latest version of the dataset issued by source-id-3",
"noticeCount": 15
},
{
"sourceId": "source-id-5",
"sourceUrl": "url to the latest version of the dataset issued by source-id-5",
"noticeCount": 2
}
]
}
],
"droppedErrors": [
# Same schema as `newErrors`
],
"corruptedSources": {
"corruptedSources": [
"source-id-1",
"source-id-2"
],
"sourceIdCount": 1245,
"aboveThreshold": false,
"corruptedSourcesCount": 2,
"maxPercentageCorruptedSources": 2
}
}
```
Each source id value comes from the MobilityDatabase: it is a unique property used to identify each source of data. The source id can be used to find all dataset versions of a source on the MobilityDatabase for the sake of debugging or exploration.
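A report following the schema above can be summarized with a short script such as the one below. It only relies on the fields shown in the sample; treat it as an illustration rather than part of the pipeline.

```python
# Sketch: summarize an acceptance_report.json that follows the schema shown above.
import json

with open("acceptance_report.json", encoding="utf-8") as f:
    report = json.load(f)

for entry in report["newErrors"]:
    print(f"{entry['noticeCode']}: {entry['affectedSourcesCount']} affected source(s)")
    for source in entry["affectedSources"]:
        print(f"  {source['sourceId']}: {source['noticeCount']} notice(s)")

corrupted = report["corruptedSources"]
print(
    f"{corrupted['corruptedSourcesCount']} corrupted source(s) out of "
    f"{corrupted['sourceIdCount']} (above threshold: {corrupted['aboveThreshold']})"
)
```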
We follow this process:
- Provide code changes by creating a new PR on the GitHub repository;
- The acceptance test pipeline runs each time code is pushed to the newly created branch, unless the keyword `[acceptance test skip]` is included in the commit message;
- Download all validation reports from the artifacts listed for the specific GitHub run;
- One can verify that the count of validation reports (1 per source) matches the number of sources announced in the GitHub PR comment (see the sketch after this list);
- Select a sample of validation reports and compare them manually. MobilityData uses an internal tool to do so and plans to open source it in the future.
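A quick way to check the report count is sketched below. The directory layout (one sub-folder per source under an extracted artifact folder) is an assumption; adjust it to the actual artifact structure.

```python
# Sketch: check that the number of downloaded validation report folders matches
# the number of sources announced in the GitHub PR comment.
from pathlib import Path

EXPECTED_SOURCE_COUNT = 1245   # value taken from the GitHub PR comment
artifact_dir = Path("output")  # folder where the downloaded artifact was extracted

report_count = sum(1 for child in artifact_dir.iterdir() if child.is_dir())
print(f"{report_count} report folder(s) found, {EXPECTED_SOURCE_COUNT} expected")
assert report_count == EXPECTED_SOURCE_COUNT, "some validation reports are missing"
```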