When I submit a pull request to a project with flaky code, I don't want the system to mask the issue by re-running tests. Instead, if a test fails during the job, I expect the process to stop immediately, report an error, and prevent further progress. I believe that flaky behavior should be treated as an error, rather than something that can be recovered from through retries.
To ensure code quality, the system should not attempt to "recover" from failures by re-running tests. Instead, it should report failures as they occur and block the merge of pull requests that exhibit flaky behavior. By doing so, we can ensure that only stable, reliable code is accepted into the project.
Hello! I can relate to that opinion, but I fail to see the relevance to the Prow project: Prow itself contains no retest functionality.
Some organizations using Prow choose to deploy additional tooling that retests failed jobs, but that is orthogonal to Prow itself (except that some implement it with, e.g., a Prow periodic job). The Kubernetes project uses a periodic job built on the commenter tool, which posts /retest on PRs with failed jobs. OpenShift has a custom retester component that would be suitable for Prow, but it would always be an optional component.
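To make the separation concrete: such retest tooling boils down to a scheduled job that searches for failing PRs and posts a /retest comment. Below is a minimal Python sketch of that loop against the GitHub REST API; the org name, the labels in the search query, and the GITHUB_TOKEN environment variable are placeholders, and the actual Kubernetes setup runs the commenter tool from test-infra as a Prow periodic rather than a script like this.

```python
import os
import requests

GITHUB_API = "https://api.github.com"
# Hypothetical bot token supplied via the environment.
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Hypothetical search query: open PRs in an example org whose combined
# status is failing and that are already approved -- the PRs a retester
# would want to nudge.
QUERY = "is:pr is:open status:failure label:lgtm label:approved org:example-org"


def failed_prs():
    """Return GitHub search results for open PRs with a failing status."""
    resp = requests.get(
        f"{GITHUB_API}/search/issues", headers=HEADERS, params={"q": QUERY}
    )
    resp.raise_for_status()
    return resp.json()["items"]


def post_retest(item):
    """Post a /retest comment; PR comments go through the issue comments URL."""
    resp = requests.post(
        item["comments_url"], headers=HEADERS, json={"body": "/retest"}
    )
    resp.raise_for_status()


if __name__ == "__main__":
    for pr in failed_prs():
        post_retest(pr)
```

Whether a job like this runs at all, and which PRs it targets, is entirely a per-organization policy decision layered on top of Prow.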
So it seems that the appropriate place to voice your concerns would be the team that makes these decisions in whatever org you are concerned with. In Kubernetes, that would be SIG Testing.