Add long-term reference results to ctests. #153

Open
BenjaminTJohnson opened this issue Jul 26, 2024 · 0 comments
BenjaminTJohnson commented Jul 26, 2024

Presently, CRTMv3 (and previous versions) ctests look for an existing reference file for each ctest. If the reference file doesn't exist, the test creates a new reference file and compares any subsequent ctest runs against it. This is fine for local or short-term development evaluation; however, for longer-term consistency and evaluation of changes, some regressions might end up being missed.
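
For concreteness, the current create-or-compare behavior can be sketched roughly as below. This is a minimal illustration in Python, not CRTM code; the helper name, the flat float64 read, and the exact-match default tolerance are assumptions.

```python
import shutil
from pathlib import Path

import numpy as np


def check_against_reference(output_file: Path, reference_file: Path,
                            atol: float = 0.0) -> bool:
    """Create the reference on first run, otherwise compare against it.

    Illustrative only: the file layout and flat float64 read are assumptions,
    not the actual CRTM test I/O.
    """
    if not reference_file.exists():
        # First run: adopt the current output as the local reference file.
        shutil.copy(output_file, reference_file)
        return True

    result = np.fromfile(output_file, dtype=np.float64)
    reference = np.fromfile(reference_file, dtype=np.float64)
    # rtol=0.0 and atol=0.0 require exact agreement, mirroring strict matching.
    return result.size == reference.size and np.allclose(
        result, reference, rtol=0.0, atol=atol)
```

The problem described above is that this local reference is whatever output happened to be produced first, rather than a curated, long-term baseline.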

This issue proposes a new standard for CRTM development, namely that we (the CRTM team) identify an appropriate "fundamental" reference for each ctest. Nominally this would be tied to a well-tested release.

Let's say we develop a new set of reference files for each semi-major (3.x) release, because minor releases should not affect the structure of the results output.

In this effort, we will create a reference set for the following releases (see the sketch after this list for how a test might select its set):
v2.3.0
v2.4.0
v3.0.0
v3.1.0
v3.2.0 (not created / released yet).
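
Selecting the right set could be as simple as a lookup keyed on the release tag. A minimal sketch, with purely illustrative directory names (the actual storage location and naming are to be decided):

```python
from pathlib import Path

# Hypothetical layout: one long-term reference directory per tagged release.
REFERENCE_SETS = {
    "v2.3.0": Path("test/reference/v2.3.0"),
    "v2.4.0": Path("test/reference/v2.4.0"),
    "v3.0.0": Path("test/reference/v3.0.0"),
    "v3.1.0": Path("test/reference/v3.1.0"),
    "v3.2.0": Path("test/reference/v3.2.0"),  # not created / released yet
}


def reference_dir_for(release_tag: str) -> Path:
    """Return the long-term reference directory pinned to a given release."""
    if release_tag not in REFERENCE_SETS:
        raise ValueError(f"No long-term reference set for {release_tag}")
    return REFERENCE_SETS[release_tag]
```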

I will retroactively update the ctests in the release branches for each of these releases. Since each new release (even minor releases) adds new tests, perhaps we should instead identify a common core of "reference" tests that remains relatively unchanged from version to version? The one breaking change here would be the switch to netCDF by default for results storage (in v3.2).

I also noticed that the "reference" values will fail numerical matching if the code is compiled in RELEASE mode and compared against a DEBUG build, and vice versa. There are also minor numerical differences (1e-11 or smaller) in ctests between gfortran and ifort with a RELEASE build. This suggests that we either (a) maintain different reference sets per build type and compiler, or (b) loosen the matching tolerance. In release mode, gfortran signals underflow on the tests that exhibit the differences, so this might be a fixable bug; we just need to identify where the underflow is occurring (which should be handled in a new issue).
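
If option (b) is chosen, the check could use a relative/absolute tolerance loose enough to absorb the ~1e-11 gfortran/ifort differences but tight enough to flag real regressions. A minimal sketch, assuming netCDF result files (as planned for v3.2); the variable name and tolerance values are illustrative, not the actual CRTM output schema:

```python
import numpy as np
from netCDF4 import Dataset

# Illustrative tolerances: wide enough to absorb ~1e-11 compiler differences,
# narrow enough to catch meaningful changes.
RTOL = 1.0e-9
ATOL = 1.0e-10


def compare_to_reference(test_file: str, reference_file: str,
                         var_name: str = "brightness_temperature"):
    """Compare one variable in a test output against the long-term reference.

    'brightness_temperature' is a hypothetical variable name; the real CRTM
    netCDF layout may differ.
    """
    with Dataset(test_file) as test_ds, Dataset(reference_file) as ref_ds:
        test = np.asarray(test_ds.variables[var_name][:])
        ref = np.asarray(ref_ds.variables[var_name][:])

    if test.shape != ref.shape:
        raise ValueError(f"Shape mismatch: {test.shape} vs {ref.shape}")

    max_diff = float(np.max(np.abs(test - ref)))
    return np.allclose(test, ref, rtol=RTOL, atol=ATOL), max_diff
```

A ctest wrapper could call something like this and fail the test when the comparison returns False, printing the maximum difference for diagnostics.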

BenjaminTJohnson self-assigned this Jul 26, 2024