
Failure in test_miri.py::test_against_test_data #256

Open
Witchblade101 opened this issue Jun 7, 2022 · 6 comments
Labels
help wanted (Extra attention is needed)

Comments

@Witchblade101
Collaborator

________________________________________________________ test_against_test_data ________________________________________________________

siaf = <pysiaf.Siaf object Instrument=miri >, verbose = False

def test_against_test_data(siaf=None, verbose=False):
    """MIRI test data comparison.

    Mean and RMS difference between the instrument team computations
    and the pysiaf computations are computed and compared against
    acceptable thresholds.

    """
    if siaf is None:
        # Try to use pre-delivery-data since this should best match the source-data. If no data there, use PRD data
        try:
            pre_delivery_dir = os.path.join(JWST_DELIVERY_DATA_ROOT, instrument)
            siaf = Siaf(instrument, basepath=pre_delivery_dir)
        except OSError:
            siaf = Siaf(instrument)

    else:
        # safeguard against side-effects when running several tests on
        #  a provided siaf, e.g. setting tilt to non-zero value
        siaf = copy.deepcopy(siaf)


    x_test, y_test, v2_test, v3_test = mirim_siaf_testdata.siaf_testdata()

    aperture_name = 'MIRIM_FULL'
    aperture = siaf[aperture_name]

    # v2_pysiaf, v3_pysiaf = aperture.det_to_tel(x_test, y_test)
    # x_pysiaf, y_pysiaf = aperture.tel_to_det(v2_test, v3_test)
    v2_pysiaf, v3_pysiaf = aperture.sci_to_tel(x_test, y_test)
    x_pysiaf, y_pysiaf = aperture.tel_to_sci(v2_test, v3_test)

    t = Table([x_test-x_pysiaf, y_test-y_pysiaf, v2_test-v2_pysiaf, v3_test-v3_pysiaf],
              names=('delta_x', 'delta_y', 'delta_v2', 'delta_v3'))

    if verbose:
        print('')
        t.pprint(max_width=-1)

    absolute_tolerance = 0.04
>   assert_allclose(x_test, x_pysiaf, atol=absolute_tolerance)

E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0.04
E
E Mismatched elements: 15 / 15 (100%)
E Max absolute difference: 1.840769
E Max relative difference: 0.01433061
E x: array([326.13, 693.5 , 516.5 , 953.18, 681.75, 409.81, 137.65, 928.52,
E 658.11, 387.37, 116.34, 904.64, 634.88, 365. , 94.77])
E y: array([327.841595, 695.124727, 518.257992, 954.734981, 683.374363,
E 411.606429, 139.490769, 930.03035 , 659.758194, 389.192884,
E 118.031463, 906.054437, 636.468316, 366.667649, 95.986934])

test_miri.py:75: AssertionError
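
For reference, the failing comparison can be reproduced outside pytest with a minimal sketch like the one below (the import path for mirim_siaf_testdata is an assumption; the module ships with pysiaf's test suite and the path may need adjusting for your checkout):

```python
import numpy as np
from pysiaf import Siaf

# Assumed import path; mirim_siaf_testdata.py lives in pysiaf's tests.
from pysiaf.tests import mirim_siaf_testdata

x_test, y_test, v2_test, v3_test = mirim_siaf_testdata.siaf_testdata()
aperture = Siaf('MIRI')['MIRIM_FULL']

# Same round trip as the test: science pixels <-> telescope (V2, V3).
v2, v3 = aperture.sci_to_tel(x_test, y_test)
x, y = aperture.tel_to_sci(v2_test, v3_test)

# The test asserts all of these stay below atol=0.04.
print('max |delta_x| :', np.max(np.abs(x_test - x)))
print('max |delta_y| :', np.max(np.abs(y_test - y)))
print('max |delta_v2|:', np.max(np.abs(v2_test - v2)))
print('max |delta_v3|:', np.max(np.abs(v3_test - v3)))
```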

@Witchblade101
Collaborator Author

Are these errors expected, meaning we just need to change the tolerances, or is something off with the new values?
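
(For scale: the logged maximum absolute difference of 1.840769 is roughly 46 times the 0.04 tolerance, since 1.840769 / 0.04 ≈ 46, so loosening the tolerance alone would be a very large change.)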

Witchblade101 added the help wanted label on Jun 7, 2022
@Witchblade101
Collaborator Author

MIRI failures begin in PRDOPSSOC-51, which includes changes from JWSTSIAF-210.

@drlaw1558
Contributor

I don't understand this; I ran the test_miri.py script when I ran generate_miri.py and didn't encounter these errors when comparing against MIRI test data. The values in the error log above look like outdated values from a previous version of pysiaf. How was this test being run? Did it use the new values in mirim_siaf_testdata.py that were updated with the PR?
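
One quick way to check is to confirm which copy of the test data module the run actually imported and what values it returns, e.g. (import path assumed, as above):

```python
from pysiaf.tests import mirim_siaf_testdata

# Which file was picked up, and a few of its reference values.
print(mirim_siaf_testdata.__file__)
x, y, v2, v3 = mirim_siaf_testdata.siaf_testdata()
print(x[:3], v2[:3])
```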

@Witchblade101
Collaborator Author

That's what we're trying to figure out. We started getting all these errors while preparing to do a pysiaf release, following the procedure at https://innerspace.stsci.edu/pages/viewpage.action?pageId=107324779 starting at "After the PRD is Released."
The tests pass fine initially, but as soon as we install the actual PRD data, the tests start failing.
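
(Concretely, the failing step is the bundled test itself; it can be rerun in isolation with something like the following, where the path is illustrative:

```python
# Re-run just the failing test after installing the PRD data.
import pytest
pytest.main(['pysiaf/tests/test_miri.py::test_against_test_data', '-v'])
```
)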

@Witchblade101
Collaborator Author

I may have fixed it, but I'm too pessimistic to declare victory just yet...

@drlaw1558
Contributor

At a guess, I'd say that the MIRI source_data directory hasn't been updated (it got merged to siaf_updates, but it doesn't look like it has been merged to master yet?). The source_data directory contains test cases specific to the updated XML, so if one is updated without the other, it will throw errors like this.
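
If that's the cause, a sketch like the one below should show the two data sets disagreeing on a basic transform (JWST_DELIVERY_DATA_ROOT and the directory layout here are assumptions taken from the test code above):

```python
import os
from pysiaf import Siaf
# Assumed location of the constant used by the test.
from pysiaf.constants import JWST_DELIVERY_DATA_ROOT

prd = Siaf('MIRI')['MIRIM_FULL']
pre = Siaf('MIRI',
           basepath=os.path.join(JWST_DELIVERY_DATA_ROOT, 'MIRI'))['MIRIM_FULL']

# If the PRD XML and the pre-delivery source_data are out of sync,
# the same input will map to noticeably different (V2, V3).
print(prd.sci_to_tel(512, 512))
print(pre.sci_to_tel(512, 512))
```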
