Use common beam instead of beam averaging #656
base: master
Conversation
@keflavich -- Computing the mask to remove fully-masked channels is a bit of a problem: https://github.com/radio-astro-tools/spectral-cube/pull/656/files#diff-8f799a9c2fa6fd38a2278d484cd416efR595. It triggers a full read-in of the data (into memory for non-dask cubes) and is super slow. I'm wondering if the default should be to avoid loading the whole cube mask and just to use the `goodbeams_mask`.
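For context, a minimal sketch of the two approaches being weighed here (the file name is illustrative; `mask.include()` and `goodbeams_mask` are existing spectral-cube attributes):

    from spectral_cube import SpectralCube

    cube = SpectralCube.read('big_vrsc_cube.fits')  # illustrative file name

    # Slow path: materializes the full boolean mask (in memory for non-dask
    # cubes) just to find which channels are entirely masked.
    channel_has_data = cube.mask.include().any(axis=(1, 2))

    # Cheap path proposed here: trust the per-channel good-beams mask
    # (varying-resolution cubes only) and skip reading the cube mask.
    channel_has_data = cube.goodbeams_mask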
@astrofrog - Have you seen this?
I’ll try and take a look shortly!
@astrofrog Fixed it (somehow)! I think a file wasn't being closed when the test failed.
Tests on Travis are giving an error. Here's the failure I get locally:
The non-varying resolution cube isn't getting the mask set at an earlier stage. But the test with a VRSC is fine.
@astrofrog @keflavich -- ready for review
some minor changes.
I just want to make sure: the default behavior is not to convolve to a common beam, is it? I want to be sure we're not triggering massive convolution operations without warning the user, since that tends to crash computers.
No, there's no auto-trigger of a whole-cube convolution. An error is raised if the beams deviate too much from the common beam, similar to what we already have with the median/avg beam.
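As a rough illustration of that guard (the exception type and moment operation are illustrative; `convolve_to` exists in spectral-cube, and `common_beam` is the property added by this PR):

    # Spectral operations error out when per-channel beams deviate too much
    # from the common beam, instead of silently mixing resolutions.
    try:
        mom0 = cube.moment0()
    except ValueError:
        # Follow the error's advice: convolve to a single beam, then retry.
        mom0 = cube.convolve_to(cube.common_beam).moment0()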
Looks good overall! A question below about the API; in particular, I think it would be cleaner to return a common beam when it is computed than to store it in a stateful way on the cube.
This will also definitely need an entry in the changelog! 😆
in the cube. To do this, we can use
:meth:`~spectral_cube.VaryingResolutionSpectralCube.set_common_beam`::

    cube.set_common_beam()
Is there a reason why we need to make a stateful change on the cube and set `common_beam`? We could instead do something like:

    common_beam = cube.find_common_beam()
    cube.convolve_to(common_beam)

Having it stateful means it could get out of sync if e.g. some channels are masked and so on. In general we might want to avoid stateful changes to cubes if we can?

With `find_common_beam` above, one could even find a common beam with different settings and compare them without having to make a stateful change each time.
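A sketch of this stateless pattern (`find_common_beam` and its keyword are part of the suggestion here, not an existing API; `convolve_to` already exists in spectral-cube):

    # Compute candidate common beams under different settings and compare
    # them without mutating the cube.
    beam_default = cube.find_common_beam()               # proposed method
    beam_loose = cube.find_common_beam(tolerance=1e-5)   # hypothetical kwarg
    print(beam_default, beam_loose)

    # Convolve once a beam is chosen.
    common_cube = cube.convolve_to(beam_default)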
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The only issue is that the common beam operation can be slow for cubes with many (>1e3) channels. And baked into the deviation checks is the need to measure variation from the common beam, where we previously used the average beam. That's our only check (currently) for whether to ignore small beam changes in operations along the spectral axis.
Allowing a stateful change could also work if we force the beam masking to not be stateful (which I think it can be now).
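For reference, a toy version of the deviation check being described (attribute names follow this PR; the actual comparison in the codebase may differ):

    import numpy as np

    # Fractional deviation of each channel's beam area from the common beam.
    areas = cube.beams.sr.value              # per-channel beam areas (sr)
    common_area = cube.common_beam.sr.value
    deviation = np.abs(areas - common_area) / common_area

    # Channels beyond the threshold would trigger the error discussed above.
    bad_channels = deviation > cube.beam_threshold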
…ine a common_beam for a VRSC
Co-authored-by: Adam Ginsburg <[email protected]>
Co-authored-by: Thomas Robitaille <[email protected]>
…mask for consistency
…be deconvolved; add that option in our tests here for failure cases
Force-pushed from a81ad9f to 5206ff4
Discussion today: @e-koch will try splitting up this PR to avoid state-change-related issues. We may need to remove the median(beams) operation.
Remove beam averaging and force operations to use the common beam. This depends on improvements to the common beam algorithm in radio-beam (radio-astro-tools/radio-beam#81). Addresses #615.
- Remove current usage of beam averaging
- Generalize identification of bad beams to difference in areas instead of deviation from median. (Bad beams are outliers; the current implementation should be fine in most cases.)
- Toggle to enforce common beam operations vs. allow small deviations.
- Cache the common beam somewhere if `goodbeam_mask` doesn't change. Added a `VRSC.common_beam` property that can be recomputed with `VRSC.set_common_beam` (e.g., if you change `goodbeam_mask`, alter the common beam kwarg inputs, etc.)
- Add beam area checking for DaskVaryingResSpectralCube
- Use `VRSC.goodbeams_mask` by default for which beams to mask in `VRSC.set_common_beam`. The prior default is `mask='compute'`, which triggers a full read-in of the cube mask.
- Cache a version of `cube.mask.any(axis=(1,2))` (see the point directly above): `SpectralCube.unmasked_channels`
- Expand docs on the new changes + how beams are handled in general
- Added `VRSC.common_beam`, which is set by enabling `compute_commonbeam=True` on read (`SpectralCube.read`). This will compute the common beam for the cube at its state on read-in.

Key changes

- Swaps `VRSC.average_beams` for `VRSC._compute_common_beam`, which returns a common beam (`radio_beam.Beam`). On read-in, the property `VRSC.common_beam` can be set by enabling `compute_commonbeam=True` in `SpectralCube.read`.
- `VRSC.average_beams` raises a DeprecationWarning and passes the original args to `VRSC.set_common_beam`.
- Defaults to `VRSC.goodbeams_mask` for `VRSC.set_common_beam` (`mask='goodbeams'`). The previous default was `mask='compute'`, which combines `goodbeams_mask` and the full cube mask, triggering a full read-in of the data.
- Adds `VRSC.strict_beam_match`, which enforces exact beam matching (`VRSC.beam_threshold = 0.0`). This will raise errors for any spectral operation, with a message to convolve to the same beam first.
- Adds `SpectralCube.unmasked_channels`, which caches `cube.mask.include().any(axis=(1,2))`. The latter requires the whole mask to be read in/computed and is quite slow for big cubes. `cube.unmasked_channels` is for checking which channels are fully masked. (A usage sketch follows below.)
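Putting the pieces above together, a hedged end-to-end sketch of the intended workflow (all names are from this PR branch, not a released spectral-cube; the file name is illustrative):

    from spectral_cube import SpectralCube

    # Compute the common beam at read time via this PR's compute_commonbeam
    # flag; the beam reflects the cube's state at read-in.
    cube = SpectralCube.read('vrsc_cube.fits', compute_commonbeam=True)
    print(cube.common_beam)          # a radio_beam.Beam

    # Recompute later if the good-beams mask or the beam kwargs change.
    cube.set_common_beam(mask='goodbeams')

    # Cached per-channel check for fully masked channels; avoids reading
    # the whole cube mask.
    print(cube.unmasked_channels)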