
unify detection coadd methods #207

Open
pmelchior opened this issue Aug 28, 2020 · 9 comments
@pmelchior (Owner)

We build our own coadds for source initialization. The two methods are largely identical, so we should clarify how they are different or unify them.

@pmelchior (Owner)

In addition, we should canonize the use of deconvolution coadds for initialization of all sources bright enough to pull it off. For everything else, the new compact initialization from #204 should be best, unless we know something about the sources, e.g. that it is a star.

@fred3m (Collaborator) commented Oct 12, 2020

I'd like to suggest a slight twist to this issue, although I would also be fine with making it a separate issue if you two don't feel they are related enough. We talked about this a while back, but I'd like to see the whole initialization module become more modular. There is a lot of duplicated code, and there are code blocks that would be useful to users who want to do their own initialization without having to copy and paste much of ours.

For example, we now have two different methods for initializing the SED, and users might have their own custom methods (especially if they have color priors). So one change along these lines would be to call it init_extended_source_morphology and take the SED as an input parameter, as well as a parameter that specifies whether the morphology should be trimmed.
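To make the suggestion concrete, here is a minimal sketch of what such a split could look like. The name init_extended_source_morphology mirrors the comment above, but the signature, the least-squares SED projection, and the trim threshold are all illustrative assumptions, not scarlet's actual API.

```python
import numpy as np

def init_extended_source_morphology(images, sed, center, trim=True):
    """Build an initial morphology from multi-band images and a caller-supplied SED.

    images: (bands, height, width) data cube
    sed:    (bands,) spectrum, e.g. from a user's custom method or a color prior
    center: (y, x) integer pixel position of the source
    trim:   if True, zero out faint pixels to compact the model
    """
    images = np.asarray(images, dtype=float)
    sed = np.asarray(sed, dtype=float)
    # SED-weighted coadd: least-squares projection of the data cube onto the spectrum
    detect = np.einsum("b,byx->yx", sed / (sed @ sed), images)
    morph = np.maximum(detect, 0.0)    # non-negativity constraint
    peak = morph[center]
    if peak > 0:
        morph = morph / peak           # peak-normalized; the SED carries the flux
    if trim:
        morph[morph < 1e-3] = 0.0      # crude compactness cut (hypothetical threshold)
    return morph
```

Because the SED arrives as an argument, a user with a color prior could compute it however they like and still reuse the morphology step.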

@pmelchior (Owner)

I think a modularization is sensible but not trivial. I'm also not sure how many people would use it.

The problem I see is that the initializations of multiple parameters are usually interdependent. They should also respect the parameter constraints. It's hard to ensure both while breaking up the init sequences into modular steps.

That said, I'm reviewing the steps we take, especially around the choice of coadds (the scope of this issue).

@pmelchior (Owner)

After review I think that modularizing the init methods is doable. I'll describe this in another issue.

As for the detection coadd, here's my proposal:

  • We compute the deconvolved image in Observation.match or after match has been called, and store the result as a member of Observation.
  • We also have to propagate a noise field, with the same variance as Observation.image, through the deconvolution to determine the effective noise level in the deconvolved image. We store that noise level in Observation as well.
  • We can then combine this, across multiple observations if needed, with the source spectrum and build the optimal SNR detection image as in initialization.build_sed_coadd. This operation is fast and can be done for every source independently.
  • That detection image then defines the initial morphology.
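The first three steps above can be sketched as follows. The function names (deconvolve, deconvolved_noise_rms, build_detection_image) are illustrative stand-ins, not scarlet's API, and the Wiener-style regularization floor eps is one possible choice for taming the noise amplification; the SNR weighting is only analogous to initialization.build_sed_coadd.

```python
import numpy as np

def deconvolve(image, psf, eps=1e-2):
    """Divide out the PSF in Fourier space, with a floor eps that keeps
    frequencies where the PSF has little power from exploding."""
    F_img = np.fft.fft2(image)
    F_psf = np.fft.fft2(np.fft.ifftshift(psf))  # psf centered, same shape as image
    return np.fft.ifft2(F_img * np.conj(F_psf) / (np.abs(F_psf) ** 2 + eps)).real

def deconvolved_noise_rms(shape, variance, psf, eps=1e-2, seed=0):
    """Propagate a noise field with the observation's variance through the
    same deconvolution to measure the effective noise level."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, np.sqrt(variance), shape)
    return deconvolve(noise, psf, eps).std()

def build_detection_image(deconvs, spectrum, noise_rms):
    """SNR-weighted coadd across bands/observations: weight each deconvolved
    image by the source spectrum over the effective noise variance."""
    w = np.asarray(spectrum, dtype=float) / np.asarray(noise_rms, dtype=float) ** 2
    return np.einsum("b,byx->yx", w / w.sum(), np.asarray(deconvs, dtype=float))
```

Since the per-source work is only the weighted sum in build_detection_image, the expensive deconvolution indeed happens once per observation, as proposed.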

Doing so allows us to

  • put the deconvolution and its result where all the necessary pieces are (namely in Observation)
  • remove coadd from the parameter list of the source constructors
  • codify best practice of source initialization in the deconvolved frame

@pmelchior (Owner)

I have implemented the deconvolution coadd as default detection image for extended source inits. This provides much better initial logL and reduces the number of iterations.

There are two dangling ends, both of which could use input from @herjy:

  • build_initialization_coadd implements starlet filtering to remove largest scales. What's the use case for that, and should we enable/fold it in with the deconvolution coadd?
  • The new detection coadd method simply reads out Observation.prerender_images, which is built during match(). To enable the same thing for LowResObservation, do you recommend that we interpolate to high-res and divide out the diff kernel?

@herjy (Collaborator) commented Nov 1, 2020

  • I don't remember doing that and I'm not super confident about that strategy. It helps cut off the low frequencies, but in a weird way that may not correspond to a deconvolution. I would rather not, unless we feel that our initial models could use some more compactness. The issue is that with the noise exploding in the deconvolution, we don't want to lose power in the signal.
  • I think it is the least incorrect thing to do, yes. I'm a little worried about interpolation artefacts potentially messing with the division, but I have done similar things before that did not blow up. It's worth trying.
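The "interpolate to high-res, then divide out the diff kernel" option could look like the sketch below. The flux-conserving nearest-neighbor upsampling is a crude stand-in for a proper interpolation scheme, the function name is hypothetical, and the regularization floor eps is an assumption meant to keep interpolation artefacts from blowing up the division.

```python
import numpy as np

def lowres_detection_image(lowres_image, factor, diff_kernel, eps=1e-2):
    """Interpolate a low-res image up to the model frame, then divide out the
    difference kernel in Fourier space.

    diff_kernel is assumed to be centered and already padded to the
    high-res shape; eps guards frequencies where the kernel has little power.
    """
    # 1) flux-conserving nearest-neighbor upsampling (stand-in for real interpolation)
    hires = np.kron(lowres_image, np.ones((factor, factor))) / factor ** 2
    # 2) regularized Fourier division by the difference kernel
    F_img = np.fft.fft2(hires)
    F_ker = np.fft.fft2(np.fft.ifftshift(diff_kernel))
    return np.fft.ifft2(F_img * np.conj(F_ker) / (np.abs(F_ker) ** 2 + eps)).real
```

Whether the artefacts stay benign in practice would have to be checked on real matched observations, as suggested above.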

@pmelchior (Owner)

  • Can I remove build_initialization_coadd then?
  • As for deconvolution after interpolation, I'm trying it but it looks wrong. I'm also puzzled that the diff kernel has more pixels than the observation itself when both are up-sampled to the model frame. I'll push my code, but it would help if you could have a look.

@herjy (Collaborator) commented Nov 2, 2020

  • If it's taken care of somewhere else, then yes.
  • I will; it is possible that there is a mismatch in the shapes due to some padding or whatnot.

pmelchior added a commit that referenced this issue Nov 5, 2020
properly use deconv detection image (#207); hacks for frame.bbox.origin (#219)
@pmelchior (Owner)

  • Re build_initialization_coadd: if you are not using it, and don't even think it's a good idea, can I remove it? I would rather do it now than port it to the new deconvolution init.
  • As for the deconvolution in LRObservation: hold off for now, I'm experimenting with an alternative, more stable approach.
