
Question about the function of confidence_based_inlier_selection too #12

Open
JokingYi opened this issue Jul 19, 2021 · 9 comments

@JokingYi

hi @cavalli1234 @Dawars

Thanks for sharing the code. I'm amazed by the results, but I also have some questions, especially about the function confidence_based_inlier_selection.

in the formula (4) of the paper,
[image: formula (4) from the paper]
Could you explain more about the second equation? What does the R stand for? And why can E[P] be substituted by the next formula?

And I found it hard to map the formula to the corresponding code as indicated by the pic below.
[image: code snippet from confidence_based_inlier_selection]
I suppose this is the place where you apply the formula? If I'm wrong, please tell me.

Thank you in advance

@cavalli1234
Owner

Hi @JokingYi,

thank you for your comments!
Regarding the confidence formula, the R represents the set of all residuals, thus |R| is the number of residuals available overall. The other R_2 is instead the radius of the sampling region on the second image. P is the number of positive samples (i.e. the ones with error less than r_k).

The reason for the expectation to be this formula is the following: this expectation on the positive samples is taken under the hypothesis that all samples are outliers, and outliers here, like in some previous works, are assumed to have uniformly distributed residuals in the sampling circle, which has radius R_2. Therefore, the probability of a single outlier having error less than r_k by chance is r_k^2/R_2^2, and assuming all residuals to be independent, the total positive count P behaves like a binomial with |R| samples, each having probability r_k^2/R_2^2 to count. Hence the formula E[P] = |R| * r_k^2 / R_2^2 from the expectation of a binomial.
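
Spelled out, that reasoning is:

```math
P \sim \mathrm{Binom}\!\left(|R|,\ \frac{r_k^2}{R_2^2}\right)
\quad\Longrightarrow\quad
\mathbb{E}[P] = |R| \cdot \frac{r_k^2}{R_2^2}
```

and the confidence in formula (4) compares the observed P against this expectation.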

Regarding the code, I realize it's quite hard to read! For this reason I just pushed some commits where I added some documentation, improved the readability of some parts of the code, and refactored some computations to make them align more easily with the paper without changing the end result (this refactor includes the code referring to this formula!). The code snippet you point at is just above the actual computation of the confidence, and is used to sort residuals and downweight repeated/zero residuals so that their statistics align better with the uniformity and independence assumptions on outliers. Just after that there is the computation of the confidence (you need to pull the changes to see it more clearly, as in the previous version of the code I was computing a measure equivalent to the formula in the paper, which might be a bit harder to link to it).
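
To make the link concrete, the refactored computation is roughly equivalent to this simplified sketch (names are illustrative, and it ignores batching and the downweighting mentioned above, so it is not the exact repository code):

```python
import torch

def progressive_confidence(residuals, R2):
    # residuals: (N,) matching errors of the N correspondences tested
    #            against one affine hypothesis
    # R2: radius of the sampling region on the second image
    r, _ = torch.sort(residuals)               # r_k in increasing order
    N = r.shape[0]
    P = torch.arange(1, N + 1, dtype=r.dtype)  # positives with error <= r_k
    # E[P] under the all-outlier hypothesis: |R| * r_k^2 / R_2^2
    expected_P = (N * r.square() / R2 ** 2).clamp_min(1e-12)  # avoid /0 on perfect fits
    return P / expected_P                      # confidence c_k = P / E[P]
```

In the real code the selection then cuts this progressive confidence at a threshold, with the extra care described above for repeated/zero residuals.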

Let me know if this makes it clearer!

@JokingYi
Author

JokingYi commented Jul 19, 2021

Thanks for your detailed explanation. :)

But I find it hard to understand the following line; I'm not very experienced, and maybe some background knowledge is needed that I don't have:

> Therefore, the probability of a single outlier to have error less than r_k by chance is r_k^2/R_2^2,

And about the committed code you mentioned, I didn't find any update in the master branch; am I missing something?

@cavalli1234
Owner

Hi @JokingYi,

sure! let me elaborate on that line:

Consider that the residual r of an outlier is a vector containing the errors in both the x and y directions. Now assume that, given an outlier correspondence, this error vector can be any vector with length less than R_2 with uniform probability (which is our assumption on the outlier error distribution). The probability of having error less than r_k is then the probability that the residual vector r falls inside the circle with radius r_k. Since you have uniform probability of sampling a point anywhere in the R_2-radius circle, the probability of sampling one inside the r_k-radius circle is equal to the ratio of the areas of the two circles. So you get (π r_k^2) / (π R_2^2) = r_k^2 / R_2^2.
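
If it helps, you can check this numerically with a quick standalone sketch (not repository code):

```python
import torch

torch.manual_seed(0)
R2, r_k = 1.0, 0.3
n = 1_000_000

# For a point uniform in a disk of radius R2, the radius is distributed
# as R2 * sqrt(u) with u uniform in [0, 1] (the angle is irrelevant here),
# so that equal areas are equally likely.
radius = R2 * torch.rand(n).sqrt()

frac_inside = (radius <= r_k).float().mean().item()
print(frac_inside)        # ~0.09, matching the area ratio
print((r_k / R2) ** 2)    # 0.09 = r_k^2 / R_2^2
```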

About the new documentation: the relevant commits were a bit older than today, so GitHub shows the old commit time even though they were pushed only today from my local repository. I double-checked that the online version is the updated one!

@JokingYi
Author

Thank you very much, I finally understand it!

Ok I'll check it out. Thanks again.

And if you don't mind, may I leave this issue open? I may have more doubts later, so I could comment here directly.

@cavalli1234
Owner

Happy to help!

Sure, you can leave it open as long as you need

@JokingYi
Author

Hi @cavalli1234, I'm here again...

[image: code snippet from confidence_based_inlier_selection]

I'm still puzzled about the function, as indicated by the image. First, why are 'sorted_res_sqr' and 'progressive_inl_rates' comparable, and why can a point be determined to be an inlier this way?

Second, why is 'too_perfect_fits' used with two totally different effects? In the first place 'inlier_weights[too_perfect_fits]' is assigned zero, and later it is '!too_perfect_fits' that is assigned zero.

Hope you can elaborate on these. Thanks.

@JokingYi
Author

Another question, about the function select_seeds:

[image: code snippet from select_seeds]

As you said, the lower the score the better, but why is the higher one selected?
And it turns out I get more matches if I change it to something like the following (though with more time cost):

```python
im1scorescomp = scores1.unsqueeze(1) < scores1.unsqueeze(0)
im1bs = (torch.any(im1neighmap & im1scorescomp & mnn.unsqueeze(0), dim=1)) & mnn & (scores1 < 0.8 ** 2)
```

So I wonder, is there something wrong with my logic?

@JokingYi
Author

[image: code snippet involving an eigenvalue computation]

And this one....

What do the eigenvals of the matrix mean, and what is happening here?
Sorry, I'm a beginner, so I know some questions might seem stupid to you. Thanks for answering.

@JokingYi
Author


Ok, after I visualized the seeds, I see what was wrong with my logic in my question above.
