Question about the function of confidence_based_inlier_selection too #12
Hi @JokingYi, thank you for your comments! The reason the expectation takes this form is the following: the expectation of the positive sample count is taken under the hypothesis that all samples are outliers, and outliers here, as in some previous works, are assumed to have residuals uniformly distributed in the sampling circle of radius R_2. Therefore, the probability of a single outlier having an error less than r_k by chance is r_k^2 / R_2^2, and assuming all residuals to be independent, the total positive count P_k behaves like a binomial over n samples, each counting with probability r_k^2 / R_2^2. Hence the formula n r_k^2 / R_2^2, the expectation of a binomial. Regarding the code, I realize it's quite hard to read! For this reason I just pushed some commits where I added documentation, improved the readability of some parts, and refactored some computations to make them align more easily with the paper without changing the end result (this refactor includes the code implementing this formula!). The code snippet you point at sits just above the actual computation of the confidence, and is used to sort residuals and downweight repeated/zero residuals so that their statistics align better with the uniformity and independence assumptions on outliers. Just after that comes the computation of the confidence (you will need to pull the changes to see it clearly, as in the previous version of the code I was computing a measure equivalent to the formula in the paper, which might be a bit harder to link back to it). Let me know if this makes it clearer!
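The binomial-expectation argument above can be checked numerically. A minimal sketch, assuming residuals uniform in a disc of radius `R2` (the function name and NumPy usage are illustrative, not the actual AdaLAM code):

```python
import numpy as np

def outlier_confidence(res_sqr, R2):
    """For each sorted squared residual r_k^2, compare the observed number of
    positives (its rank k) with the count expected under the all-outlier
    hypothesis: E[P_k] = n * r_k^2 / R2^2, the mean of Binomial(n, r_k^2/R2^2)."""
    res_sqr = np.sort(np.asarray(res_sqr, dtype=float))
    n = len(res_sqr)
    k = np.arange(1, n + 1)                  # observed positives at threshold r_k
    expected = n * res_sqr / R2 ** 2         # binomial expectation under outliers
    return k / np.maximum(expected, 1e-12)   # confidence ratio P_k / E[P_k]
```

If the residuals really are uniform-like (squared residuals spread evenly over [0, R2^2]), this ratio stays near 1; inliers concentrated at small residuals push it well above 1.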
Thanks for your detailed explanation. :) But I find the following line hard to understand; I'm not very familiar with this area, so maybe some background knowledge is needed that I'm missing:
And about the committed code you mentioned, I didn't find any update in the master branch, or did I miss something?
Hi @JokingYi, sure! Let me elaborate on that line: consider that the residual r of an outlier is a vector containing the errors in both the x and y directions. Now assume that, given an outlier correspondence, this error vector can be any vector with length less than R_2 with uniform probability (which is our assumption on the outlier error distribution). Then the probability of having an error less than r_k is the probability that the residual vector r lies inside the circle with radius r_k. Since you have uniform probability of sampling a point anywhere in the R_2-radius circle, the probability of sampling one inside the r_k-radius circle equals the ratio of the areas of the two circles. So you get r_k^2 \pi / R_2^2 \pi = r_k^2 / R_2^2. On the new documentation: the relevant commits were a bit older than today, so GitHub shows the old commit time even though they were pushed only today from my local repository. I double checked that the online version is the updated one!
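The area-ratio argument is easy to verify with a quick Monte Carlo check (a standalone sketch, not part of the repository; the radii here are arbitrary toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
R2, r_k = 1.0, 0.5

# Sample points uniformly in the R2-radius disc via rejection sampling.
pts = rng.uniform(-R2, R2, size=(200_000, 2))
pts = pts[(pts ** 2).sum(axis=1) <= R2 ** 2]

# Empirical probability of also landing inside the r_k-radius disc,
# versus the predicted area ratio r_k^2 / R2^2.
p_hat = ((pts ** 2).sum(axis=1) <= r_k ** 2).mean()
print(p_hat, (r_k / R2) ** 2)
```

With r_k = R2/2 the predicted probability is 0.25, and the empirical estimate lands very close to it.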
Thank you very much, I finally understand it! OK, I'll check it out. Thanks again. And if you don't mind, I'll leave this issue open, since I may have some doubts later; then I can comment here directly?
Happy to help! Sure, you can leave it open as long as you need.
Hi @cavalli1234, I'm here again, still puzzled about the function, as indicated by the image. First, why are 'sorted_res_sqr' and 'progressive_inl_rates' comparable, and why can a point be determined to be an inlier in this way? Second, why is 'too_perfect_fits' used with two seemingly opposite effects: first 'inlier_weights[too_perfect_fits]' is assigned zero, and later the '!too_perfect_fits' entries are assigned zero? Hope you can elaborate on these. Thanks.
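One plausible reading of that comparison (a speculative sketch only; the function name, threshold value, and handling of the zero-residual case are my own illustrations, not the actual AdaLAM implementation): a residual at sorted position k is accepted when its observed positive count k sufficiently exceeds the count expected if all samples were outliers.

```python
import numpy as np

def inlier_selection_sketch(res_sqr, R2, min_confidence=10.0):
    """Illustrative rank-vs-expectation test: accept the k-th smallest residual
    r_k when k > min_confidence * n * r_k^2 / R2^2, i.e. when the observed
    positives beat the all-outlier expectation by a confidence factor."""
    res_sqr = np.asarray(res_sqr, dtype=float)
    order = np.argsort(res_sqr)
    sorted_res = res_sqr[order]
    n = len(sorted_res)
    k = np.arange(1, n + 1)
    expected = n * sorted_res / R2 ** 2
    # Near-zero residuals ("too perfect fits") break the uniformity assumption
    # behind the statistics, so they are treated separately: excluded from the
    # confidence test but kept as trivially good fits.
    too_perfect = sorted_res < 1e-8
    confident = k > min_confidence * np.maximum(expected, 1e-12)
    inliers = np.zeros(n, dtype=bool)
    inliers[order] = confident | too_perfect
    return inliers
```

This would explain the two apparently opposite uses of a `too_perfect_fits` mask: zeroed out when computing the statistics, then handled on its own when producing the final selection.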
Another question, about the function select_seeds: as you said, the lower the score the better, but why is the higher one selected? I wonder if there is something wrong with my logic.
OK, after visualizing the seeds I see what was wrong with my logic.
Hi @cavalli1234 @Dawars,
Thanks for sharing the code. I'm amazed by its results, but I also have some questions, especially about the function confidence_based_inlier_selection.
In formula (4) of the paper,
could you explain more about the second equation? What does R stand for? Why can E_P be substituted by the next formula?
And I found it hard to map the formula to the corresponding code, as indicated by the pic below.
I suppose that's the place where you apply the formula? If I'm wrong, please tell me.
Thank you in advance.