What's depth/normal supervision? #1
Comments
Thanks for your reply! The D(r) and N(r) are ground-truth depth and normal maps. As described in the "dataset" paragraph of Sec. 7, we test our method on our synthetic dataset and on some real datasets. For the synthetic dataset, the ground-truth depth and normal maps are readily available from the renderer, while for the real datasets, ground truths are predicted similarly to MonoSDF. The MonoSDF baseline uses the same source of depth and normal maps for supervision as ours to ensure fairness. We will make our synthetic dataset publicly available soon. See our supplementary material for dataset details.
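For intuition, here is a minimal sketch of what per-ray depth and normal supervision terms typically look like in this kind of pipeline: an L1 depth term and an L1 plus angular normal term. The exact form and weighting of Eqs. (11) and (12) in the paper may differ; `pred_depth`, `pred_normal`, `gt_depth`, `gt_normal`, and `mask` are hypothetical tensor names, not identifiers from this repository.

```python
import torch

def depth_normal_supervision(pred_depth, pred_normal, gt_depth, gt_normal, mask=None):
    """Sketch of per-ray depth/normal supervision.

    pred_depth:  (R,)   depth rendered along each ray
    pred_normal: (R, 3) normal rendered along each ray
    gt_depth:    (R,)   ground-truth depth D(r) (from the renderer or MVS)
    gt_normal:   (R, 3) ground-truth normal N(r)
    mask:        (R,)   optional validity mask for rays with usable ground truth
    """
    if mask is None:
        mask = torch.ones_like(gt_depth, dtype=torch.bool)

    # L1 depth term: |D_hat(r) - D(r)| averaged over valid rays
    depth_loss = (pred_depth - gt_depth).abs()[mask].mean()

    # Normal term: L1 difference plus an angular (1 - cos) term on unit normals
    pred_n = torch.nn.functional.normalize(pred_normal, dim=-1)
    gt_n = torch.nn.functional.normalize(gt_normal, dim=-1)
    normal_loss = ((pred_n - gt_n).abs().sum(-1)
                   + (1.0 - (pred_n * gt_n).sum(-1)))[mask].mean()

    return depth_loss, normal_loss
```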
Thanks. Additional questions.
Thanks.
Empirically, the bubble loss is robust against false positives to some extent, because of the smooth step in training (Sec. 6). In our ablation studies, we add noise to our depth maps to simulate false positives. In contrast, the bubble loss is indeed affected by false negatives. For example, if the depth map provided by the dataset misses a chair leg, our method may struggle to reconstruct the chair leg. We leave this as future work.
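As a rough illustration of what such a perturbation could look like (not the paper's exact ablation protocol), one can add Gaussian noise and a few spurious near-surface pixels to a ground-truth depth map; `noise_std` and `fp_ratio` are made-up parameters.

```python
import numpy as np

def perturb_depth(depth, noise_std=0.01, fp_ratio=0.001, rng=None):
    """Add Gaussian noise plus sparse spurious 'near' pixels to a depth map.

    depth:     (H, W) ground-truth depth
    noise_std: std of the additive Gaussian noise
    fp_ratio:  fraction of pixels overwritten with a spuriously closer depth
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = depth + rng.normal(0.0, noise_std, size=depth.shape)

    # Simulate false positives: a few pixels report geometry closer than it is
    n_fp = int(fp_ratio * depth.size)
    ys = rng.integers(0, depth.shape[0], n_fp)
    xs = rng.integers(0, depth.shape[1], n_fp)
    noisy[ys, xs] = depth[ys, xs] * rng.uniform(0.3, 0.8, n_fp)

    return noisy
```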
Yes, that's why we want to see the impact of the depth maps on real datasets. Though the ablation study adds synthetic noise to the depth maps, it is hard to compare that with the noise introduced by off-the-shelf depth estimation models. Ground-truth depth is not available in practice.
Btw, how do you generate the reconstruction result in Fig. 1? I just want to learn how to generate textured meshes.
Closing this issue due to inactivity. Re-open it if you have further questions.
Hi there, a follow-up question: for the real data (from [26] and [40]), are the depth maps acquired by rasterizing the provided mesh, or from off-the-shelf depth estimation models (e.g. DPT in MonoSDF)? And do those depth maps have correct absolute scale, or are they ambiguous in scale/shift so that you need a shift/scale-invariant depth loss? Also, can you explain what the 1+3 real scenes from [26] and [40] are? At the bottom of the website of [40], they only list one scene of their own (Living Room2, for which I cannot find a download link) and two from Free-viewpoint (Living Room1 and Sofa).
The real data's depths are estimated with MVS tools, so the depth maps have correct absolute scale and do not need scale shifting. The real scenes from [26] and [40] were all calibrated by the authors of [40] in their experiments (for the living room scene from [26], the re-calibration provides more precise depth and cameras compared to the original version). They haven't made their dataset public yet, and I've been asking them for permission to release the data I used in this repository.
That's great news! Looking forward to the release of the real scenes with calibrated depth. Also wondering whether it is possible to release the tools used to get dense MVS depth for those scenes (and third-party scenes)?
They used CapturingReality to calibrate their scenes (reported in their paper), but I am not quite familiar with this field :)
Thanks! Also, just to confirm: in order to get depth/normal maps on real scenes, did you rasterize with their provided mesh and poses? Scenes from Free-viewpoint do not include depth maps or normal maps; they only provide meshes and poses.
Depth maps can be directly acquired from the MVS tools, or rasterization is also OK, I think.
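For reference, a minimal sketch of the rasterization option (not necessarily the pipeline used in the paper) could render a depth map from a provided mesh, camera intrinsics, and a camera-to-world pose with pyrender. It assumes the pose follows the OpenGL convention pyrender expects; `mesh_path`, `fx`, `fy`, `cx`, `cy`, and `c2w` are placeholders.

```python
import trimesh
import pyrender

def rasterize_depth(mesh_path, fx, fy, cx, cy, width, height, c2w):
    """Render a depth map for one view from a mesh and a camera pose.

    c2w: 4x4 camera-to-world matrix in the OpenGL convention
         (camera looks down -Z); convert beforehand if your poses
         use the OpenCV convention (camera looks down +Z).
    """
    tm = trimesh.load(mesh_path, force='mesh')
    scene = pyrender.Scene()
    scene.add(pyrender.Mesh.from_trimesh(tm))

    camera = pyrender.IntrinsicsCamera(fx=fx, fy=fy, cx=cx, cy=cy)
    scene.add(camera, pose=c2w)

    renderer = pyrender.OffscreenRenderer(width, height)
    depth = renderer.render(scene, flags=pyrender.RenderFlags.DEPTH_ONLY)
    renderer.delete()
    return depth  # (H, W) float array; 0 where no surface is hit
```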
Yeah, I agree. Just want to confirm which option was used in the real-scene experiments in I^2-SDF: did you use semi-dense MVS depth by feeding images into an MVS pipeline, or rasterized depth/normals from the provided mesh? W.r.t. monocular depth (e.g. DPT depth in MonoSDF), I don't see the current code of I^2-SDF supporting a scale/shift-invariant depth loss. I will try out DPT depth/normals with the bubble loss, but I am not sure whether things will just automatically work out if I plug in DPT depth/normals, or whether changes need to be made to the losses.
I currently haven't tested I^2-SDF on monocular depths. All depth maps I used in my experiments are absolute depths. By the way, I will probably release the real data I used in the next couple of days.
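For anyone plugging in relative monocular depth (e.g. DPT) instead of absolute depth, a scale/shift-invariant depth loss in the spirit of MonoSDF can be sketched as below: solve a per-batch least-squares scale and shift that aligns the monocular prior to the rendered depth before taking the error. This is an illustrative sketch, not code from this repository; tensor names are made up.

```python
import torch

def scale_shift_invariant_depth_loss(pred_depth, mono_depth, mask):
    """L1 depth loss after a least-squares scale/shift alignment.

    pred_depth: (R,) depth rendered by the model
    mono_depth: (R,) monocular depth prior (scale/shift ambiguous, e.g. DPT)
    mask:       (R,) bool, rays with a valid prior
    """
    d = mono_depth[mask]
    t = pred_depth[mask]

    # Solve min_{w, q} || d * w + q - t ||^2 in closed form (2x2 system)
    A = torch.stack([d, torch.ones_like(d)], dim=-1)   # (M, 2)
    ATA = A.transpose(0, 1) @ A                        # (2, 2)
    rhs = A.transpose(0, 1) @ t                        # (2,)
    w, q = torch.linalg.solve(ATA, rhs)

    return (d * w + q - t).abs().mean()
```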
Hi, I was confused by the statement that the bubble loss breaks the stable status of the SDF field converged so far. Why can't we merge the bubble step and the smooth step into one step?
Hi,
Great work. I have questions regarding Eq. (9) in the paper.
What is the depth/normal supervision? Specifically, what are D(r) and N(r) in Eqs. (11) and (12), respectively?
Thanks