
Differentiable Diffusion for Dense Depth Estimation from Multi-view Images

Numair Khan¹, Min H. Kim², James Tompkin¹
¹Brown, ²KAIST
CVPR 2021

Citation

If you use this code in your work, please cite our paper:

@article{khan2021diffdiff,
  title={Differentiable Diffusion for Dense Depth Estimation from Multi-view Images},
  author={Numair Khan and Min H. Kim and James Tompkin},
  journal={Computer Vision and Pattern Recognition},
  year={2021}
}

The code in pytorch_ssim is from https://github.com/Po-Hsun-Su/pytorch-ssim.

Included in this PyTorch code

  • A differentiable Gaussian splatting algorithm based on radiative transport that models scene occlusion.
  • A differentiable implementation of Szeliski's SIGGRAPH 2006 paper, Locally Adaptive Hierarchical Basis Preconditioning, for solving partial differential equations (PDEs) iteratively in very few steps (a toy illustration of this differentiable-diffusion idea follows this list).
  
  • A tile-based diffusion solver that combines the previous two approaches to scale to large images efficiently.
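To make the differentiable-diffusion idea concrete, here is a minimal, hypothetical PyTorch sketch, not the repository's actual solver (which uses occlusion-aware splatting and the LAHBPCG preconditioner): sparse depth samples are densified by a few Jacobi-style diffusion iterations built entirely from differentiable tensor ops. The function name, tensor shapes, and toy data below are illustrative assumptions.

import torch

def diffuse_sparse_depth(sparse_depth, mask, weights, num_iters=50):
    # sparse_depth, mask, weights: tensors of shape (1, 1, H, W).
    # mask is 1 where a sparse depth sample exists; weights in (0, 1] should be
    # small across image edges so depth does not bleed over object boundaries.
    depth = sparse_depth.clone()
    for _ in range(num_iters):
        # Weighted average of the four axis-aligned neighbours (Jacobi-style update).
        num = (torch.roll(depth * weights, 1, dims=2) +
               torch.roll(depth * weights, -1, dims=2) +
               torch.roll(depth * weights, 1, dims=3) +
               torch.roll(depth * weights, -1, dims=3))
        den = (torch.roll(weights, 1, dims=2) +
               torch.roll(weights, -1, dims=2) +
               torch.roll(weights, 1, dims=3) +
               torch.roll(weights, -1, dims=3))
        smoothed = num / den.clamp(min=1e-8)
        # Keep the known sparse samples fixed; diffuse everywhere else.
        depth = mask * sparse_depth + (1 - mask) * smoothed
    return depth

# Gradients flow from a loss on the dense result back to the sparse samples and
# the edge weights, because every operation above is differentiable.
H, W = 64, 64
mask = torch.zeros(1, 1, H, W)
mask[..., ::16, ::16] = 1.0
sparse = mask * torch.rand(1, 1, H, W)
weights = torch.ones(1, 1, H, W, requires_grad=True)  # toy uniform weights
dense = diffuse_sparse_depth(sparse, mask, weights)
dense.sum().backward()
print(weights.grad.abs().mean())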

Running the Code

Environment Setup

The code has been tested with Python 3.6 and PyTorch 1.5.1.

The provided environment.yml can be used to install all dependencies and create the conda environment diffdiffdepth:

$ conda env create -f environment.yml

$ conda activate diffdiffdepth
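To confirm that the environment picked up a compatible PyTorch build (the version above is simply the one the code was tested with), a quick check is:

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"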

Multiview Stereo

To run the code on multi-view stereo images, you will first need to generate camera poses using COLMAP. Once you have these, run the optimization by calling run_mvs.py:

$ python run_mvs.py --input_dir=<COLMAP_project_directory> --src_img=<target_img> --output_dir=<output_directory>

where <target_img> is the name of the image you want to compute depth for. Run python run_mvs.py -h to view additional optional arguments.

Example usage:

$ python run_mvs.py --input_dir=colmap_dir --src_img=img0.png --output_dir=./results
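If you have not yet generated poses, one possible way to produce a COLMAP project directory is COLMAP's automatic reconstruction pipeline; the paths below are placeholders and assume the source images live in colmap_dir/images:

$ colmap automatic_reconstructor --workspace_path colmap_dir --image_path colmap_dir/images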

Lightfield Images

Input sparse point clouds are generated with the code from our light field depth estimation work: https://github.com/brownvc/lightfielddepth

Troubleshooting

We will add to this section as issues arise.
