Leslie Ching Ow Tiong*,1, Dick Sigmund*,2, Andrew Beng Jin Teoh†,3
1Korea Institute of Science and Technology, 2AIDOT Inc., 3Yonsei University
*These authors contributed equally
†Corresponding author
This repository contains the source code for the paper *3D-C2FT: Coarse-to-fine Transformer for Multi-view 3D Reconstruction*, accepted at ACCV 2022.
We use the ShapeNet and Multi-view Real-life datasets, which are available as follows:
We tested the code with:
- PyTorch 1.12.0 with and without GPU under Ubuntu 18.04 and Anaconda3 (Python 3.8 and above)
- PyTorch 1.10.2 with and without GPU under Windows 10 and Anaconda3 (Python 3.7 and above)
- PyTorch with CPU under macOS 12.0 (M1) and Anaconda3 (Python 3.7 and above)
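A minimal environment setup along these lines might look as follows; the environment name and pinned versions are assumptions, so pick the PyTorch build that matches your platform:

```shell
# Create and activate a conda environment (Python 3.8, per the list above;
# the environment name "3d-c2ft" is illustrative)
conda create -n 3d-c2ft python=3.8 -y
conda activate 3d-c2ft

# CPU-only PyTorch build shown; choose a CUDA build for GPU machines
pip install torch==1.12.0
```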
- Run `eval.py` with the given configuration in `config.py`:
$ python eval.py
The pretrained model is available here:
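Once downloaded, a checkpoint of this kind can typically be loaded with plain PyTorch; the file name and state-dict keys below are illustrative stand-ins, not the repository's actual layout:

```python
import torch

# Stand-in for the downloaded pretrained checkpoint (hypothetical key name)
torch.save({"encoder.weight": torch.zeros(4, 4)}, "3d-c2ft-demo.pth")

# map_location="cpu" lets a GPU-trained checkpoint load on CPU-only machines
state = torch.load("3d-c2ft-demo.pth", map_location="cpu")
print(sorted(state.keys()))
```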
This work is open source under the MIT license.
@InProceedings{3DC2FT_2022_ACCV,
author = {Tiong, Leslie Ching Ow and Sigmund, Dick and Teoh, Andrew Beng Jin},
title = {3D-C2FT: Coarse-to-fine Transformer for Multi-view 3D Reconstruction},
booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
month = {December},
year = {2022},
pages = {1438-1454}
}