
Instant Neural Graphics Primitives

Ever wanted to train a NeRF model of a fox in under 5 seconds? Or fly around a scene captured from photos of a factory robot? Of course you have!

Here you will find an implementation of four neural graphics primitives: neural radiance fields (NeRF), signed distance functions (SDFs), neural images, and neural volumes. In each case, we train and render an MLP with a multiresolution hash input encoding using the tiny-cuda-nn framework.

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
arXiv:2201.05989 [cs.CV], Jan 2022
Project page / Paper / Video / BibTeX

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

Requirements

  • An NVIDIA GPU; tensor cores increase performance when available. All shown results come from an RTX 3090.
  • A C++14 capable compiler. The following choices are recommended and have been tested:
    • Windows: Visual Studio 2019
    • Linux: GCC/G++ 7.5 or higher
  • CUDA v10.2 or higher and CMake v3.21 or higher.
  • (optional) Python 3.7 or higher for interactive bindings. Also, run pip install -r requirements.txt.
  • (optional) OptiX 7.3 or higher for faster mesh SDF training. Set the environment variable OptiX_INSTALL_DIR to the installation directory if it is not discovered automatically.

If you are using Linux, install the following packages:

sudo apt-get install build-essential git python3-dev python3-pip libopenexr-dev libxi-dev \
                     libglfw3-dev libglew-dev libomp-dev libxinerama-dev libxcursor-dev

We also recommend installing CUDA and OptiX in /usr/local/ and adding the CUDA installation to your PATH. For example, if you have CUDA 11.4, add the following to your ~/.bashrc:

export PATH="/usr/local/cuda-11.4/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH"

Compilation (Windows & Linux)

Begin by cloning this repository and all its submodules using the following command:

$ git clone --recursive https://github.com/nvlabs/instant-ngp
$ cd instant-ngp

Then, use CMake to build the project (on Windows, this must be done in a developer command prompt):

instant-ngp$ cmake . -B build
instant-ngp$ cmake --build build --config RelWithDebInfo -j 16

If the build fails, please consult this list of possible fixes before opening an issue.

If the build succeeds, you can now run the code via the build/testbed executable or the scripts/run.py script described below.

If automatic GPU architecture detection fails (as can happen if you have multiple GPUs installed), set the TCNN_CUDA_ARCHITECTURES environment variable for the GPU you would like to use. The following table lists the values for common GPUs. If your GPU is not listed, consult this exhaustive list.

GPU                    TCNN_CUDA_ARCHITECTURES
RTX 30X0               86
A100                   80
RTX 20X0               75
TITAN V / V100         70
GTX 10X0 / TITAN Xp    61
GTX 9X0                52
K80                    37
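
For example, when building for an RTX 3090 (substitute the value for your GPU from the table above):

instant-ngp$ TCNN_CUDA_ARCHITECTURES=86 cmake . -B build
instant-ngp$ cmake --build build --config RelWithDebInfo -j 16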

Interactive training and rendering

This codebase comes with an interactive testbed that includes many features beyond our academic publication:

  • Additional training features, such as extrinsics and intrinsics optimization.
  • Marching cubes for NeRF->Mesh and SDF->Mesh conversion.
  • A spline-based camera path editor to create videos.
  • Debug visualizations of the activations of every neuron input and output.
  • And many more task-specific settings.
  • See also our one minute demonstration video of the tool.

NeRF fox

One test scene is provided in this repository, using a small number of frames from a casually captured phone video:

instant-ngp$ ./build/testbed --scene data/nerf/fox

Alternatively, download any NeRF-compatible scene (e.g. from the NeRF authors' drive). Now you can run:

instant-ngp$ ./build/testbed --scene data/nerf_synthetic/lego/transforms_train.json

For more information about preparing datasets for use with our NeRF implementation, please see this document.

SDF armadillo

instant-ngp$ ./build/testbed --scene data/sdf/armadillo.obj

Image of Einstein

instant-ngp$ ./build/testbed --scene data/image/albert.exr

To reproduce the gigapixel results, download, for example, the Tokyo image and convert it to .bin using the scripts/image2bin.py script. This custom format improves compatibility and loading speed when resolution is high.
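
The conversion might look like this (the exact arguments shown are an assumption; consult python scripts/image2bin.py --help for the actual interface):

instant-ngp$ python scripts/image2bin.py tokyo.jpg data/image/tokyo.bin

Now you can run: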

instant-ngp$ ./build/testbed --scene data/image/tokyo.bin

Volume renderer

Download the nanovdb volume for the Disney cloud, which is derived from here (CC BY-SA 3.0).

instant-ngp$ ./build/testbed --mode volume --scene data/volume/wdas_cloud_quarter.nvdb

Python bindings

To conduct controlled experiments in an automated fashion, all features from the interactive testbed (and more!) have Python bindings that can be easily instrumented. For an example of how the ./build/testbed application can be implemented and extended from within Python, see ./scripts/run.py, which supports a superset of the command line arguments that ./build/testbed does.
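
As a minimal sketch (assuming the repository root as working directory, so the compiled pyngp module in instant-ngp/build can be found; scripts/run.py remains the canonical reference), training a NeRF from Python might look like this:

import sys
sys.path.append("build")  # directory containing the compiled pyngp module

import pyngp as ngp

# Create a NeRF testbed and load the bundled fox scene.
testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("data/nerf/fox")
testbed.shall_train = True

# Each call to frame() performs a round of training.
for _ in range(1000):
    testbed.frame()

# Save the trained model (see the FAQ below for loading it again).
testbed.save_snapshot("fox.msgpack", False)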

If you'd rather build new models from the hash encoding and fast neural networks, consider tiny-cuda-nn's PyTorch extension.

Happy hacking!

Frequently asked questions (FAQ)

Q: How can I run instant-ngp in headless mode?

A: Use ./build/testbed --no-gui or python scripts/run.py. You can also compile without GUI via cmake -DNGP_BUILD_WITH_GUI=off ...

Q: Does this codebase run on Google Colab?

A: Yes. See this example by user @myagues. Caveat: this codebase requires large amounts of GPU RAM and might not fit on your assigned GPU. It will also run slower on older GPUs.

Q: Is there a Docker container?

A: Yes. We bundle a Visual Studio Code development container, whose .devcontainer/Dockerfile you can also use stand-alone.

Q: How can I edit and train the underlying hash encoding or neural network on a new task?

A: Use tiny-cuda-nn's PyTorch extension.

Q: How can I save the trained model and load it again later?

A: Two options:

  1. Use the GUI's "Snapshot" section.
  2. Use the Python bindings load_snapshot / save_snapshot (see scripts/run.py for example usage, and the sketch below).
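
For example, a saved snapshot can be restored from Python like so (a minimal sketch; the path is a placeholder):

import pyngp as ngp

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("fox.msgpack")  # restores the trained weights and settings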

Q: Can this codebase use multiple GPUs at the same time?

A: No. To select a specific GPU to run on, use the CUDA_VISIBLE_DEVICES environment variable. To optimize the compilation for that specific GPU, use the TCNN_CUDA_ARCHITECTURES environment variable.
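
For example, to run the testbed on the second GPU in a machine:

instant-ngp$ CUDA_VISIBLE_DEVICES=1 ./build/testbed --scene data/nerf/fox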

Q: What is the coordinate system convention?

A: See this helpful diagram by user @jc211.

Q: The NeRF reconstruction of my custom dataset looks bad; what can I do?

A: There could be multiple issues:

  • COLMAP might have been unable to reconstruct camera poses.
  • There might have been movement or blur during capture. Don't treat capture as an artistic task; treat it as photogrammetry. You want *as little blur as possible* in your dataset (motion, defocus, or otherwise) and all objects must be *static* during the entire capture. Bonus points if you are using a wide-angle lens (iPhone wide angle works well), because it covers more space than narrow lenses.
  • The dataset parameters (in particular aabb_scale) might have been tuned suboptimally. We recommend starting with aabb_scale=16 and then decreasing it to 8, 4, 2, and 1 until you get optimal quality; see the transforms.json excerpt after this list.
  • Carefully read our NeRF training & dataset tips.
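
For reference, aabb_scale is a top-level entry in your scene's transforms.json (excerpt; all other fields omitted):

{
    "aabb_scale": 16,
    "frames": [ ... ]
}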

Q: Why are background colors randomized during NeRF training?

A: Transparency in the training data indicates a desire for transparency in the learned model. With a solid background color, the model can minimize its loss by simply predicting that color rather than transparency (zero density). Randomizing the background colors forces the model to learn zero density to let the randomized colors "shine through".

Q: How to mask away NeRF training pixels (e.g. for dynamic object removal)?

A: For any training image xyz.* with dynamic objects, you can provide a dynamic_mask_xyz.png in the same folder. This file must be in PNG format, where non-zero pixel values indicate masked-away regions.
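
As a minimal sketch of generating such a mask (assuming numpy and Pillow are installed; the image size and masked region are placeholders):

import numpy as np
from PIL import Image

# Build a mask for a 1920x1080 training image xyz.jpg; non-zero
# pixels mark regions to exclude from NeRF training.
mask = np.zeros((1080, 1920), dtype=np.uint8)
mask[:200, :300] = 255  # e.g. a corner containing a dynamic object
Image.fromarray(mask).save("dynamic_mask_xyz.png")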

Troubleshooting compile errors

Before investigating further, make sure all submodules are up-to-date and try compiling again.

instant-ngp$ git submodule sync --recursive
instant-ngp$ git submodule update --init --recursive

If instant-ngp still fails to compile, update CUDA as well as your compiler to the latest versions you can install on your system. It is crucial that you update both, as newer CUDA versions are not always compatible with earlier compilers and vice versa. If your problem persists, consult the following table of known issues.

Problem: CMake error: No CUDA toolset found / CUDA_ARCHITECTURES is empty for target "cmTC_0c70f"
Resolution (Windows): the Visual Studio CUDA integration was not installed correctly. Follow these instructions to fix the problem without re-installing CUDA. (#18)
Resolution (Linux): environment variables for your CUDA installation are probably incorrectly set. You may work around the issue using cmake . -B build -DCMAKE_CUDA_COMPILER=/usr/local/cuda-<your cuda version>/bin/nvcc (#28)

Problem: CMake error: No known features for CXX compiler "MSVC"
Resolution: Reinstall Visual Studio & make sure you run CMake from a developer shell. (#21)

Problem: Compile error: undefined references to "cudaGraphExecUpdate" / identifier "cublasSetWorkspace" is undefined
Resolution: Update your CUDA installation (which is likely 11.0) to 11.3 or higher. (#34 #41 #42)

Problem: Compile error: too few arguments in function call
Resolution: Update submodules with the above two git commands. (#37 #52)

Problem: Python error: No module named 'pyngp'
Resolution: It is likely that CMake did not detect your Python installation and therefore did not build pyngp. Check the CMake logs to verify this. If pyngp was built in a different folder than instant-ngp/build, Python will be unable to detect it and you have to supply the full path in the import statement. (#43)

If you cannot find your problem in the table, please feel free to open an issue and ask for help.

Thanks

Many thanks to Jonathan Tremblay and Andrew Tao for testing early versions of this codebase and to Arman Toorians and Saurabh Jain for the factory robot dataset. We also thank Andrew Webb for noticing that one of the prime numbers in the spatial hash was not actually prime; this has since been fixed.

This project makes use of a number of awesome open source libraries, including:

  • tiny-cuda-nn for fast CUDA networks and input encodings
  • tinyexr for EXR format support
  • tinyobjloader for OBJ format support
  • stb_image for PNG and JPEG support
  • Dear ImGui, an excellent immediate-mode GUI library
  • Eigen, a C++ template library for linear algebra
  • pybind11 for seamless C++ / Python interop
  • and others! See the dependencies folder.

Many thanks to the authors of these brilliant projects!

License and Citation

@article{mueller2022instant,
    title = {Instant Neural Graphics Primitives with a Multiresolution Hash Encoding},
    author = {Thomas M\"uller and Alex Evans and Christoph Schied and Alexander Keller},
    journal = {arXiv:2201.05989},
    year = {2022},
    month = jan
}

Copyright © 2022, NVIDIA Corporation. All rights reserved.

This work is made available under the Nvidia Source Code License-NC. Click here to view a copy of this license.