
WHU-MFM Dataset

This repository provides a Blender Python script for generating a multi-focus and misaligned image dataset, which contains each scene's depth and defocus maps, object masks, camera parameters, and blurred and all-in-focus images.

Two datasets with different resolutions, 960 × 720 and 480 × 360, are available for download from [Baidu Yun: axke] and [Google Drive]. The 960 × 720 version is split into three separate ZIP files; download the version you need according to the name of each archive.

Installation

  • Blender 2.93 or above, downloadable here.
  • Clone the repository:
    git clone https://github.com/PeimingCHEN/WHU-MFM-Dataset
  • Install the dependencies into Blender's bundled Python environment:
    pip install -r requirement.txt
    For example, on Linux you can run:
    cd blender-2.93.1-linux-x64/2.93/python/bin
    ./python3.9 -m pip install -r requirement.txt

Getting Started

  • Step 1: Prepare the assets needed for rendering. We collected 1000 cultural heritage meshes from Sketchfab and packed them in mesh.zip. In addition, background.blend is a Blender file that contains 5 diverse photographic backgrounds and 5 different professional photography lights. If you want to use other meshes from Sketchfab, download them in .gltf format and run the provided unpack.py and pre-process.py scripts for batch decompression and preprocessing, which adjust the models' location and size (see the sketch after this list). The rendering uses the following assets:

    • a) Object meshes.

    • b) Background meshes.

    • c) Environment lighting.

    • d) Camera positions.

  • Step 2: Set the dataset-related parameters in dataset_create.py.
    At the bottom of dataset_create.py, you can set the relevant parameters as needed, including the number of meshes, the number of rendered scenes, the number of focus stacks in a scene, the virtual camera parameters, the result output path, etc.; an illustrative sketch is given after this list.
    For more details, please refer to the code comments and our paper.

  • Step 3: Run the rendering code.
    ./blender -b background.blend -P dataset_create.py
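
The exact contents of unpack.py and pre-process.py are not reproduced here; the following is only a minimal sketch of the kind of pre-processing Step 1 describes, assuming already-unpacked .gltf files and using standard Blender Python (bpy) operators. The folder path and the target size are illustrative assumptions, not the repository's actual settings.

    # Hedged sketch: batch-import .gltf meshes and normalize their location and size.
    # MESH_DIR and TARGET_SIZE are assumptions for illustration only.
    import glob
    import os
    import bpy

    MESH_DIR = "./mesh"    # assumed folder of unpacked .gltf models
    TARGET_SIZE = 1.0      # assumed longest-edge size after normalization

    for gltf_path in glob.glob(os.path.join(MESH_DIR, "*.gltf")):
        bpy.ops.object.select_all(action='DESELECT')
        bpy.ops.import_scene.gltf(filepath=gltf_path)  # standard Blender glTF importer

        for obj in bpy.context.selected_objects:
            if obj.type != 'MESH':
                continue
            # Scale so that the longest bounding-box edge equals TARGET_SIZE.
            scale = TARGET_SIZE / max(obj.dimensions)
            obj.scale = (scale, scale, scale)
            # Re-centre the object at the world origin.
            obj.location = (0.0, 0.0, 0.0)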
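
As a hypothetical illustration of the Step 2 settings, the block below shows the kind of parameters meant; the actual variable names and defaults are those at the bottom of dataset_create.py, which may differ.

    # Hypothetical parameter block; the real names in dataset_create.py may differ.
    NUM_MESHES = 1000              # number of object meshes sampled from mesh.zip
    NUM_SCENES = 3000              # number of scenes to render
    FOCUS_STACK_SIZE = 5           # images per focus stack in each scene
    RESOLUTION = (960, 720)        # output resolution, e.g. (960, 720) or (480, 360)
    CAMERA_PARAMS = {
        "focal_length_mm": 50.0,   # virtual camera focal length
        "fstop": 2.8,              # aperture (controls depth of field)
        "sensor_width_mm": 36.0,
    }
    OUTPUT_PATH = "./output"       # where rendered results are written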

Result

The dataset is produced at two resolutions, 960 × 720 and 480 × 360. Each resolution contains a total of 3000 rendered scenes for training and testing. Each scene contains 5 blurred RGB images with different focal planes and camera deviations, together with the corresponding 5 defocus maps, 5 depth maps, 5 object masks, 5 all-in-focus images (rendered without depth of field), and the camera matrix of each camera pose.
Example scene:

  • Focal stack of defocused images.
  • Corresponding all-in-focus images.
  • Corresponding defocus maps.
  • Corresponding depth maps.
  • Corresponding object masks.
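
The file names and formats below are assumptions for illustration only; the actual naming convention is defined by dataset_create.py. A minimal sketch of reading one scene's focal stack might look like this:

    # Hedged sketch: load one rendered scene; file names and formats are assumed.
    import os
    import numpy as np
    import imageio.v3 as iio

    scene_dir = "./output/scene_0000"   # assumed per-scene folder
    stack = []
    for i in range(5):                  # 5 focal planes per scene
        blurred = iio.imread(os.path.join(scene_dir, f"blur_{i}.png"))
        all_in_focus = iio.imread(os.path.join(scene_dir, f"aif_{i}.png"))
        depth = iio.imread(os.path.join(scene_dir, f"depth_{i}.png"))
        defocus = iio.imread(os.path.join(scene_dir, f"defocus_{i}.png"))
        mask = iio.imread(os.path.join(scene_dir, f"mask_{i}.png"))
        stack.append((blurred, all_in_focus, depth, defocus, mask))

    cameras = np.load(os.path.join(scene_dir, "camera.npy"))  # assumed camera matrices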

Note: To save rendering time, we use the Cycles rendering engine and turn on NVIDIA OptiX. Please adjust these settings according to your own hardware; a sketch of the corresponding Blender Python settings is given below.
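
If you prefer to switch the engine and device from a script rather than the Blender UI, the following is a minimal sketch using standard Blender Python settings (it is not code from this repository); fall back to 'CUDA' or CPU rendering if your GPU does not support OptiX.

    # Hedged sketch: enable Cycles with OptiX GPU rendering via the Blender Python API.
    import bpy

    bpy.context.scene.render.engine = 'CYCLES'

    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'OPTIX'   # or 'CUDA' / 'NONE' depending on your device
    prefs.get_devices()                   # refresh the list of detected devices
    for device in prefs.devices:
        device.use = True                 # enable every detected device

    bpy.context.scene.cycles.device = 'GPU'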
