Official implementation for the paper:
Learning to Generate 3D Shapes from a Single Example
Rundi Wu, Changxi Zheng
Columbia University
SIGGRAPH Asia 2022 (Journal Track)
Prerequisites:
- Python 3.9+
- An NVIDIA GPU
Install dependencies with conda:
conda env create -f environment.yml
conda activate ssg
or install dependencies with pip:
pip install -r requirement.txt
# NOTE: check https://pytorch.org/ for pytorch installation command for your CUDA version
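As an optional sanity check, you can verify that the installed PyTorch build sees your GPU (a minimal sketch, not part of the official setup):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"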
We provide pretrained models for all shapes used in our paper. Download all of them (about 1 GB):
bash download_models.sh
or download each model individually, e.g.,
bash download_models.sh ssg_Acropolis_r256s8
We provide a simple GUI demo (based on Open3D) that allows quick shape generation with a trained model. For example, run
python gui_demo.py checkpoints/ssg_Acropolis_r256s8
(Recorded on Ubuntu 20.04 with an NVIDIA 3090 GPU. Also tested on Windows 11 with an NVIDIA 2070 GPU.)
To randomly generate new shapes, run
python main.py test --tag ssg_Acropolis_r256s8 -g 0 --n_samples 10 --mode rand
The generated shapes will be saved in .h5 format, compatible with the training data.
Specify --resize to change the spatial dimensions. For example, --resize 1.5 1.0 1.0 generates shapes whose size along the x-axis is 1.5 times that of the original.
Specify --upsample to construct the output shape at a higher resolution. For example, --upsample 2 produces output at 2 times the original resolution.
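If you want to inspect a generated .h5 file programmatically, the following is a minimal sketch using h5py; the file name below is only an example, and no particular dataset names are assumed:
import h5py

path = "checkpoints/ssg_Acropolis_r256s8/rand_n10_bin_r1x1x1/example.h5"  # hypothetical file name
with h5py.File(path, "r") as f:
    # Print every dataset in the file together with its shape and dtype.
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", None), getattr(obj, "dtype", None)))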
For interpolation and extrapolation between two randomly generated samples, run
python main.py test --tag ssg_Acropolis_r256s8 -g 0 --n_samples 5 --mode interp
To quickly visualize the generated shapes (in .h5 format), run
python vis_export.py -s checkpoints/ssg_Acropolis_r256s8/rand_n10_bin_r1x1x1 -f mesh --smooth 3 --cleanup
--smooth specifies the number of Laplacian smoothing iterations. --cleanup keeps only the largest connected component.
Specify --export obj to export the meshes in obj format.
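For reference, the effect of --smooth and --cleanup can be approximated with scikit-image and trimesh. This is a rough sketch of the idea, not the exact implementation in vis_export.py; the input file name and voxel-grid layout are assumptions:
import numpy as np
import trimesh
from skimage import measure

vox = np.load("voxels.npy")  # hypothetical binary (D, H, W) occupancy grid

# Extract a surface mesh from the binary grid with marching cubes.
verts, faces, _, _ = measure.marching_cubes(vox.astype(np.float32), level=0.5)
mesh = trimesh.Trimesh(vertices=verts, faces=faces)

# --smooth 3: three iterations of Laplacian smoothing.
trimesh.smoothing.filter_laplacian(mesh, iterations=3)

# --cleanup: keep only the largest connected component.
mesh = max(mesh.split(only_watertight=False), key=lambda m: len(m.faces))

mesh.export("shape.obj")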
We list the sources of all example shapes used in our paper in data/README.md. Most of them are freely available and can be downloaded accordingly.
To construct the training data (a voxel pyramid) from a mesh, we rely on binvox.
After downloading it, make sure to set BINVOX_PATH in voxelization/voxelize.py to the path of your binvox executable.
Then run our script
cd voxelization
python voxelize.py -s {path-to-your-mesh-file} --res 128 --n_scales 6 -o {save-path.h5}
# --res: finest voxel resolution. --n_scales: number of scales.
The processed data will be saved in .h5 format.
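Conceptually, the voxel pyramid stores the finest binary grid together with progressively coarser copies of it. A minimal sketch of that idea using max-pooling (the actual downsampling in voxelize.py may differ):
import torch
import torch.nn.functional as F

def build_pyramid(vox, n_scales):
    # vox: binary occupancy grid as a float tensor of shape (D, H, W)
    pyramid = [vox]
    x = vox[None, None]  # add batch and channel dimensions for pooling
    for _ in range(n_scales - 1):
        # Halve each spatial dimension; a coarse voxel is occupied if any of its children are.
        x = F.max_pool3d(x, kernel_size=2)
        pyramid.append(x[0, 0])
    return pyramid[::-1]  # coarsest to finest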
To train on the processed h5 data, run
python main.py train --tag {your-experiment-tag} -s {path-to-processed-h5-data} -g {gpu-id}
By default, the log and model will be saved in checkpoints/{your-experiment-tag}.
We provide code for the evaluation metrics LP-IoU, LP-F-score, SSFID and Div.
SSFID relies on a pretrained 3D shape classifier. Please download it from here and put Clsshapenet_128.pth under the evaluation folder.
To perform evaluation, we first randomly generate 100 shapes, e.g.,
python main.py test --tag ssg_Acropolis_r128s6 -g 0 --n_samples 100 --mode rand
Then run the evaluation script to compute all metrics, e.g.,
cd evaluation
# ./eval.sh {generated-shapes-folder} {reference-shape} {gpu-ids}
./eval.sh ../checkpoints/ssg_Acropolis_r128s6/rand_n100_bin_r1x1x1 ../data/Acropolis_r128s6.h5 0
See the evaluation folder for evaluation scripts for each individual metric.
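For reference, the IoU underlying LP-IoU is plain intersection-over-union between two binary occupancy grids; a minimal sketch is shown below (the full LP-IoU metric in the evaluation folder additionally operates on local patches, as described in the paper):
import numpy as np

def voxel_iou(a, b):
    # IoU between two binary occupancy grids of the same shape.
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 1.0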
We also provide a baseline, SinGAN-3D, as described in our paper. To use it, simply specify --G_struct conv3d when training the model. Pretrained models are also provided (their names begin with "singan3d").
We provide code and configurations for rendering the figures in our paper.
We rely on Blender and BlenderToolbox.
To use our rendering script, make sure you have them installed and update the corresponding paths (BLENDER_PATH and BLENDERTOOLBOX_PATH) in render/blender_utils.py.
Then run
cd render
python render_script.py -s {path-to-generated-shapes-folder} -c {render-config-name} --smooth 3 --cleanup
See render/render_configs.json for saved rendering configs.
Parts of this repo are developed based on code from SinGAN, ConSinGAN, DECOR-GAN and BlenderToolbox. We would like to thank their authors.
@article{wu2022learning,
title={Learning to Generate 3D Shapes from a Single Example},
author={Wu, Rundi and Zheng, Changxi},
journal={ACM Transactions on Graphics (TOG)},
volume={41},
number={6},
articleno={224},
numpages={19},
year={2022},
publisher={ACM New York, NY, USA}
}