3D Shape Generation Baselines in PyTorch.
- A DataParallel hack for balanced GPU memory usage
- Configurable model parameters
- Customizable models and datasets
- More models are work in progress
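The balanced-memory DataParallel trick usually amounts to computing the loss inside the replicated module, so each GPU keeps its own activations and only per-replica scalar losses are gathered on the default device. A minimal sketch of that pattern (the `ModelWithLoss` wrapper here is illustrative, not necessarily this repo's exact implementation):

```python
import torch
import torch.nn as nn

class ModelWithLoss(nn.Module):
    """Wrap model + criterion so DataParallel scatters the loss
    computation across GPUs; only per-GPU scalar losses are gathered,
    instead of piling all model outputs onto GPU 0."""
    def __init__(self, model, criterion):
        super().__init__()
        self.model = model
        self.criterion = criterion

    def forward(self, x, target):
        pred = self.model(x)
        # unsqueeze to shape (1,) so gather concatenates one loss per replica
        return self.criterion(pred, target).unsqueeze(0)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(8, 4)
wrapped = nn.DataParallel(ModelWithLoss(model, nn.MSELoss()).to(device))
x, y = torch.randn(16, 8, device=device), torch.randn(16, 4, device=device)
loss = wrapped(x, y).mean()   # average the per-replica losses
loss.backward()
```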
Shape representations:
- 💎 Polygonal Mesh
- 👾 Volumetric
- 🎲 Point Cloud
- 🎯 Implicit Function
- 💊 Primitive
Input modalities:
- 🏞 RGB Image
- 📡 Depth Image
- 👾 Voxel
- 🎲 Point Cloud
- 🎰 Unconditional (random noise)
Evaluation metrics:
- Chamfer Distance
- F-score
- IoU
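For reference, the three metrics can be sketched in plain PyTorch. These are illustrative versions: production evaluation typically uses a CUDA Chamfer kernel such as ChamferDistancePytorch, and the F-score threshold `tau` is a free parameter chosen per benchmark.

```python
import torch

def chamfer_distance(p1, p2):
    """Symmetric Chamfer distance between point clouds p1 (N,3), p2 (M,3)."""
    d = torch.cdist(p1, p2)                      # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def f_score(p1, p2, tau=0.01):
    """F-score: harmonic mean of precision/recall at distance threshold tau."""
    d = torch.cdist(p1, p2)
    precision = (d.min(dim=1).values < tau).float().mean()
    recall = (d.min(dim=0).values < tau).float().mean()
    return 2 * precision * recall / (precision + recall + 1e-8)

def voxel_iou(a, b):
    """IoU between two boolean occupancy grids of the same shape."""
    inter = (a & b).sum().float()
    union = (a | b).sum().float()
    return inter / union
```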
Models:
- 💎 Pixel2Mesh
- 🎯 DISN
- 👾 3DGAN
- 👾 Voxel-based methods (WIP)
- 🎲 Point-cloud-based methods (WIP)
Requirements:
- Ubuntu 16.04 / 18.04
- PyTorch 1.3.1
- CUDA 10
- conda > 4.6.2
Use Anaconda to install all dependencies:
conda env create -f environment.yml
Training:
CUDA_VISIBLE_DEVICES=<gpus> python train.py --options <config>
Inference:
CUDA_VISIBLE_DEVICES=<gpus> python predictor.py --options <config>
To customize the codebase:
- Custom training/inference loop: add code in scheduler and inherit the base class
- Custom model: add it in models/zoo
- Custom config options: add them in utils/config
- Custom dataset: add it in datasets/data
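The base-class API in scheduler is not reproduced here, but the inherit-and-override pattern it describes looks roughly like this (class and method names are hypothetical):

```python
# Hypothetical sketch of the inherit-and-override pattern; the real base
# class lives in the repo's scheduler package and its API may differ.
class BaseScheduler:
    def __init__(self, model, dataloader):
        self.model = model
        self.dataloader = dataloader

    def run(self):
        # Drive the loop; subclasses only define what one step does.
        return [self.step(batch) for batch in self.dataloader]

    def step(self, batch):
        raise NotImplementedError("subclasses implement one train/infer step")

class MyTrainScheduler(BaseScheduler):
    def step(self, batch):
        # Custom per-batch logic goes here (forward, loss, optimizer step, ...)
        return self.model(batch)

outputs = MyTrainScheduler(model=lambda b: b * 2, dataloader=[1, 2, 3]).run()
```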
Pixel2Mesh
- Input: RGB Image
- Representation: Mesh
- Loss: Chamfer Distance
- Output: Mesh in camera view
DISN
- Input: RGB Image
- Representation: SDF
- Post-processing: Marching Cubes
- Output: Mesh in camera view
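Marching Cubes turns the SDF predicted by DISN into a triangle mesh by extracting the zero level set. A sketch on a toy sphere SDF using scikit-image's `measure.marching_cubes` (scikit-image here is an assumption; the repo may use a different Marching Cubes implementation):

```python
import numpy as np
from skimage import measure

# Sample a sphere SDF (radius 0.5) on a 32^3 grid over [-1, 1]^3.
n = 32
coords = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract the zero level set as a triangle mesh (vertices in voxel coords).
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
```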
3DGAN
- Input: Random Noise
- Representation: Volumetric
- Output: Voxel
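The 3DGAN generator maps a latent noise vector to an occupancy grid through transposed 3D convolutions. A minimal sketch (layer widths and the 32^3 output resolution are illustrative; the paper's generator outputs 64^3):

```python
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    """Sketch of a 3DGAN-style generator: noise (B, z_dim) -> voxel grid."""
    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            # 1^3 -> 4^3 -> 8^3 -> 16^3 -> 32^3
            nn.ConvTranspose3d(z_dim, 128, 4, 1, 0), nn.BatchNorm3d(128), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid(),  # occupancy in [0, 1]
        )

    def forward(self, z):
        # treat the noise vector as a 1x1x1 volume with z_dim channels
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

out = VoxelGenerator()(torch.randn(2, 200))  # (2, 1, 32, 32, 32)
```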
Our work builds on the codebase of an unofficial Pixel2Mesh framework. The Chamfer loss code is based on ChamferDistancePytorch.
Official baseline code:
- DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction
- Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images
- 3DGAN: Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
Please follow the license of each model's official implementation.