Xiaoyu Zhan · Jianxin Yang · Yuanqi Li · Jie Guo · Yanwen Guo · Wenping Wang
This repository contains the official PyTorch implementation of Semantic Human Mesh Reconstruction with Textures.
Start by creating a conda environment.
```bash
git clone https://github.com/ZhanxyR/SHERT.git
cd SHERT
conda create -n shert python=3.8
conda activate shert
```
Follow PyTorch for installation. We recommend PyTorch >= 2.0 (the lowest version we tested is 1.13).
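As a reference, a pip-based PyTorch install might look like the sketch below. The pinned versions and CUDA tag are illustrative assumptions, not requirements of this repository; use the selector on pytorch.org for your platform.

```bash
# Illustrative only: a CUDA 11.8 build of PyTorch 2.0.x.
# Pick the command matching your CUDA/driver from pytorch.org instead.
pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
```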
Important
Please install this specific version of Open3D manually to avoid compatibility problems. (We will fix the bugs and adapt to newer versions later.)

```bash
pip install open3d==0.10.0
```
```bash
pip install -r requirements.txt
```
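As a quick sanity check (our addition, not part of the original setup), you can confirm that the pinned Open3D version is the one actually importable:

```bash
# Should print 0.10.0 if the pinned wheel was installed correctly.
python -c "import open3d; print(open3d.__version__)"
```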
Follow PyTorch3D for installation. We recommend building it from source. The version we used is v0.7.6, but lower versions should also work.
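For reference, the PyTorch3D install guide documents a pip source build directly from a tagged release; this is a standard PyTorch3D command rather than one from this repository, and it requires a matching CUDA toolchain:

```bash
# Build PyTorch3D v0.7.6 from source (compilation can take a while).
pip install "git+https://github.com/facebookresearch/pytorch3d.git@v0.7.6"
```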
Note
If you have trouble building the package, you can set `refine_iter` to `1` in the corresponding `config.yaml` (e.g. `./examples/demo_scan/config.yaml`) to avoid using PyTorch3D, as in the sketch below.
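For example, assuming `refine_iter` appears as a top-level `key: value` line in the config (the exact layout of `config.yaml` may differ), you could patch it in place:

```bash
# Hypothetical edit: force refine_iter to 1 so the PyTorch3D-dependent
# refinement stage is skipped. Adjust the pattern to your config layout.
sed -i 's/^refine_iter:.*/refine_iter: 1/' ./examples/demo_scan/config.yaml
```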
Selectively download `data/smplx`, `data/face`, `examples/*`, and `save/ckpt` from NJU-Box or Google-Drive.
Download the SMPL-X v1.1 models (Male, Female, Neutral) from SMPL-X and put them in `data/models`.
The completed directory structure should look like this:
```
|-- SHERT
    |-- data
        |-- cameras
        |-- masks
        |-- smplx
        |-- face
        |-- models
            |-- smplx
                |-- SMPLX_*.npz
    |-- examples
        |-- demo_image_w_gt_smplx
        |-- demo_image
        |-- demo_scan
    |-- save
        |-- ckpt
            |-- inpaint.pth        # For mesh completion
            |-- refine.pth         # For mesh refinement
            |-- texture_local      # For texture inpainting
            |-- texture_global     # For texture repainting
```
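Once everything is downloaded, a quick listing can confirm that the models and checkpoints landed where the tree above expects them (a convenience check, not an official step):

```bash
# Both commands should succeed and list files if the layout is correct.
ls data/models/smplx/SMPLX_*.npz
ls save/ckpt/
```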
- The whole pipeline consists of two steps: reconstruction and texture inpainting.
⚡ Run `quick_demo` to test reconstruction on the provided examples. The results will be saved to `./examples/$subject$/results`.
```bash
# Use the ECON-predicted mesh and fitted SMPL-X.
python -m apps.quick_demo

# Use a THuman scan and fitted SMPL-X.
python -m apps.quick_demo -e scan

# Given only the image, predict all inputs with ECON.
python -m apps.quick_demo -e image
```
🖥️ For texture inpainting, we provide a client script and a server script that let you run the diffusion model on a remote server. The client script creates a web UI with Gradio, which can be accessed at http://localhost:7860.
Note
If you run the client in a new environment, the corresponding dependencies need to be installed there as well. The first time you use inpainting, the pretrained diffusion checkpoints will be downloaded from Hugging Face.
```bash
# Run the server
python -m apps.texture_rpc_server

# Run a remote client
python -m apps.texture_client -i <server.ip>

# Run a local client
python -m apps.texture_client -i localhost
```
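Once the client is running, you can verify that the Gradio web UI is reachable, assuming the default port 7860 mentioned above:

```bash
# Expect an HTTP 200 response header from the Gradio server.
curl -I http://localhost:7860
```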
This work was supported by the National Natural Science Foundation of China (No. 62032011) and the Natural Science Foundation of Jiangsu Province (No. BK20211147).
There are also many powerful resources that greatly benefited our work:
- ICON
- ECON
- SMPL-X
- ControlNet
- Stable-Diffusion
- EMOCA
- THuman2.0
- PIFu
- PIFuHD
- Open-PIFuhd
- DecoMR
- Densebody
```bibtex
@inproceedings{zhan2024shert,
  title     = {Semantic Human Mesh Reconstruction with Textures},
  author    = {Zhan, Xiaoyu and Yang, Jianxin and Li, Yuanqi and Guo, Jie and Guo, Yanwen and Wang, Wenping},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024},
}
```
Zhan, Xiaoyu ([email protected]) and Yang, Jianxin ([email protected])