- Installation
- Data Preparation
- Body Model Preparation
- Inference / Demo
- Evaluation
- Training
- More Tutorials
## Installation

Please refer to install.md for installation.
## Data Preparation

Please refer to data_preparation.md for data preparation.
## Body Model Preparation

- SMPL v1.0 is used in our experiments.
  - The neutral model can be downloaded from SMPLify.
  - All body models have to be renamed to follow the `SMPL_{GENDER}.pkl` format.
    For example, `mv basicModel_neutral_lbs_10_207_0_v1.0.0.pkl SMPL_NEUTRAL.pkl`
- J_regressor_extra.npy
- J_regressor_h36m.npy
- smpl_mean_params.npz
Download the above resources and arrange them in the following file structure:
```text
mmhuman3d
├── mmhuman3d
├── docs
├── tests
├── tools
├── configs
└── data
    └── body_models
        ├── J_regressor_extra.npy
        ├── J_regressor_h36m.npy
        ├── smpl_mean_params.npz
        └── smpl
            ├── SMPL_FEMALE.pkl
            ├── SMPL_MALE.pkl
            └── SMPL_NEUTRAL.pkl
```
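If it helps, the layout above can be created with a few shell commands. The sketch below is only illustrative: it assumes the downloaded files sit in the current working directory and that the SMPL v1.0 archive has already been extracted (the female/male source file names are assumptions based on the SMPL v1.0 release).

```shell
# Illustrative sketch: arrange the downloaded resources into the expected layout.
# Assumes the files were downloaded/extracted into the current working directory.
mkdir -p data/body_models/smpl
mv J_regressor_extra.npy J_regressor_h36m.npy smpl_mean_params.npz data/body_models/
# Rename the SMPL v1.0 models to the SMPL_{GENDER}.pkl convention
# (the female/male source file names below are assumptions, not taken from this document).
mv basicModel_neutral_lbs_10_207_0_v1.0.0.pkl data/body_models/smpl/SMPL_NEUTRAL.pkl
mv basicModel_f_lbs_10_207_0_v1.0.0.pkl data/body_models/smpl/SMPL_FEMALE.pkl
mv basicmodel_m_lbs_10_207_0_v1.0.0.pkl data/body_models/smpl/SMPL_MALE.pkl
```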
## Inference / Demo

### Single-person

```shell
python demo/estimate_smpl_image.py ${CONFIG_FILE} ${CHECKPOINT} [optional arguments]
```
Optional arguments include:

- `--single_person_demo`: flag for single-person inference
- `--det_config`: MMDetection config
- `--det_checkpoint`: MMDetection checkpoint
- `--input_path`: input path
- `--show_path`: directory to save rendered images or video
- `--smooth_type`: smoothing mode
Example:

```shell
python demo/estimate_smpl_image.py \
    configs/hmr/resnet50_hmr_pw3d.py \
    data/checkpoints/resnet50_hmr_pw3d.pth \
    --single_person_demo \
    --det_config demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py \
    --det_checkpoint https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    --input_path demo/resources/single_person_demo.mp4 \
    --show_path vis_results/single_person_demo.mp4 \
    --smooth_type savgol
```
Note that the MMHuman3D checkpoints can be downloaded from the model zoo. Here we take HMR (`resnet50_hmr_pw3d.pth`) as an example.
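The example above expects the checkpoint at `data/checkpoints/resnet50_hmr_pw3d.pth`. A minimal sketch for placing it there, assuming you have copied the download link from the model zoo (the URL placeholder below is not a real link):

```shell
# Illustrative sketch: put the HMR checkpoint where the demo command expects it.
# Replace <MODEL_ZOO_URL> with the actual download link from the model zoo.
mkdir -p data/checkpoints
wget -O data/checkpoints/resnet50_hmr_pw3d.pth <MODEL_ZOO_URL>
```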
### Multi-person

Optional arguments include:

- `--multi_person_demo`: flag for multi-person inference
- `--tracking_config`: MMTracking config
- `--input_path`: input path
- `--show_path`: directory to save rendered images or video
- `--smooth_type`: smoothing mode
Example:

```shell
python demo/estimate_smpl_image.py \
    configs/hmr/resnet50_hmr_pw3d.py \
    data/checkpoints/resnet50_hmr_pw3d.pth \
    --multi_person_demo \
    --tracking_config demo/mmtracking_cfg/deepsort_faster-rcnn_fpn_4e_mot17-private-half.py \
    --input_path demo/resources/multi_person_demo.mp4 \
    --show_path vis_results/multi_person_demo.mp4 \
    --smooth_type savgol
```
## Evaluation

We provide pretrained models in the respective method folders under configs.
```shell
python tools/test.py ${CONFIG} --work-dir=${WORK_DIR} ${CHECKPOINT} --metrics=${METRICS}
```
Example:

```shell
python tools/test.py configs/hmr/resnet50_hmr_pw3d.py --work-dir=work_dirs/hmr work_dirs/hmr/latest.pth --metrics pa-mpjpe mpjpe
```
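For evaluation with multiple GPUs, OpenMMLab projects usually ship a `tools/dist_test.sh` launcher that forwards extra arguments to `tools/test.py`. The sketch below assumes MMHuman3D follows that convention; the script name and argument order are assumptions, so check the `tools/` directory of your checkout.

```shell
# Illustrative sketch: distributed evaluation on 8 GPUs with an assumed
# OpenMMLab-style dist_test.sh launcher (verify the script and argument order in tools/).
./tools/dist_test.sh configs/hmr/resnet50_hmr_pw3d.py work_dirs/hmr/latest.pth 8 --metrics pa-mpjpe mpjpe
```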
If you run MMHuman3D on a cluster managed with Slurm, you can use the script `slurm_test.sh`:

```shell
./tools/slurm_test.sh ${PARTITION} ${JOB_NAME} ${CONFIG} ${WORK_DIR} ${CHECKPOINT} --metrics ${METRICS}
```
Example:

```shell
./tools/slurm_test.sh my_partition test_hmr configs/hmr/resnet50_hmr_pw3d.py work_dirs/hmr work_dirs/hmr/latest.pth 8 --metrics pa-mpjpe mpjpe
```
## Training

```shell
python tools/train.py ${CONFIG_FILE} ${WORK_DIR} --no-validate
```
Example: using 1 GPU to train HMR.

```shell
python tools/train.py ${CONFIG_FILE} ${WORK_DIR} --gpus 1 --no-validate
```
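For training with multiple GPUs on a single machine, OpenMMLab projects usually provide a `tools/dist_train.sh` launcher that forwards extra arguments to `tools/train.py`. The command below is only a sketch under that assumption; the script name and argument order are not taken from this document.

```shell
# Illustrative sketch: training on 8 GPUs with an assumed OpenMMLab-style
# dist_train.sh launcher (verify the script exists in tools/ before relying on it).
./tools/dist_train.sh configs/hmr/resnet50_hmr_pw3d.py 8 --no-validate
```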
If you run MMHuman3D on a cluster managed with Slurm, you can use the script `slurm_train.sh`:

```shell
./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR} ${GPU_NUM} --no-validate
```
Common optional arguments include:

- `--resume-from ${CHECKPOINT_FILE}`: resume training from a previous checkpoint file
- `--no-validate`: do not evaluate the checkpoint during training
Example: using 8 GPUs to train HMR on a Slurm cluster.

```shell
./tools/slurm_train.sh my_partition my_job configs/hmr/resnet50_hmr_pw3d.py work_dirs/hmr 8 --no-validate
```
You can check `slurm_train.sh` for full arguments and environment variables.
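As an example of such environment variables, OpenMMLab Slurm launchers commonly read resource settings like `GPUS_PER_NODE` and `CPUS_PER_TASK`. The names below are assumptions based on that convention, so confirm them against `slurm_train.sh` itself.

```shell
# Illustrative sketch: overriding resource settings via environment variables
# (GPUS_PER_NODE and CPUS_PER_TASK are assumed names; check slurm_train.sh for the ones it reads).
GPUS_PER_NODE=8 CPUS_PER_TASK=4 \
./tools/slurm_train.sh my_partition my_job configs/hmr/resnet50_hmr_pw3d.py work_dirs/hmr 8 --no-validate
```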