https://github.com/mkocabas/VIBE
1. Install
conda create -n VIBE-env python=3.7
git clone https://github.com/mkocabas/VIBE.git
Make "requirements_alter.txt"
tqdm==4.28.1
yacs==0.1.6
h5py==2.10.0
numpy==1.17.5
scipy==1.4.1
numba==0.47.0
smplx==0.1.26
gdown==3.6.4
PyYAML==5.3.1
joblib==0.14.1
pillow==7.1.0
trimesh==3.5.25
pyrender==0.1.36
progress==1.5
filterpy==1.4.5
matplotlib==3.1.3
tensorflow==1.15.4
torchvision==0.5.0
scikit-image==0.16.2
scikit-video==1.1.11
opencv-python==4.1.2.30
llvmlite==0.32.1
git+https://github.com/mattloper/chumpy.git
git+https://github.com/mkocabas/yolov3-pytorch.git
git+https://github.com/mkocabas/multi-person-tracker.git
Make and Run "install_conda.bat"
pip install mkl intel-openmp
pip install torch==1.4.0 torchvision==0.5.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install numpy==1.17.5
pip install git+https://github.com/giacaglia/pytube.git --upgrade
pip install -r requirements_alter.txt
Make and Run "prepare_data.bat"
copy data\vibe_data\sample_video.mp4 .
echo ### creating place for torch model ###
md %homepath%\.torch\models
copy data\vibe_data\yolov3.weights %homepath%\.torch\models
echo ### creating place for yolo config ###
md %homepath%\.torch\config
copy yolov3.cfg %homepath%\.torch\config
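The same staging steps can be sketched in Python if you prefer a cross-platform script over a .bat file (a sketch; the helper name prepare_data is mine, the paths and file names come from the batch script above):

```python
import os
import shutil

def prepare_data(repo_dir, home):
    """Mirror prepare_data.bat: copy the sample video to the repo root and
    stage the YOLOv3 weights/config under <home>/.torch.
    (Sketch only; function name is mine, paths are from the batch script.)"""
    vibe_data = os.path.join(repo_dir, "data", "vibe_data")
    models = os.path.join(home, ".torch", "models")
    config = os.path.join(home, ".torch", "config")
    os.makedirs(models, exist_ok=True)
    os.makedirs(config, exist_ok=True)
    for src, dst in [
        (os.path.join(vibe_data, "sample_video.mp4"), repo_dir),
        (os.path.join(vibe_data, "yolov3.weights"), models),
        (os.path.join(repo_dir, "yolov3.cfg"), config),
    ]:
        if os.path.exists(src):  # skip files that have not been downloaded yet
            shutil.copy(src, dst)
    return models, config
```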
2. Run
conda activate VIBE-env
cd to the cloned repository folder (in my case: E:\ExternalTools\VIBE-master\VIBE-master)
python demo_alter.py --vid_file sample_video.mp4 --output_folder output/ --display
(I just commented out the os.environ line so the top of demo_alter.py reads as follows)
import os
#os.environ['PYOPENGL_PLATFORM'] = 'egl'
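Instead of deleting the line outright, a platform guard keeps the same script usable on Linux too, where the EGL backend is what the headless renderer expects (a sketch; the guard is my addition, not part of the original demo):

```python
import os
import sys

# EGL is a headless-Linux rendering backend for PyOpenGL/pyrender; on
# Windows the default backend works, so only request EGL off-Windows.
# (This guard is my addition, not part of the original demo_alter.py.)
if sys.platform != "win32":
    os.environ["PYOPENGL_PLATFORM"] = "egl"
```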
enjoy!