
Commit

2. visualization code
anitacen committed Oct 21, 2024
1 parent 0eb8ba1 commit 742e81c
Showing 3 changed files with 84 additions and 3 deletions.
24 changes: 21 additions & 3 deletions README.md
@@ -4,6 +4,9 @@ This repo is the official implementation of "Generating Human Motion in 3D Scene

![pipeline](doc/pipeline.png)

## News
[2024/10/21] We release the visualization code.
[2024/06/09] We release the test & evaluation code.

## Installation
```bash
@@ -46,14 +49,14 @@ ln -s /path/to/humanise data/HUMANISE

### SMPLX models
1. Download SMPLX models from [link](https://smpl-x.is.tue.mpg.de/).
2. Put the smplx folder under the ```data/smpl_models``` folder:
```bash
mkdir data/smpl_models
mv smplx data/smpl_models/
```
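
Optionally, you can sanity-check the model files with the [smplx](https://pypi.org/project/smplx/) package. This is a minimal sketch, not part of the repo, and it assumes the neutral SMPL-X ```.npz``` model sits under ```data/smpl_models/smplx```:
```python
import smplx

# Hypothetical sanity check (not part of this repo): smplx.create expects the
# parent folder that contains the "smplx" subfolder.
model = smplx.create('data/smpl_models', model_type='smplx', gender='neutral')
output = model()  # forward pass with default (zero) pose and shape parameters
print(output.vertices.shape)  # expected: torch.Size([1, 10475, 3])
```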

### Pretrained models
1. Weights are shared at this [link](https://drive.google.com/file/d/1tftqacTRZoLfpZiNqGyDQPJlU7KnrQt6/view?usp=sharing). Please download and unzip it, then put the most_release folder under the ```out``` folder:
```bash
mv most_release out/release
```
@@ -76,12 +79,27 @@ We use the Azure OpenAI service; please refer to this [link](https://learn.microsoft
```bash
python tools/generate_results.py -c configs/test/generate.yaml
```
The results will be saved in ```out/test```.
#### Evaluation
```bash
python tools/evaluate_results.py -c configs/test/evaluate.yaml
```

#### Visualization
Our generated results are shared at this [link](https://drive.google.com/file/d/1zrpzJltY9bseKV3BGuZZpmmtQRT7rQye/view?usp=sharing). You can use your own generated results, or download ours and unzip them into the ```out/test``` folder.
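
To see what one result entry contains before visualizing it, here is a short inspection sketch (not part of the repo); the keys follow what ```tools/visualizae_results.py``` below reads:
```python
import pickle

# Peek at one generated result; path and index follow configs/test/visualize.yaml.
with open('out/test/generate_most_twostage_motion_all/sample.pkl', 'rb') as f:
    all_motion_data = pickle.load(f)

sample = all_motion_data[1585]               # same index as vis_id in visualize.yaml
print(sample['meta']['utterance'])           # input language instruction
print(sample['scene_id'])                    # ScanNet scene id
print(sample['pred_params']['trans'].shape)  # [K, T, 3]: K motion samples, T frames
```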

We use the [wis3d](https://pypi.org/project/wis3d/) library to visualize the results.
To prepare the visualization data, run:
```bash
python tools/visualizae_results.py -c configs/test/visualize.yaml
```
Then, start the viewer in a terminal:
```bash
wis3d --vis_dir out/vis3d --host ${HOST} --port ${PORT}
```
You can then browse the results in your browser at ```${HOST}:${PORT}```.
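
For reference, the export pattern used by ```tools/visualizae_results.py``` (shown in full below) boils down to one wis3d scene per motion frame, which the web viewer plays back as a timeline. A minimal standalone sketch, assuming the ```Wis3D``` class from the wis3d package and hypothetical per-frame meshes:
```python
import trimesh
from wis3d import Wis3D

# One wis3d "scene" per frame; the web UI exposes the scenes as a timeline.
vis3d = Wis3D('out/vis3d', 'demo_sequence')
scene_mesh = trimesh.load('path/to/scene.ply')          # hypothetical static scene mesh
for t, body_mesh in enumerate(per_frame_body_meshes):   # hypothetical list of trimesh.Trimesh
    vis3d.set_scene_id(t)                               # frame index -> scene id
    vis3d.add_mesh(body_mesh, name='pred_body')
    vis3d.add_mesh(scene_mesh, name='scene')
```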


# Citation

5 changes: 5 additions & 0 deletions configs/test/visualize.yaml
@@ -0,0 +1,5 @@
vis_id: 1585      # which sample in sample.pkl to visualize
k_id: 0           # which of the K generated motions of that sample to show
device: 0         # CUDA device id
scannet_root: data/ScanNet   # root folder of the ScanNet scans
save_path: out/test/generate_most_twostage_motion_all/sample.pkl   # generated results
58 changes: 58 additions & 0 deletions tools/visualizae_results.py
@@ -0,0 +1,58 @@
import os
import argparse
import pickle
import trimesh
import torch
import numpy as np

from lib.config import make_cfg
from lib.utils import logger
from lib.utils.vis3d_utils import make_vis3d
from lib.utils.smplx_utils import make_smplx

def main():
    # cfg, device, smplx_model, and smplx_face are initialized in __main__ below
    all_motion_data = pickle.load(open(cfg.save_path, 'rb'))
    vis3d = make_vis3d(None, f'vis_output_release_{cfg.vis_id}', 'out/vis3d')
    sample = all_motion_data[cfg.vis_id]

    # print input text
    utterance = sample['meta']['utterance']
    logger.info(f'input text: {utterance}')

    # load scene
    scene_id = sample['scene_id']
    scene_path = os.path.join(cfg.scannet_root, f'scans/{scene_id}/{scene_id}_vh_clean_2.ply')
    scene_mesh = trimesh.load(scene_path)

    # load predicted motion: K samples of T frames each
    params_ = sample['pred_params']
    K = params_['trans'].size(0)
    T = params_['trans'].size(1)
    smplx_params = {
        'betas': params_['betas'][None],
        'global_orient': params_['orient'].reshape(K * T, 3),  # [KT, 3]
        'transl': params_['trans'].reshape(K * T, 3) + params_['t_to_scannet'],  # shift into the ScanNet scene frame
        'body_pose': params_['pose_body'].reshape(K * T, -1)}
    smplx_params = {k: v.to(device) for k, v in smplx_params.items()}
    smplx_output = smplx_model(**smplx_params)
    pred_k_motions = smplx_output.vertices.reshape(K, T, -1, 3)  # (K, T, V, 3)

    # visualize: one wis3d scene per frame, showing the sample selected by k_id
    for t in range(T):
        vis3d.set_scene_id(t)
        body_mesh = trimesh.Trimesh(vertices=pred_k_motions[cfg.k_id, t].detach().cpu().numpy(), faces=smplx_face)
        vis3d.add_mesh(body_mesh, name='pred_body')
        vis3d.add_mesh(scene_mesh, name='scene')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--cfg_file", "-c", default='configs/test/generate.yaml')
    parser.add_argument("--is_test", action="store_true", default=False)
    parser.add_argument("opts", default=None, nargs=argparse.REMAINDER)
    args = parser.parse_args()
    cfg = make_cfg(args)

    os.environ['CUDA_VISIBLE_DEVICES'] = str(cfg.device)
    device = torch.device('cuda')
    smplx_model = make_smplx('humanise').to(device)
    smplx_face = torch.from_numpy(smplx_model.bm.faces.astype(np.int64))

    main()
