
MMVID
Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning (CVPR 2022)

Generated Videos on Multimodal VoxCeleb

This repository will contain the training and testing code, models, and data for MMVID (coming soon).

Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning
Ligong Han, Jian Ren, Hsin-Ying Lee, Francesco Barbieri, Kyle Olszewski, Shervin Minaee, Dimitris Metaxas, Sergey Tulyakov
Snap Inc., Rutgers University
CVPR 2022

Citation

If our code, data, or models help your work, please cite our paper:

@article{han2022show,
  title={Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning},
  author={Han, Ligong and Ren, Jian and Lee, Hsin-Ying and Barbieri, Francesco and Olszewski, Kyle and Minaee, Shervin and Metaxas, Dimitris and Tulyakov, Sergey},
  journal={arXiv preprint arXiv:2203.02573},
  year={2022}
}
