---
license: mit
---

# 3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models

Min Wei, Chaohui Yu, Jingkai Zhou, and Fan Wang. 2025. 3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models. In Proceedings of the 33rd ACM International Conference on Multimedia (MM ’25), October 27–31, 2025, Dublin, Ireland. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3746027.3754754

[arXiv](https://arxiv.org/abs/2504.17414) | [Project Page](https://2y7c3.github.io/3DV-TON/) | [Models](https://huggingface.co/2y7c3/3DV-TON) | [HR-VVT Dataset](https://huggingface.co/datasets/2y7c3/HR-VVT)

## Installation

```bash
git clone https://github.com/2y7c3/3DV-TON.git
cd 3DV-TON
pip install -r requirements.txt

cd preprocess/model/DensePose/detectron2/projects/DensePose
pip install -e .

# Install GVHMR (see https://github.com/zju3dv/GVHMR/blob/main/docs/INSTALL.md),
# then replace GVHMR/hmr4d/utils/vis/renderer.py with our preprocess/renderer.py
```

### Weights

Download [Stable Diffusion](https://huggingface.co/lambdalabs/sd-image-variations-diffusers), the [motion module](https://huggingface.co/guoyww/animatediff/blob/main/v3_sd15_mm.ckpt), the [VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse), and our [3DV-TON models](https://huggingface.co/2y7c3/3DV-TON) into `./ckpts`.

Download the [cloth masker](https://huggingface.co/2y7c3/3DV-TON) into `./preprocess/ckpts`. You can then use it to generate agnostic mask videos for improved try-on results.

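Once the downloads finish, a quick sanity check can confirm everything landed under `./ckpts`. This is a minimal sketch: the entry names below are assumptions based on the linked repositories, not a layout documented by 3DV-TON — adjust them to match your actual directory.

```python
from pathlib import Path

# Hypothetical checkpoint layout. These names are assumptions drawn from the
# linked weight repositories, not an official 3DV-TON directory spec.
EXPECTED_WEIGHTS = [
    "sd-image-variations-diffusers",  # Stable Diffusion image-variations
    "sd-vae-ft-mse",                  # VAE
    "v3_sd15_mm.ckpt",                # AnimateDiff motion module
]

def missing_weights(ckpt_dir="./ckpts"):
    """Return the expected entries not yet present under ckpt_dir."""
    root = Path(ckpt_dir)
    return [name for name in EXPECTED_WEIGHTS if not (root / name).exists()]
```

Calling `missing_weights()` before inference gives an early, readable error list instead of a mid-run file-not-found.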
## Inference

We provide three demo examples in `./demos/`; run the following command to test them.

```bash
python infer.py --config ./configs/inference/demo_test.yaml
```

Or prepare your own example by following the steps below.

```bash
# 1. Generate the agnostic mask (type: 'upper', 'lower', 'overall')
cd preprocess
python seg_mask.py --input demos/videos/video.mp4 --output demos/ --type overall

# 2. Use GVHMR to generate the SMPL video

# 3. Use an image try-on model to generate the try-on image (e.g. CatVTON)

# 4. Generate the textured 3D mesh

# 5. Modify demo_test.yaml, then run
python infer.py --config ./configs/inference/demo_test.yaml
```
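When you have several clips, step 1 can be scripted rather than typed per video. A minimal sketch: only the `seg_mask.py` flags shown above come from this README; the helper function and its defaults are our own illustration (steps 2–4 still have to run in between before `infer.py`).

```python
import shlex

def preprocessing_commands(videos, mask_type="overall", out_dir="demos/"):
    """Build the agnostic-mask command line (step 1) for each input video."""
    return [
        ["python", "seg_mask.py", "--input", v, "--output", out_dir, "--type", mask_type]
        for v in videos
    ]

# Example: print the commands for two hypothetical clips.
for cmd in preprocessing_commands(["demos/videos/a.mp4", "demos/videos/b.mp4"]):
    print(shlex.join(cmd))
# prints: python seg_mask.py --input demos/videos/a.mp4 --output demos/ --type overall
#         python seg_mask.py --input demos/videos/b.mp4 --output demos/ --type overall
```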

## BibTeX

```bibtex
@article{wei20253dv,
  title={3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models},
  author={Wei, Min and Yu, Chaohui and Zhou, Jingkai and Wang, Fan},
  journal={arXiv preprint arXiv:2504.17414},
  year={2025}
}
```