---
license: apache-2.0
language:
- en
---
# [CVPR'25] DiffPortrait360: Consistent Portrait Diffusion for 360 View Synthesis

Paper: [arXiv:2503.15667](https://arxiv.org/abs/2503.15667)
Yuming Gu<sup>1,2</sup> · Phong Tran<sup>2</sup> · Yujian Zheng<sup>2</sup> · Hongyi Xu<sup>3</sup> · Heyuan Li<sup>4</sup> · Adilbek Karmanov<sup>2</sup> · Hao Li<sup>2,5</sup>

<sup>1</sup>University of Southern California · <sup>2</sup>MBZUAI · <sup>3</sup>ByteDance Inc. · <sup>4</sup>The Chinese University of Hong Kong, Shenzhen · <sup>5</sup>Pinscreen Inc.
## 📜 Requirements
* An NVIDIA GPU with CUDA support is required.
* We have tested on a single A6000 GPU.
* **Minimum**: 30 GB of GPU memory to generate a single NVS video of 32 frames (batch_size=1).
* **Recommended**: a GPU with 40 GB of memory; a quick way to check your device is sketched below.
* Operating system: Linux
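The following minimal sketch (our illustration, assuming a standard PyTorch installation; not part of the official codebase) checks whether the visible GPU meets the memory requirements above:

```python
# Illustrative environment check -- not an official script.
# Assumes PyTorch with CUDA support is installed.
import torch

MIN_GB = 30  # minimum memory stated above (single NVS video, 32 frames, batch_size=1)
REC_GB = 40  # recommended memory

assert torch.cuda.is_available(), "An NVIDIA GPU with CUDA support is required."

props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1024 ** 3
print(f"GPU: {props.name}, total memory: {total_gb:.1f} GB")

if total_gb < MIN_GB:
    print("Warning: below the 30 GB minimum; generation will likely run out of memory.")
elif total_gb < REC_GB:
    print("Note: 40 GB is recommended for extra headroom.")
```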
## 🧱 Download Pretrained Models
```bash
Diffportrait360
|----...
|----pretrained_weights
|    |----back_head-230000.th                      # back head generator
|    |----model_state-3400000.th                   # DiffPortrait360 main module
|    |----easy-khair-180-gpc0.8-trans10-025000.th
|----...
```
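The sketch below (our illustration; the file names are taken from the tree above, and the relative path `pretrained_weights/` is an assumption) verifies that the expected checkpoints are in place before running inference:

```python
# Illustrative check only -- not part of the official codebase.
from pathlib import Path

WEIGHTS_DIR = Path("pretrained_weights")  # assumed to sit at the repo root
EXPECTED = [
    "back_head-230000.th",                      # back head generator
    "model_state-3400000.th",                   # DiffPortrait360 main module
    "easy-khair-180-gpc0.8-trans10-025000.th",
]

missing = [name for name in EXPECTED if not (WEIGHTS_DIR / name).is_file()]
if missing:
    raise FileNotFoundError(f"Missing pretrained weights: {missing}")
print("All expected pretrained weights found.")
```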
## 🔗 BibTeX
If you find [DiffPortrait360](https://arxiv.org/abs/2503.15667) useful for your research and applications, please cite it using this BibTeX:
```BibTeX
@article{gu2025diffportrait360,
  title={DiffPortrait360: Consistent Portrait Diffusion for 360 View Synthesis},
  author={Gu, Yuming and Tran, Phong and Zheng, Yujian and Xu, Hongyi and Li, Heyuan and Karmanov, Adilbek and Li, Hao},
  journal={arXiv preprint arXiv:2503.15667},
  year={2025}
}
```
## License
Our code is distributed under the Apache-2.0 license.
## Acknowledgements
This work is supported by the Metaverse Center Grant from the MBZUAI Research Office. We appreciate the open-sourced research of [DiffPortrait3D](https://github.com/FreedomGu/DiffPortrait3D), [PanoHead](https://github.com/SizheAn/PanoHead), [SphereHead](https://lhyfst.github.io/spherehead/), and [ControlNet](https://github.com/lllyasviel/ControlNet). We thank [Egor Zakharov](https://egorzakharov.github.io/), [Zhenhui Lin](https://www.linkedin.com/in/zhenhui-lin-5b6510226/?originalSubdomain=ae), [Maksat Kengeskanov](https://www.linkedin.com/in/maksat-kengeskanov/), and Yiming Chen for early discussions, helpful suggestions, and feedback.