Objaverse-OA Dataset
Dataset Introduction
Humans intuitively perceive object shape and orientation from a single image, guided by strong priors about canonical poses. However, existing 3D generative models often produce misaligned results due to inconsistent training data, limiting their usability in downstream tasks. To address this gap, we introduce the task of orientation-aligned 3D object generation: producing 3D objects from single images with consistent orientations across categories.
To facilitate this, we construct Objaverse-OA, a dataset of 14,832 orientation-aligned 3D models spanning 1,008 categories. Leveraging Objaverse-OA, we fine-tune two representative 3D generative models based on multi-view diffusion and 3D variational autoencoder frameworks to produce aligned objects that generalize well to unseen objects across various categories.
The dataset is split into two chunks, and all 3D models are stored in GLB format. Each model is named with its UID from the original Objaverse-LVIS dataset, so its category label can be obtained from the Objaverse-LVIS annotations.
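Because each GLB file is named by its Objaverse-LVIS UID, recovering category labels amounts to inverting the LVIS annotation mapping. The snippet below is a minimal sketch of that lookup, assuming the objaverse Python package (pip install objaverse) and a local directory named objaverse_oa holding the extracted GLB files; adjust names and paths to your setup.

# Map downloaded GLB files back to their Objaverse-LVIS category labels.
from pathlib import Path

import objaverse

# load_lvis_annotations() returns a dict: {category_name: [uid, uid, ...]}
lvis_annotations = objaverse.load_lvis_annotations()

# Invert the mapping so a model's uid can be looked up directly.
uid_to_category = {
    uid: category
    for category, uids in lvis_annotations.items()
    for uid in uids
}

for glb_path in Path("objaverse_oa").rglob("*.glb"):
    uid = glb_path.stem  # files are named by their Objaverse-LVIS uid
    print(f"{glb_path.name}: {uid_to_category.get(uid, 'unknown')}")

The GLB files themselves can be opened with any glTF-capable library (for example, trimesh.load returns a Scene for a .glb file).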
Our Objaverse-OA dataset is released under the same license as the original Objaverse dataset. For details, please refer to the license declaration in the Objaverse dataset.
Citation
If you find this dataset useful, please cite our paper.
@misc{lu2025orientationmatters,
      title={Orientation Matters: Making 3D Generative Models Orientation-Aligned},
      author={Yichong Lu and Yuzhuo Tian and Zijin Jiang and Yikun Zhao and Yuanbo Yang and Hao Ouyang and Haoji Hu and Huimin Yu and Yujun Shen and Yiyi Liao},
      year={2025},
      eprint={2506.08640},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.08640},
}