DA3-BENCH: Depth Anything 3 Evaluation Benchmark
This repository contains processed benchmark datasets for evaluating Depth Anything 3 depth estimation and visual geometry models. The datasets are provided in a convenient, ready-to-use format for research and evaluation purposes.
About Depth Anything 3
Depth Anything 3 (DA3) is a state-of-the-art model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses. It achieves superior performance in:
- Monocular Depth Estimation: Outperforms Depth Anything 2 with better detail and generalization
- Camera Pose Estimation: 35.7% improvement over prior SOTA
- Multi-View Geometry: 23.6% improvement in geometric accuracy
- 3D Gaussian Splatting: Superior rendering quality from arbitrary visual inputs
For more details, visit the official project page.
Included Datasets
The benchmark includes the following datasets, each compressed as a separate zip file:
| Dataset | Size | Description |
|---|---|---|
| 7scenes.zip | 3.4 GB | 7-Scenes indoor localization dataset |
| dtu.zip | 8.3 GB | DTU Multi-View Stereo dataset |
| dtu64.zip | 1.7 GB | DTU 64-view subset |
| eth3d.zip | 15 GB | ETH3D high-resolution multi-view dataset |
| hiroom.zip | 683 MB | High-resolution indoor room scenes |
| scannetpp.zip | 11 GB | ScanNet++ indoor scene understanding dataset |
Total Size: ~40 GB
Usage
Each dataset has been preprocessed and structured for convenient use in depth estimation evaluation pipelines. Simply download and extract the dataset(s) you need.
```bash
# Download one dataset archive from Hugging Face (example)
huggingface-cli download depth-anything/DA3-BENCH 7scenes.zip --repo-type dataset

# Extract the dataset
unzip 7scenes.zip
```
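If you prefer working from Python, the same download-and-extract flow can be mirrored with the `huggingface_hub` library. This is a minimal sketch, assuming the archive names listed in the table above; adjust the filename and output directory as needed:

```python
import zipfile
from huggingface_hub import hf_hub_download

# Download one archive from the dataset repo (cached locally by huggingface_hub)
archive_path = hf_hub_download(
    repo_id="depth-anything/DA3-BENCH",
    filename="7scenes.zip",
    repo_type="dataset",
)

# Extract it next to your working directory
with zipfile.ZipFile(archive_path) as zf:
    zf.extractall("7scenes")
```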
License and Citation
IMPORTANT: These datasets are provided in a processed format for convenience. Users must strictly follow the original usage licenses of each respective dataset:
- 7-Scenes: Microsoft Research License
- DTU MVS: DTU Dataset License
- ETH3D: ETH3D Dataset Terms
- ScanNet++: ScanNet Dataset License
Citing Depth Anything 3
If you use this benchmark, please cite the Depth Anything 3 paper:
```bibtex
@article{depthanything3,
  title={Depth Anything 3: Recovering the Visual Space from Any Views},
  author={Haotong Lin and Sili Chen and Jun Hao Liew and Donny Y. Chen and Zhenyu Li and Guang Shi and Jiashi Feng and Bingyi Kang},
  journal={arXiv preprint},
  year={2025}
}
```
Citing Original Datasets
Additionally, please cite the respective original dataset papers for each benchmark you use. Refer to the original dataset websites for proper citation information.
Contact
For questions about:
- Processed datasets: Please open an issue in this repository
- Depth Anything 3 model: Visit the official project page or GitHub repository
Acknowledgements
We thank the authors of the original datasets for making their data publicly available for research purposes, and the Depth Anything team for developing this state-of-the-art depth estimation framework.
Disclaimer: This is a processed collection for evaluation purposes only. All rights to the original data belong to the respective dataset creators. Users must obtain proper permissions and follow all applicable licenses when using these datasets.