XR-1
[Project Page] [Paper] [GitHub]
This repository contains a representative sample of the XR-1 project's multi-modal dataset. The data is organized to support cross-embodiment training across humanoids, manipulators, and egocentric human vision.
The dataset follows a hierarchy based on Embodiment -> Task -> Format:
Standard robot data (like TienKung or UR5) is organized following the LeRobot convention:
```
XR-1-Dataset-Sample/
└── DUAL_ARM_TIEN_KUNG2/            # Robot Embodiment
    └── Press_Green_Button/         # Task Name
        └── lerobot/                # Data in LeRobot format
            ├── metadata.json
            ├── episodes.jsonl
            ├── videos/
            └── data/
```
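To inspect a task folder in this sample without installing the LeRobot tooling, the JSON/JSONL files shown above can be read with the Python standard library alone. The sketch below is illustrative only: the task path and the exact fields inside `metadata.json` and `episodes.jsonl` are assumptions that should be checked against your copy of the sample.

```python
# Minimal sketch (stdlib only) for browsing one task of the sample.
import json
from pathlib import Path

task_dir = Path("XR-1-Dataset-Sample/DUAL_ARM_TIEN_KUNG2/Press_Green_Button/lerobot")

# Top-level metadata for the task (e.g., fps, feature schema, episode count).
with open(task_dir / "metadata.json") as f:
    metadata = json.load(f)
print("metadata keys:", sorted(metadata))

# episodes.jsonl stores one JSON object per episode, one per line.
episodes = []
with open(task_dir / "episodes.jsonl") as f:
    for line in f:
        if line.strip():
            episodes.append(json.loads(line))
print(f"{len(episodes)} episodes in this task")

# Video streams and tabular frame data live under videos/ and data/.
mp4s = sorted(p.name for p in (task_dir / "videos").rglob("*.mp4"))
print("sample video files:", mp4s[:5])
```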
For egocentric data (e.g., the Ego4D subsets used for Stage 1 UVMC pre-training), the structure follows the source's native recording format:
```
XR-1-Dataset-Sample/
└── Ego4D/                          # Human ego-centric source
    ├── files.json                  # Unified annotation/mapping file
    └── files/                      # Raw data storage
        └── [video_id].mp4          # Egocentric video clips
```
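The egocentric split can be indexed the same way. The sketch below assumes `files.json` is a list of records, each carrying a hypothetical `video_id` field that names a clip under `files/`; adjust the field names to match the actual annotation file.

```python
# Minimal sketch for mapping annotation records to ego-centric clips on disk.
import json
from pathlib import Path

ego_dir = Path("XR-1-Dataset-Sample/Ego4D")

# files.json is described above as the unified annotation/mapping file;
# treating it as a list of records with a "video_id" field is an assumption.
with open(ego_dir / "files.json") as f:
    records = json.load(f)

clips = {}
for rec in records:
    video_id = rec["video_id"]  # hypothetical field name -- check files.json
    clip_path = ego_dir / "files" / f"{video_id}.mp4"
    if clip_path.exists():
        clips[video_id] = clip_path

print(f"indexed {len(clips)} egocentric clips")
```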
This sample is intended for use with the XR-1 GitHub Repository.
If you use this dataset in your research, please cite:

```bibtex
@article{fan2025xr,
  title={XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations},
  author={Fan, Shichao and others},
  journal={arXiv preprint arXiv:2411.02776},
  year={2025}
}
```
This dataset is released under the MIT License.
Contact: For questions, please open an issue on our GitHub.