# ZereData Bin Picking Dataset v1.0
Synthetic training data for robotic bin picking — RGB, depth, instance masks, 6D pose, 2D bounding boxes, and per-instance visibility, in BOP/COCO/YOLO formats.
## Overview
Generated via physically-based ray tracing in Blender Cycles, this dataset delivers dense, photorealistic scenes of cluttered bins at warehouse scale. Each scene includes RGB, 32-bit depth, instance segmentation, camera intrinsics/extrinsics, and per-instance 6D pose with visibility ratios.
The dataset's value is simple: synthetic renders give perfect ground truth annotations impossible to obtain from real cameras, at a scale and cost real-world collection cannot match. Use it to train 6D pose estimators, bin-picking grasp predictors, and warehouse perception systems — then validate sim-to-real transfer on smaller real-world test sets.
## Dataset Statistics
| Metric | Value |
|---|---|
| Total scenes | 10,000 |
| Train split | 8,000 |
| Val split | 2,000 |
| Resolution | 1280×720 |
| Object instances | 295,292 |
| Object categories | 4 |
| Modalities | 6 (RGB, depth, mask, pose, bboxes, visibility) |
| Total size on disk | 14.8 GB |
## Modalities
- RGB — 1280×720 PNG per scene. The primary input for detection, segmentation, and pose models.
- Depth — 32-bit EXR in metres. Train depth-conditioned pose models or use as a second-channel input.
- Instance mask — colour-coded PNG per scene, one colour per object instance. Drives instance segmentation and occlusion reasoning.
- 6D pose — per-instance rotation and translation in camera frame (BOP `cam_R_m2c`, `cam_t_m2c`). Supervises pose regression heads.
- 2D bounding boxes — derived from masks, included in COCO and YOLO formats.
- Visibility ratio — BOP `visib_fract` per instance; lets you weight the training loss by occlusion severity.
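As a minimal sketch of the visibility-weighting idea above: per-instance loss weights derived from `visib_fract` in `scene_gt_info.json`. The function name and the minimum-visibility cutoff are illustrative choices, not part of the dataset tooling.

```python
def visibility_weights(scene_gt_info, min_visib=0.1):
    """Map each annotated instance to a training-loss weight.

    Instances below min_visib are masked out (weight 0); the rest
    are weighted linearly by their visible fraction. scene_gt_info
    is the parsed scene_gt_info.json dict: image id -> instance list.
    """
    weights = {}
    for im_id, insts in scene_gt_info.items():
        weights[im_id] = [
            inst['visib_fract'] if inst['visib_fract'] >= min_visib else 0.0
            for inst in insts
        ]
    return weights

# Example with a hand-written scene_gt_info payload:
info = {"0": [{"visib_fract": 0.92}, {"visib_fract": 0.05}]}
print(visibility_weights(info))  # {'0': [0.92, 0.0]}
```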
## Formats

### BOP (primary)
Canonical BOP directory layout under `data/train/` and `data/val/`. Each scene folder contains `scene_camera.json` (`cam_K`, `depth_scale`), `scene_gt.json` (per-object `cam_R_m2c`, `cam_t_m2c`, `obj_id`), and `scene_gt_info.json` (`bbox_obj`, `bbox_visib`, `visib_fract`). Load with the BOP toolkit. Object IDs are ZereData-specific, not BOP canonical — see Limitations.
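If you want to read the per-scene pose files without pulling in the BOP toolkit, a minimal sketch is below. The helper name is ours; the field names and the millimetre translation unit follow the BOP convention described above.

```python
import json

import numpy as np


def load_scene_poses(scene_dir):
    """Parse a scene's scene_gt.json into per-image (obj_id, R, t) lists.

    R is the 3x3 cam_R_m2c rotation (row-major in the JSON),
    t is the 3x1 cam_t_m2c translation in millimetres.
    """
    gt = json.loads((scene_dir / 'scene_gt.json').read_text())
    poses = {}
    for im_id, insts in gt.items():
        poses[im_id] = [
            (inst['obj_id'],
             np.array(inst['cam_R_m2c']).reshape(3, 3),
             np.array(inst['cam_t_m2c']).reshape(3, 1))
            for inst in insts
        ]
    return poses
```

Pass it a `pathlib.Path` to one extracted scene folder, e.g. `load_scene_poses(Path('zd_bp/data/train/000000'))`.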
### COCO
Merged `annotations/coco_train.json` and `annotations/coco_val.json` with images, annotations (bboxes + masks), and categories. Loads cleanly with pycocotools:

```python
from pycocotools.coco import COCO

coco = COCO('annotations/coco_train.json')
```
### YOLO
Per-image `.txt` label files under `annotations/yolo_train/` and `annotations/yolo_val/`, with normalized `class_id cx cy w h` entries. Class IDs are consistent across both splits; see `annotations/yolo_classes.txt` and `annotations/yolo_data.yaml`.
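A quick sketch of denormalizing one label line back to pixel coordinates; the helper is illustrative, and the 1280×720 defaults match this dataset's resolution.

```python
def yolo_to_pixels(line, img_w=1280, img_h=720):
    """Convert one normalized 'class_id cx cy w h' label line to
    (class_id, x_min, y_min, x_max, y_max) in pixel coordinates."""
    cls, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

print(yolo_to_pixels('2 0.5 0.5 0.25 0.5'))  # (2, 480.0, 180.0, 800.0, 540.0)
```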
## Data Format
This dataset is packaged as per-format zip archives, mirroring the bop-benchmark HF layout convention (one zip per logical split) adapted for multi-format shipping. Loose files — README, LICENSE, CITATION, metadata.json, preview images — remain at the repository root so the HF dataset page renders a preview.
| Archive | Contents | On-extract layout |
|---|---|---|
| `bin_picking_train_bop.zip` | BOP-format train split (rgb/depth/mask + `scene_camera.json` / `scene_gt.json` / `scene_gt_info.json` per scene) | `data/train/{000000..007999}/...` |
| `bin_picking_val_bop.zip` | BOP-format val split | `data/val/{000000..001999}/...` |
| `bin_picking_coco.zip` | `coco_train.json`, `coco_val.json` (merged, BOP obj IDs remapped to COCO categories) | `annotations/coco_*.json` |
| `bin_picking_yolo.zip` | YOLO labels per split + `yolo_classes.txt` + `yolo_data.yaml` | `annotations/yolo_{train,val}/*.txt`, `annotations/yolo_*.{txt,yaml}` |
| `bin_picking_native.zip` | Per-scene native annotations (full pre-export ZereData scene graph) | `annotations/scene_NNNN.json` |
| `bin_picking_models.zip` | 27 GLB object models | `models/*.glb` |
### Download and extract only what you need
```python
from huggingface_hub import hf_hub_download
import zipfile

REPO = 'zeredata/bin-picking-v1'

# BOP train split
p = hf_hub_download(repo_id=REPO, filename='bin_picking_train_bop.zip', repo_type='dataset')
with zipfile.ZipFile(p) as z:
    z.extractall('./zd_bp')  # rehydrates ./zd_bp/data/train/...
```
Or the whole dataset in one shot:

```bash
huggingface-cli download --repo-type dataset zeredata/bin-picking-v1 --local-dir ./zd_bp
cd ./zd_bp && for z in bin_picking_*.zip; do unzip -q "$z"; done
```
All zip extractions share the same root-relative layout, so unzipping all six archives into one directory rehydrates the canonical flat tree.
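For selective downloads from Python, `huggingface_hub.snapshot_download` accepts an `allow_patterns` filter. The `FORMAT_ZIPS` mapping and `patterns_for` helper below are our own sketch (the archive names come from the table above; the loose-file globs are an assumption about the repo root):

```python
FORMAT_ZIPS = {
    'bop_train': 'bin_picking_train_bop.zip',
    'bop_val': 'bin_picking_val_bop.zip',
    'coco': 'bin_picking_coco.zip',
    'yolo': 'bin_picking_yolo.zip',
    'native': 'bin_picking_native.zip',
    'models': 'bin_picking_models.zip',
}


def patterns_for(formats):
    """Build an allow_patterns list: the requested archives plus the
    small loose root files (README, metadata, etc.)."""
    return [FORMAT_ZIPS[f] for f in formats] + ['*.md', '*.json']


# Usage (assumes huggingface_hub is installed):
# from huggingface_hub import snapshot_download
# snapshot_download(repo_id='zeredata/bin-picking-v1', repo_type='dataset',
#                   local_dir='./zd_bp',
#                   allow_patterns=patterns_for(['bop_val', 'models']))
```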
## Loading the Dataset
These snippets assume you have already extracted the relevant zip(s) into a working directory (see Data Format above). Paths are relative to that root.
### PyTorch Dataset over BOP structure

```python
from pathlib import Path
import json

from PIL import Image
from torch.utils.data import Dataset


class BopBinPicking(Dataset):
    def __init__(self, root, split='train'):
        # root must contain data/<split>/... (extract bin_picking_<split>_bop.zip there first)
        self.scene_dirs = sorted((Path(root) / 'data' / split).iterdir())

    def __len__(self):
        return len(self.scene_dirs)

    def __getitem__(self, idx):
        sd = self.scene_dirs[idx]
        rgb = Image.open(sd / 'rgb' / '000000.png')
        gt = json.loads((sd / 'scene_gt.json').read_text())
        cam = json.loads((sd / 'scene_camera.json').read_text())
        return rgb, gt, cam
```
### COCO via pycocotools

```python
# After extracting bin_picking_coco.zip:
from pycocotools.coco import COCO

coco = COCO('annotations/coco_train.json')
img_ids = coco.getImgIds()
for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0])):
    print(ann['bbox'], ann['category_id'])
```
A `datasets.load_dataset()` loader is planned for v1.1.
## Intended Use
Training 6D pose estimation models, bin-picking grasp models, and warehouse robotics perception systems. Synthetic data for sim-to-real transfer research.
## Limitations and Known Issues
- Non-canonical BOP object IDs. This release uses ZereData-specific object IDs. It is BOP-format-compatible but not a drop-in replacement for evaluation against BOP test sets (LM-O, YCB-V, T-LESS). A BOP-dataset-compatible release with canonical CAD models is forthcoming.
- Warehouse-specific lighting. The three lighting profiles model warehouse conditions and may not transfer directly to outdoor, medical, or agricultural domains:
  - `bin_picking_overhead` — bright fluorescent overhead panels, typical of distribution-center shelving aisles.
  - `bin_picking_mixed` — mixed overhead + rim lighting with warmer colour temperature, mimicking older facilities with partial skylights.
  - `studio` — lower-energy three-point studio setup, producing darker scenes useful as a poor-lighting proxy.

  Each scene's `variety.lighting_profile` annotation tag records which profile was used.
- Procedural materials. Material variation uses procedural textures, not photoscanned assets. High-frequency surface detail may look synthetic under close inspection.
- Geometric occlusion only. No category-level occlusion modelling — occlusion is derived from geometry alone.
- Simulated camera intrinsics. The intrinsic matrix is synthetic, not drawn from real sensor calibration.
## Evaluation

Benchmark evaluation on LM-O is forthcoming; see the ZereData website (https://zeredata.com) for updates.
## Comparison to Related Datasets

HOPE, T-LESS, and YCB-Video are excellent real-world datasets, but their scale is limited and their object sets are fixed. This dataset is synthetic-only, scales without bound, and supports customer-specific object libraries. Treat the two as complementary: real data for evaluation, synthetic data for training.
## Citation

```bibtex
@dataset{zeredata_binpicking_2026,
  author    = {Umit Kavala},
  title     = {ZereData Bin Picking Dataset v1.0},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/zeredata/bin-picking-v1}
}
```
## License
Released under CC BY 4.0. Attribution required. Commercial use permitted.
## Contact and Links
- Website: https://zeredata.com
- Contact: engineering@zeredata.com