Dataset Card for PALMS Indoor Localization Dataset
The PALMS (Plane-based Accessible Indoor Localization Using Mobile Smartphones) dataset is a comprehensive indoor localization dataset collected at the University of California, Santa Cruz (UCSC) campus. The dataset supports both PALMS and PALMS+ algorithms for previous-visit-free indoor localization using commodity mobile devices. It contains RGB images, depth maps, camera poses, floor plans, and ground truth localization data from 80 recording sessions across 4 buildings.
Dataset Details
Dataset Description
The PALMS dataset enables research in indoor visual localization, depth estimation, and floor plan matching. It supports both LiDAR-based (PALMS) and image-based (PALMS+) localization approaches, making it suitable for evaluating various indoor localization algorithms.
Curated by: Yunqian Cheng, Benjamin Princen, Roberto Manduchi (University of California, Santa Cruz)
Funded by: National Eye Institute of the National Institutes of Health under award numbers R01EY029260-01 (PALMS) and R01EY036360 (PALMS+)
Shared by: University of California, Santa Cruz
Language(s) (NLP): English
License: CC-BY-NC-4.0 (Non-Commercial Use Only)
Dataset Sources
Repository: GitHub Repository
Paper:
- PALMS: eScholarship (IPIN 2024)
- PALMS+: arXiv:2511.09724 (WACV 2026)
Demo: See the GitHub repository for example visualizations and usage scripts
Uses
Direct Use
This dataset is intended for research purposes in:
- Indoor Visual Localization: Training and evaluating algorithms for indoor localization using floor plans
- Depth Estimation: Benchmarking monocular depth estimation models (includes depth maps captured using ARKit LiDAR)
- Point Cloud Reconstruction: Research on 3D point cloud reconstruction from RGB images
- Floor Plan Matching: Developing and testing algorithms that match visual observations to architectural floor plans
- Accessibility Research: Supporting research on accessible indoor navigation systems, particularly for users with visual impairments
The dataset supports both single-shot localization and sequential tracking with particle filters.
Out-of-Scope Use
- Commercial applications: This dataset is licensed for non-commercial use only (CC-BY-NC-4.0)
- Surveillance or tracking: The dataset should not be used for surveillance, tracking individuals, or any privacy-invasive applications
- Real-time navigation systems: While the dataset can inform such systems, it is not intended for direct deployment in production navigation systems without additional validation
- General-purpose computer vision: The dataset is specifically designed for indoor localization tasks and may not be suitable for general computer vision applications
Dataset Structure
The dataset is organized into several main components:
1. main_dataset/
Contains 80 complete recording sessions organized by building:
- BE (Baskin Engineering): 19 sessions
- E2 (Engineering Building 2): 24 sessions
- PS (Physical Sciences Building): 18 sessions
- SVC (Silicon Valley Campus): 19 sessions
Each session directory contains:
- images/: RGB images (.png) with AnyLabeling JSON annotations (semantic labels that include transparent surfaces and humans)
- poses/: Camera pose files (4×4 transformation matrices) for each frame
- intrinsics/: Camera intrinsic matrices (3×3) for each frame
- depths/: Depth data collected using ARKit LiDAR in JSON format
- dp_depths/: Pre-computed Depth Pro depth maps (.npy files)
- confidences/: Confidence maps for ARKit depth estimation (.png)
- detectedPlanes.json: ARKit detected plane information with timestamps and alignment data
- pano.png: Panoramic image of the scene, collected using an iPhone and aligned to the ARKit session reference frame
- label.txt: Ground truth position (x, y coordinates and rotation to map)
- metadata.json: Session metadata
- timeStamps.json: Timestamps for each frame
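The exact per-frame file naming and the on-disk layout of the pose and intrinsics files are not spelled out in this card; the sketch below is only an illustration of how a session fits together, assuming plain-text matrices named after each image frame. The official loaders live in utils/file_io in the GitHub repository.
```python
# Minimal sketch of walking one main_dataset recording session.
# Assumptions (not confirmed by this card): pose and intrinsics files are
# plain-text matrices named after the corresponding image frame.
import json
from pathlib import Path

import numpy as np


def load_session(session_dir):
    """Collect per-frame data for one recording session."""
    session_dir = Path(session_dir)

    # Ground truth position (x, y) and rotation relative to the floor plan.
    label = (session_dir / "label.txt").read_text().split()

    # Session metadata and per-frame timestamps.
    metadata = json.loads((session_dir / "metadata.json").read_text())
    timestamps = json.loads((session_dir / "timeStamps.json").read_text())

    frames = []
    for image_path in sorted((session_dir / "images").glob("*.png")):
        stem = image_path.stem
        frame = {"image": image_path}

        # 4x4 camera pose and 3x3 intrinsics (text layout assumed).
        pose_path = session_dir / "poses" / f"{stem}.txt"
        if pose_path.exists():
            frame["pose"] = np.loadtxt(pose_path).reshape(4, 4)
        intr_path = session_dir / "intrinsics" / f"{stem}.txt"
        if intr_path.exists():
            frame["intrinsics"] = np.loadtxt(intr_path).reshape(3, 3)

        # Pre-computed Depth Pro depth map for this frame, if present.
        dp_path = session_dir / "dp_depths" / f"{stem}.npy"
        if dp_path.exists():
            frame["dp_depth"] = np.load(dp_path)

        frames.append(frame)

    return {"label": label, "metadata": metadata,
            "timestamps": timestamps, "frames": frames}
```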
2. pano_samples/
Subset of sessions with panorama-specific data for quick testing and evaluation:
- 5 sample images per session with corresponding depth maps
- Camera poses and intrinsics
- Panorama metadata (FOV, interval, starting angle)
- Ground truth positions
3. maps/
CSV files containing floor plan geometry for each building:
- BE.csv, E2.csv, PS.csv, SVC.csv: Building floor plans
- Format: Each line contains 4 coordinates (x1, y1, x2, y2) representing wall segments/edges
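As a rough illustration, the sketch below reads one of these floor plan files, assuming a headerless CSV with four comma-separated values per line (x1, y1, x2, y2), and draws the wall segments.
```python
# Minimal sketch for reading a floor plan file, assuming each headerless
# line holds one wall segment as four comma-separated values: x1, y1, x2, y2.
import csv

import matplotlib.pyplot as plt


def load_floor_plan(csv_path):
    """Return wall segments as a list of ((x1, y1), (x2, y2)) tuples."""
    segments = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 4:
                continue  # skip blank or malformed lines
            x1, y1, x2, y2 = map(float, row[:4])
            segments.append(((x1, y1), (x2, y2)))
    return segments


if __name__ == "__main__":
    # Example: draw the BE floor plan as line segments.
    for (x1, y1), (x2, y2) in load_floor_plan("maps/BE.csv"):
        plt.plot([x1, x2], [y1, y2], color="black", linewidth=0.8)
    plt.axis("equal")
    plt.show()
```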
4. trajectories/
IMU-based trajectory data organized by building and trajectory ID:
Tracking JSON files: Contain odometry trajectory data in JSON format from either ARKit VIO or RoNIN (IMU-based tracking). Each JSON file contains:
- starting_vector: Starting position [x, y] in floor plan coordinates
- ARKit_raw_2D: Raw 2D trajectory data, estimated using ARKit VIO, as an array of [x, y] coordinates over time
- RoNIN_raw_2D: Raw 2D trajectory data, estimated using the RoNIN model, as an array of [x, y] coordinates over time
- ARKit_PF: Particle filter processed trajectory as an array of [x, y] coordinates
- Files are organized by building and trajectory ID (e.g., BE/trajectory_001/tracking_data.json)
tracking_obs_pairs.csv: Mapping file that pairs tracking trajectories with observation sessions. Contains the following columns:
- tracking_data_path: Path to the tracking JSON file
- obs_data_path: Path to the corresponding observation session directory
- starting_idx: Index in the tracking trace closest to the observation point (used to align the tracking sequence with the observation)
- theta: Rotation angle (in radians) applied to align the raw tracking trace with the particle filter reference frame
- This file enables sequential localization by mapping odometry trajectories to panoramic observation sessions
Usage: Used for sequential localization with particle filter tracking in test_pp_seq.py. The pairing file allows the system to initialize particle filters at observation points and track user movement using IMU-based odometry between observations; a loading sketch is shown below. For details on the pairing process, see utils/prep_trajectories.py for tools to match tracking sequences with observation sessions.
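The sketch below illustrates how the pairing file and tracking JSON fields described above might be combined; the key and column names follow this card, but the path handling and rotation convention are assumptions (utils/prep_trajectories.py in the GitHub repository is the authoritative tooling).
```python
# Minimal sketch for combining tracking traces with observation sessions.
# Key and column names follow the dataset card; path handling and the
# rotation convention are assumptions.
import json

import numpy as np
import pandas as pd


def rotate_2d(points, theta):
    """Rotate an (N, 2) array of [x, y] points by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T


# Pairs of tracking traces and panoramic observation sessions.
pairs = pd.read_csv("trajectories/tracking_obs_pairs.csv")

for _, row in pairs.iterrows():
    # Paths are assumed to be relative to the dataset root.
    with open(row["tracking_data_path"]) as f:
        track = json.load(f)

    start = np.asarray(track["starting_vector"])  # [x, y] in floor plan coords
    raw = np.asarray(track["RoNIN_raw_2D"])       # raw IMU-based trace
    pf = np.asarray(track["ARKit_PF"])            # particle-filter trace

    # Align the raw trace with the particle filter reference frame.
    aligned = rotate_2d(raw, float(row["theta"]))

    # Index along the trace closest to the observation point.
    obs_idx = int(row["starting_idx"])
    print(row["obs_data_path"], start, aligned[obs_idx], pf.shape)
```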
Quick Start
To get a quick sense of what the dataset contains, download the example_data.zip file.
Data loading tools are provided in the GitHub repository under utils/file_io.
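As an illustration, example_data.zip can be fetched with huggingface_hub and unpacked locally; the repository id below is a placeholder, not a confirmed identifier.
```python
# Minimal sketch for fetching and unpacking the quick-start archive with
# huggingface_hub. The repo_id below is a placeholder, not a confirmed
# identifier for this dataset.
import zipfile

from huggingface_hub import hf_hub_download

archive = hf_hub_download(
    repo_id="<this-dataset-repo-id>",  # placeholder: replace with the real id
    filename="example_data.zip",
    repo_type="dataset",
)

with zipfile.ZipFile(archive) as zf:
    zf.extractall("example_data")
    print(zf.namelist()[:10])  # peek at the first few entries
```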
Dataset Creation
Curation Rationale
The dataset was created to support research on accessible indoor localization systems that do not require prior environmental fingerprinting or external infrastructure. The PALMS system was designed to enhance accessibility for all users, including those with visual impairments, by enabling localization using publicly available floor plans and commodity mobile devices.
Source Data
Data Collection and Processing
Data Collection:
- Data was collected using iPhone 14 Pro devices with ARKit capabilities in UCSC campus buildings
- RGB images were captured during 360-degree rotational scans at various indoor locations
- ARKit LiDAR depth data was collected simultaneously with RGB images
- Camera poses and intrinsics were recorded for each frame using ARKit
- Ground truth positions were manually annotated with the assistance of ARKit-detected vertical plane data
Data Processing:
- Depth maps were pre-computed using Depth Pro monocular depth estimation model
- Semantic annotations were created using AnyLabeling. We labeled classes such as transparent surfaces, because monocular depth estimation does not work well with them, and we labeled humans in order to apply de-identification.
- Panoramic images were captured separately using an iPhone 14 Pro at the labeled positions, using the built-in panorama tool in the Camera app. We then manually undistorted and aligned the panoramas to the images captured in the corresponding scanning sessions.
- Floor plans were digitized from building architectural drawings using the SIM tool. If you are interested in using this tool, please contact Yunqian Cheng at [email protected] or Professor Roberto Manduchi at [email protected].
Tools and Libraries:
- ARKit (iOS) for LiDAR data collection
- Depth Pro model for monocular depth estimation
- AnyLabeling for semantic annotation
- Custom Python scripts for data processing and organization
Who are the source data producers?
The dataset was collected and curated by:
- Yunqian Cheng (Primary curator, UCSC)
- Benjamin Princen (Co-curator, UCSC)
- Roberto Manduchi (Principal Investigator, UCSC)
- Loni Halsted-Ruelas (Dataset curation and experimental support, UCSC)
Data was collected at UCSC campus buildings with appropriate permissions.
Annotations
Personal and Sensitive Information
Privacy Measures:
- All human subjects in images have been manually blurred to protect privacy
- Full-body blurring was applied to ensure individuals cannot be identified
- No personally identifiable information (PII) is included in metadata files
- Timestamps do not reveal specific dates that could identify individuals
Data Collection Ethics:
- Data collection was conducted with appropriate institutional permissions
- Privacy considerations were prioritized throughout the dataset creation process
- The dataset complies with privacy and ethical guidelines for research data
Bias, Risks, and Limitations
Limitations
Geographic and Environmental Bias:
- Dataset is limited to UCSC campus buildings, which may not represent all indoor environments
- All data was collected in academic/institutional settings and may not generalize to residential or commercial spaces
- Limited to four specific buildings, which may not capture the full diversity of indoor architectural styles
Technical Limitations:
- Depth data quality depends on ARKit LiDAR capabilities and lighting conditions
- Pre-computed depth maps use Depth Pro model, which may have limitations in certain scenarios
- Camera poses may have accumulated errors in texture-less environments
- Floor plans are simplified 2D representations and may not capture all architectural details
- Due to manual labeling, the ground truth poses may have reduced accuracy.
Dataset Size:
- 80 sessions may be insufficient for large-scale deep learning applications
- Some buildings have fewer sessions than others (imbalanced distribution)
Privacy Considerations:
- While humans are blurred, some contextual information about activities may remain visible
Recommendations
Users should be aware of the following:
Generalization: Results obtained on this dataset may not generalize to other indoor environments, particularly those with different architectural styles or lighting conditions
Privacy: While privacy measures have been taken, users should be mindful of potential privacy implications when using this dataset
Commercial Use: This dataset is licensed for non-commercial use only. For commercial applications, contact the dataset creators
Validation: For production systems, additional validation on diverse environments is recommended
Ethical Use: The dataset should be used responsibly and in accordance with research ethics guidelines
Citation
BibTeX:
@article{Cheng_Manduchi_2024,
  title={PALMS: Plane-based Accessible Indoor Localization Using Mobile Smartphones},
  author={Cheng, Yunqian and Manduchi, Roberto},
  journal={eScholarship, University of California},
  url={https://escholarship.org/uc/item/7bw6797s},
  year={2024},
  month={Aug}
}
@article{Cheng_Princen_Manduchi_2025,
  title={PALMS+: Modular Image-Based Floor Plan Localization Leveraging Depth Foundation Model},
  author={Cheng, Yunqian and Princen, Benjamin and Manduchi, Roberto},
  journal={arXiv preprint arXiv:2511.09724},
  year={2025},
  note={Accepted to IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2026, Application Track},
  url={https://arxiv.org/abs/2511.09724}
}
APA:
Cheng, Y., & Manduchi, R. (2024). PALMS: Plane-based Accessible Indoor Localization Using Mobile Smartphones. eScholarship, University of California. https://escholarship.org/uc/item/7bw6797s
Cheng, Y., Princen, B., & Manduchi, R. (2025). PALMS+: Modular Image-Based Floor Plan Localization Leveraging Depth Foundation Model. arXiv preprint arXiv:2511.09724. Accepted to IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2026, Application Track.
Glossary
- PALMS: Plane-based Accessible Indoor Localization Using Mobile Smartphones
- PALMS+: Extended version of PALMS using monocular depth estimation instead of LiDAR
- CES: Certainly Empty Space - a constraint used for layout matching
- Depth Pro: Foundation model for zero-shot metric monocular depth estimation
- ARKit: Apple's augmented reality framework with LiDAR capabilities
- RoNIN: Model from the paper RoNIN: Robust Neural Inertial Navigation in the Wild: Benchmark, Evaluations, and New Methods
- FOV: Field of View
- BE: Baskin Engineering building
- E2: Engineering 2 building
- PS: Physical Sciences building
- SVC: Silicon Valley Campus
More Information
For more information about using this dataset, please refer to the GitHub repository. It includes example code, configuration files, and visualization tools for working with the dataset.
Dataset Card Authors
- Yunqian Cheng ([email protected])
- Benjamin Princen
- Roberto Manduchi
Dataset Card Contact
For questions about this dataset, please contact:
- Yunqian Cheng: [email protected]