# sciexp/fixtures
Test fixtures and example datasets for sciexp.
## Quick start
```shell
git clone https://huggingface.co/datasets/sciexp/fixtures
cd fixtures
git lfs pull
```
## Datasets

### Space dataset
A relational dataset about space exploration that demonstrates DuckLake workflows. It contains tables for astronauts, missions, spacecraft, and mission crew assignments.
Run the bundled example queries:

```shell
duckdb -box < queries/space.sql
```
### Manual DuckDB session
DuckDB can access Hugging Face datasets directly via the `hf://` protocol. See also the DuckDB blog post *Access 150k+ Datasets from Hugging Face with DuckDB*.
Attach directly from Hugging Face:

```sql
INSTALL ducklake;
LOAD ducklake;
ATTACH 'ducklake:hf://datasets/sciexp/fixtures/lakes/frozen/space.db' AS space;
SHOW TABLES FROM space.main;
```
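Once attached, the lake can be queried with ordinary SQL. A minimal sketch, assuming the dataset exposes tables named `astronauts` and `mission_crew` that share an `astronaut_id` column (the actual names are reported by `SHOW TABLES` and `DESCRIBE`):

```sql
-- List a few crew assignments by joining the astronaut roster to the
-- crew-assignment table. Table and column names here are assumptions
-- based on the dataset description; verify with SHOW TABLES / DESCRIBE.
SELECT a.name, mc.role
FROM space.main.astronauts AS a
JOIN space.main.mission_crew AS mc USING (astronaut_id)
LIMIT 10;
```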
Or from a local clone:

```sql
ATTACH 'ducklake:lakes/frozen/space.db' AS space;
```
## Attribution
The space missions dataset originates from:
- marhar/frozen - Frozen DuckLake Demo
- marhar/duckdb_tools - DuckDB tools including frozen-ducklake workflow scripts
## Repository layout

```
lakes/frozen/   DuckLake databases
  space.db      metadata database
  space/        Parquet data files
queries/        SQL scripts
```
## Contributing

### Authentication and push workflow
Push access requires Hugging Face authentication:

```shell
hf auth login --token "${HF_TOKEN}" --add-to-git-credential
```
Push changes:

```shell
git lfs push --all origin
git push
```