# The Neural Pile (primate)
This dataset contains 40 billion tokens of curated spiking neural activity data recorded from primates.
The code and detailed instructions for creating this dataset from scratch can be found at this GitHub repository.
The dataset takes up about 40 GB on disk when stored as memory-mapped `.arrow` files (the format used by the local caching system of the Hugging Face `datasets` library). The dataset comes with separate train and test splits. You can load, e.g., the train split of the dataset as follows:

```python
from datasets import load_dataset

ds = load_dataset("eminorhan/neural-pile-primate", num_proc=32, split='train')
```
and display the first data row:

```python
>>> print(ds[0])
{
  'spike_counts': ...,
  'subject_id': 'sub-Reggie',
  'session_id': 'sub-Reggie_ses-20170115T125333_behavior+ecephys',
  'segment_id': 'segment_2',
  'source_dataset': 'even-chen'
}
```
where:

- `spike_counts` is a serialized array containing the spike count data. Its shape is `(n, t)`, where `n` is the number of simultaneously recorded neurons in that session and `t` is the number of time bins (20 ms bins).
- `source_dataset` is an identifier string indicating the source dataset from which that particular row of data came.
- `subject_id` is an identifier string indicating the subject the data were recorded from.
- `session_id` is an identifier string indicating the recording session.
- `segment_id` is a segment (or chunk) identifier, useful when a session was split into smaller chunks. We split long recording sessions (>10M tokens) into equal-sized chunks of no more than 10M tokens each, so the whole session can be reproduced from its chunks, if desired (see the sketch after this list).
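For illustration, here is a minimal sketch of turning a row's spike counts into a NumPy array and stitching a session back together from its segments. The direct `np.array` conversion, the `segment_<k>` naming scheme, and concatenation along the time axis are assumptions based on the field descriptions above, not guaranteed specifics; see the GitHub repository for the exact serialization format.

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("eminorhan/neural-pile-primate", num_proc=32, split='train')

# Assumption: 'spike_counts' converts directly to a NumPy array of shape (n, t);
# the exact on-disk serialization may differ.
counts = np.array(ds[0]['spike_counts'])
print(counts.shape)  # (neurons, 20 ms time bins)

# Reconstruct one full session from its chunks. Assumptions: segment ids are
# named 'segment_0', 'segment_1', ..., and chunks concatenate along the time axis.
session = ds[0]['session_id']
segments = ds.filter(lambda r: r['session_id'] == session)
ordered = sorted(segments, key=lambda r: int(r['segment_id'].split('_')[-1]))
full_session = np.concatenate([np.array(r['spike_counts']) for r in ordered], axis=1)
```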
The dataset rows are pre-shuffled, so users do not have to re-shuffle them.
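If you would rather not cache the full ~40 GB locally, the `datasets` library's streaming mode can iterate over rows lazily. Below is a minimal sketch; note that `num_proc` does not apply in streaming mode.

```python
from datasets import load_dataset

# Stream the train split lazily instead of downloading ~40 GB of .arrow files
ds_stream = load_dataset("eminorhan/neural-pile-primate", split='train', streaming=True)

# Inspect the first few rows
for i, row in enumerate(ds_stream):
    print(row['subject_id'], row['session_id'], row['segment_id'])
    if i == 2:
        break
```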