---
license: mit
---

# Dataset Card for DenSpine

## Volumetric Files

The dataset comprises dendrites from 3 brain samples: `seg_den` (also known as `M50`), `mouse` (`M10`), and `human` (`H10`).

Every species has 3 volumetric `.h5` files:

- `{species}_raw.h5`: instance segmentation of entire dendrites in the volume (labelled `1-50` or `1-10`), where trunks and spines share the same label
- `{species}_spine.h5`: "binary" segmentation, where trunks are labelled `0` and spines are labelled with their `raw` dendrite label
- `{species}_seg.h5`: spine instance segmentation (labelled `51-...` or `11-...`), where every spine in the volume is labelled uniquely
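
The three volumes are voxel-aligned, so the label conventions above can be sanity-checked against each other. A minimal sketch on toy arrays (the arrays below are illustrative stand-ins, not real data):

```python
import numpy as np

# Toy 1-D "volume" with two dendrites (raw labels 1 and 2).
# In the real data these are 3-D arrays loaded from the .h5 files above.
raw   = np.array([1, 1, 1, 2, 2, 2])      # trunks and spines share the dendrite label
spine = np.array([0, 1, 1, 0, 2, 2])      # 0 = trunk voxel, otherwise the raw dendrite label
seg   = np.array([0, 51, 52, 0, 53, 53])  # unique per-spine labels starting at 51

# Spine voxels carry their parent dendrite's raw label in the spine volume.
assert np.array_equal(np.where(spine > 0, raw, 0), spine)

# Map each unique spine id in seg back to its parent dendrite's raw label.
parent = {int(s): int(raw[seg == s][0]) for s in np.unique(seg) if s > 0}
print(parent)  # {51: 1, 52: 1, 53: 2}
```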

## Point Cloud Files

In addition, we provide preprocessed point clouds sampled along each dendrite's centerline skeleton, for ease of use when evaluating point-cloud-based methods.

```python
import numpy as np

data = np.load(f"{species}_1000000_10000/{idx}.npz", allow_pickle=True)
trunk_id, pc, trunk_pc, label = data["trunk_id"], data["pc"], data["trunk_pc"], data["label"]
```

- `trunk_id` is an integer corresponding to the dendrite's `raw` label
- `pc` is a shape-`[1000000, 3]` isotropic point cloud
- `trunk_pc` is a shape-`[skeleton_length, 3]` (ordered) array representing the centerline of the trunk of `pc`
- `label` is a shape-`[1000000]` array whose values are the `seg` labels of each point in the point cloud
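
With these per-point labels, trunk points and individual spines can be separated by simple masking. A sketch on synthetic arrays with the shapes described above (downsized to 1,000 points for brevity; real arrays come from the `.npz` files):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins, not real data.
n = 1000
pc = rng.standard_normal((n, 3))          # isotropic point cloud
trunk_pc = rng.standard_normal((200, 3))  # ordered trunk centerline
label = rng.integers(0, 3, size=n)        # 0 = trunk; >0 stands in for seg spine ids (51-... in the real data)

# Split trunk points from spine points using the per-point labels.
trunk_points = pc[label == 0]
spine_points = {int(s): pc[label == s] for s in np.unique(label) if s > 0}

# Every point is either on the trunk or on exactly one spine.
assert trunk_points.shape[0] + sum(v.shape[0] for v in spine_points.values()) == n
```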

We provide a comprehensive example of how to instantiate a PyTorch dataloader with our dataset in `dataloader.py` (optionally using the FFD transform with `frenet=True`).
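
`dataloader.py` is the reference implementation; as a rough sketch, a minimal map-style dataset over these `.npz` files only needs `__len__` and `__getitem__` to be usable with `torch.utils.data.DataLoader` (the class name `DenSpinePointClouds` below is hypothetical, not the one in `dataloader.py`):

```python
import glob
import os

import numpy as np


class DenSpinePointClouds:
    """Minimal map-style dataset over {species}_1000000_10000/{idx}.npz files.

    Hypothetical sketch: implements only __len__/__getitem__, which is all
    torch.utils.data.DataLoader requires of a map-style dataset.
    """

    def __init__(self, root):
        # One sample per preprocessed point-cloud file.
        self.files = sorted(glob.glob(os.path.join(root, "*.npz")))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        data = np.load(self.files[idx], allow_pickle=True)
        return data["pc"], data["trunk_pc"], data["label"], int(data["trunk_id"])
```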

## Training splits for `seg_den`

The folds used for training/evaluating on the `seg_den` dataset, based on `raw` labels, are defined as follows:

```python
seg_den_folds = [
    [3, 5, 11, 12, 23, 28, 29, 32, 39, 42],
    [8, 15, 19, 27, 30, 34, 35, 36, 46, 49],
    [9, 14, 16, 17, 21, 26, 31, 33, 43, 44],
    [2, 6, 7, 13, 18, 24, 25, 38, 41, 50],
    [1, 4, 10, 20, 22, 37, 40, 45, 47, 48],
]
```
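
The folds partition the `raw` labels `1-50`, so a hold-one-fold-out split for cross-validation is a matter of list bookkeeping (the `split` helper below is illustrative, not part of the dataset):

```python
# Folds as defined above: each raw label 1-50 appears in exactly one fold.
seg_den_folds = [
    [3, 5, 11, 12, 23, 28, 29, 32, 39, 42],
    [8, 15, 19, 27, 30, 34, 35, 36, 46, 49],
    [9, 14, 16, 17, 21, 26, 31, 33, 43, 44],
    [2, 6, 7, 13, 18, 24, 25, 38, 41, 50],
    [1, 4, 10, 20, 22, 37, 40, 45, 47, 48],
]

# Disjoint and complete over 1-50.
assert sorted(l for fold in seg_den_folds for l in fold) == list(range(1, 51))


def split(folds, val_fold):
    """Hold out one fold of raw labels for validation, train on the rest."""
    val = folds[val_fold]
    train = [l for i, fold in enumerate(folds) if i != val_fold for l in fold]
    return train, val


train_labels, val_labels = split(seg_den_folds, val_fold=0)
assert len(train_labels) == 40 and len(val_labels) == 10
```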