---
license: cc-by-4.0
task_categories:
- feature-extraction
- image-to-image
language:
- en
tags:
- remote-sensing
- aerial-imagery
- orthomosaic
- lighting-invariance
- semantic-stability
- vision-encoder
- time-series
- dinov2
- dinov3
- embeddings
- multi-config
pretty_name: Light Stable Semantics
size_categories:
- n<1K
---

# Light Stable Semantics Dataset

## Dataset Description

This dataset contains aerial orthomosaic tiles captured at three different times of day (10:00, 12:00, and 15:00). The dataset is organized into three configurations: `default` (raw images plus canopy height), `dinov2_base` (DINOv2 embeddings), and `dinov3_sat` (DINOv3 embeddings). All configurations share consistent train/test splits with matching tile identifiers for cross-referencing. The dataset is designed for training vision encoders that maintain consistent feature representations despite changes in illumination, with applications in remote sensing and environmental monitoring.
30
-
31
- ## Dataset Configurations
32
-
33
- The dataset is organized into three configurations, each serving different research needs:
34
-
35
- ### Configuration: `default`
36
- Raw imagery and environmental data for direct analysis:
37
-
38
- | Feature | Type | Shape | Description |
39
- |---------|------|--------|-------------|
40
- | `idx` | string | - | Tile identifier in format `{ROW}_{COL}` for geographic referencing |
41
- | `image_t0` | Image | 1024×1024×3 | Morning capture at 10:00 AM (time=1000) |
42
- | `image_t1` | Image | 1024×1024×3 | Noon capture at 12:00 PM (time=1200) |
43
- | `image_t2` | Image | 1024×1024×3 | Afternoon capture at 3:00 PM (time=1500) |
44
- | `canopy_height` | int32 | [1024, 1024] | Canopy height grid in centimeters from canopy height model |
45
-
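Missing canopy-height cells are stored as an int32 sentinel value (see the notes further below). A minimal sketch, using a small synthetic grid as a stand-in for a real 1024×1024 `canopy_height` array, of masking the sentinel before computing statistics:

```python
import numpy as np

NODATA = -2147483648  # int32 sentinel used for missing canopy-height cells

# Synthetic stand-in for sample_default['canopy_height'] (real grids are 1024x1024)
canopy_cm = np.array([[120, NODATA, 340],
                      [NODATA, 80, 260]], dtype=np.int32)

valid = canopy_cm != NODATA                      # boolean mask of usable cells
mean_height_m = canopy_cm[valid].mean() / 100.0  # centimeters -> meters

print(f"valid cells: {valid.sum()}, mean height: {mean_height_m:.2f} m")
# → valid cells: 4, mean height: 2.00 m
```

Filtering before aggregation matters here: averaging the raw grid would pull the result toward the large negative sentinel.
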
46
- ### Configuration: `dinov2_base`
47
- Pre-computed DINOv2 Base (ViT-B/14) embeddings:
48
-
49
- | Feature | Type | Shape | Description |
50
- |---------|------|--------|-------------|
51
- | `idx` | string | - | Tile identifier matching other configurations |
52
- | `cls_t0` | float32 | [768] | DINOv2 CLS token (global features) for morning image |
53
- | `cls_t1` | float32 | [768] | DINOv2 CLS token (global features) for noon image |
54
- | `cls_t2` | float32 | [768] | DINOv2 CLS token (global features) for afternoon image |
55
- | `patch_t0` | float32 | [256, 768] | DINOv2 patch tokens (16×16 spatial grid) for morning image |
56
- | `patch_t1` | float32 | [256, 768] | DINOv2 patch tokens (16×16 spatial grid) for noon image |
57
- | `patch_t2` | float32 | [256, 768] | DINOv2 patch tokens (16×16 spatial grid) for afternoon image |
58
-
59
- ### Configuration: `dinov3_sat`
60
- Pre-computed DINOv3 Large (ViT-L/16) embeddings with satellite pretraining:
61
-
62
- | Feature | Type | Shape | Description |
63
- |---------|------|--------|-------------|
64
- | `idx` | string | - | Tile identifier matching other configurations |
65
- | `cls_t0` | float32 | [1024] | DINOv3 CLS token (global features) for morning image |
66
- | `cls_t1` | float32 | [1024] | DINOv3 CLS token (global features) for noon image |
67
- | `cls_t2` | float32 | [1024] | DINOv3 CLS token (global features) for afternoon image |
68
- | `patch_t0` | float32 | [196, 1024] | DINOv3 patch tokens (14×14 spatial grid) for morning image |
69
- | `patch_t1` | float32 | [196, 1024] | DINOv3 patch tokens (14×14 spatial grid) for noon image |
70
- | `patch_t2` | float32 | [196, 1024] | DINOv3 patch tokens (14×14 spatial grid) for afternoon image |
71
-
72
- **Notes:**
73
- - Canopy height values represent centimeters above ground; missing data is encoded as `-2147483648`
74
- - All configurations use consistent 80%/20% train/test splits with matching `idx` values
75
- - Patch tokens represent spatial features in different grid resolutions: 16×16 (DINOv2) vs 14×14 (DINOv3)
76
-
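Since configurations are matched by `idx` values rather than by any documented guarantee about row order, a robust way to join them is an explicit `idx` → row-index lookup. A minimal sketch with small stand-in id lists (the real ones would come from `dataset['train']['idx']`):

```python
# Stand-ins for dataset_default['train']['idx'] and dataset_dinov2['train']['idx']
default_ids = ["0_0", "0_1", "1_0"]
dinov2_ids = ["1_0", "0_0", "0_1"]  # same tiles, hypothetical different row order

# Map each tile id to its row index in the dinov2 configuration
dinov2_row = {tile_id: i for i, tile_id in enumerate(dinov2_ids)}

# For each default-config row, find the matching dinov2 row
aligned = [dinov2_row[tile_id] for tile_id in default_ids]
print(aligned)  # → [1, 2, 0]
```

With this mapping, `dataset_dinov2['train'][aligned[i]]` holds the embeddings for the same tile as `dataset_default['train'][i]`.
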
## Usage Example

```python
from datasets import load_dataset

# Load specific configurations
dataset_default = load_dataset("mpg-ranch/light-stable-semantics", "default")
dataset_dinov2 = load_dataset("mpg-ranch/light-stable-semantics", "dinov2_base")
dataset_dinov3 = load_dataset("mpg-ranch/light-stable-semantics", "dinov3_sat")

# Access raw imagery and canopy height
sample_default = dataset_default['train'][0]
morning_image = sample_default['image_t0']       # RGB image
noon_image = sample_default['image_t1']          # RGB image
afternoon_image = sample_default['image_t2']     # RGB image
canopy_height = sample_default['canopy_height']  # Height grid in cm
tile_id = sample_default['idx']                  # Geographic identifier

# Access DINOv2 embeddings (same tile via matching idx)
sample_dinov2 = dataset_dinov2['train'][0]
dinov2_cls_morning = sample_dinov2['cls_t0']        # Global features (768-dim)
dinov2_patches_morning = sample_dinov2['patch_t0']  # Spatial features (256×768)

# Access DINOv3 embeddings (same tile via matching idx)
sample_dinov3 = dataset_dinov3['train'][0]
dinov3_cls_morning = sample_dinov3['cls_t0']        # Global features (1024-dim)
dinov3_patches_morning = sample_dinov3['patch_t0']  # Spatial features (196×1024)

# Verify consistent tile identifiers across configurations
assert sample_default['idx'] == sample_dinov2['idx'] == sample_dinov3['idx']

# Access test sets for evaluation
test_default = dataset_default['test'][0]
test_dinov2 = dataset_dinov2['test'][0]
test_dinov3 = dataset_dinov3['test'][0]
```
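
One way to quantify lighting stability is the cosine similarity between CLS tokens of the same tile at different capture times: values near 1 suggest illumination-invariant global features. A minimal sketch with random vectors standing in for real `cls_t0`/`cls_t2` embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
cls_t0 = rng.normal(size=768)                  # stand-in for sample_dinov2['cls_t0']
cls_t2 = cls_t0 + 0.1 * rng.normal(size=768)   # slightly perturbed "afternoon" token

sim = cosine_similarity(cls_t0, cls_t2)
print(f"morning/afternoon CLS similarity: {sim:.3f}")
```

Averaging this score over tiles gives a simple scalar measure of how stable an encoder's global features are across the 10:00/12:00/15:00 captures.
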

## Pre-computed Embeddings

The dataset includes pre-computed embeddings from two state-of-the-art vision transformers:

### DINOv2 Base (`facebook/dinov2-base`)
- **Architecture**: Vision Transformer Base with 14×14 patch size
- **CLS Tokens**: 768-dimensional global feature vectors capturing scene-level semantics
- **Patch Tokens**: 256×768 arrays (16×16 spatial grid) encoding local features
- **Training**: Self-supervised learning on natural images

### DINOv3 Large (`facebook/dinov3-vitl16-pretrain-sat493m`)
- **Architecture**: Vision Transformer Large with 16×16 patch size
- **CLS Tokens**: 1024-dimensional global feature vectors capturing scene-level semantics
- **Patch Tokens**: 196×1024 arrays (14×14 spatial grid) encoding local features
- **Training**: Self-supervised learning with satellite imagery pretraining

**Purpose**: Enable efficient training and analysis without requiring on-the-fly feature extraction, while providing a comparison between natural-image and satellite-pretrained models.

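Patch tokens are stored flattened; to work with them spatially (e.g., to compare features against canopy-height regions), reshape the token axis back into the model's grid: 16×16 for DINOv2, 14×14 for DINOv3. A minimal sketch with zero arrays standing in for real `patch_t0` features:

```python
import numpy as np

# Stand-ins for patch_t0 from each embedding configuration
dinov2_patches = np.zeros((256, 768), dtype=np.float32)   # flattened 16x16 grid, 768-dim
dinov3_patches = np.zeros((196, 1024), dtype=np.float32)  # flattened 14x14 grid, 1024-dim

# Recover the (rows, cols, channels) spatial layout
dinov2_grid = dinov2_patches.reshape(16, 16, 768)
dinov3_grid = dinov3_patches.reshape(14, 14, 1024)

print(dinov2_grid.shape, dinov3_grid.shape)  # → (16, 16, 768) (14, 14, 1024)
```

Because the two models use different grid resolutions, spatial comparisons between them require resampling one grid to the other's resolution first.
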
## Dataset Information

- **Location**: Lower Partridge Alley, MPG Ranch, Montana, USA
- **Survey Date**: November 7, 2024
- **Coverage**: 620 complete tile sets (80% train / 20% test split via seeded random sampling)
- **Resolution**: 1024×1024 pixels at 1.2 cm ground resolution
- **Total Size**: ~6.4 GB of image data plus embeddings
- **Quality Control**: Tiles with transient objects, such as vehicles, were excluded from the dataset. RGB imagery and canopy rasters are removed together to keep modalities aligned.

## Use Cases

This dataset is intended for:
- Developing vision encoders robust to lighting variations
- Semantic stability research in computer vision
- Time-invariant feature learning
- Remote sensing applications requiring lighting robustness
- Comparative analysis of illumination effects on vision model features

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{mpg_ranch_light_stable_semantics_2024,
  title={Light Stable Semantics Dataset},
  author={Kyle Doherty and Erik Samose and Max Gurinas and Brandon Trabucco and Ruslan Salakhutdinov},
  year={2024},
  month={November},
  url={https://huggingface.co/datasets/mpg-ranch/light-stable-semantics},
  publisher={Hugging Face},
  note={Aerial orthomosaic tiles with DINOv2 and DINOv3 embeddings for light-stable semantic vision encoder training},
  location={MPG Ranch, Montana, USA},
  survey_date={2024-11-07},
  organization={MPG Ranch}
}
```

## License

This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.

**Attribution Requirements:**
- You must give appropriate credit to MPG Ranch
- Provide a link to the license
- Indicate if changes were made to the dataset