daybyday666 Mateo committed on
Commit 13296b5 · verified · 0 Parent(s):

Duplicate from pyronear/pyro-sdis

Co-authored-by: Mateo LOSTANLEN <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,240 @@
+ ---
+ license: apache-2.0
+ dataset_info:
+   features:
+   - name: image
+     dtype: image
+   - name: annotations
+     dtype: string
+   - name: image_name
+     dtype: string
+   - name: partner
+     dtype: string
+   - name: camera
+     dtype: string
+   - name: date
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 2940743706.011
+     num_examples: 29537
+   - name: val
+     num_bytes: 391545545.068
+     num_examples: 4099
+   download_size: 3284043758
+   dataset_size: 3332289251.079
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: val
+     path: data/val-*
+ tags:
+ - wildfire
+ - smoke
+ - yolo
+ - pyronear
+ - ultralytics
+ size_categories:
+ - 10K<n<100K
+ ---
+
+ # Pyro-SDIS Dataset
+
+ ![Pyronear Logo](https://huggingface.co/datasets/pyronear/pyro-sdis/resolve/main/logo.png)
+
+
+ ---
+
+ ## About the Dataset
+
+ Pyro-SDIS is a dataset designed for wildfire smoke detection using AI models. It is developed in collaboration with the Fire and Rescue Services (SDIS) in France and the dedicated volunteers of the Pyronear association.
+
+ The images in this dataset come from Pyronear cameras installed with the support of our SDIS partners. These images have been carefully annotated by Pyronear volunteers, whose tireless efforts we deeply appreciate.
+
+ We extend our heartfelt thanks to all Pyronear volunteers and our SDIS partners for their trust and support:
+
+ - **Force 06**
+ - **SDIS 07**
+ - **SDIS 12**
+ - **SDIS 77**
+
+ Additionally, we express our gratitude to the DINUM for their financial and strategic support through the AIC, Etalab, and the Legal Service. Special thanks also go to the Mission Stratégie Prospective (MSP) for their guidance and collaboration.
+
+ The Pyro-SDIS Subset contains **33,636 images**, including:
+
+ - **28,103 images with smoke**
+ - **31,975 smoke instances**
+
+ This dataset is formatted to be compatible with the Ultralytics YOLO framework, enabling efficient training of object detection models.
+
+ ---
+
+ Stay tuned for the full release in **January 2025**, which will include additional images and refined annotations. Thank you for your interest and support in advancing wildfire detection technologies!
+
+
+ ## Dataset Overview
+
+ ### Contents
+ The Pyro-SDIS Subset contains images and annotations for wildfire smoke detection. The dataset is structured with the following metadata for each image:
+
+ - **Image Path**: File path to the image.
+ - **Annotations**: YOLO-format bounding box annotations for smoke detection (parsed in the sketch below):
+   - `class_id`: Class label (e.g., smoke).
+   - `x_center`, `y_center`: Normalized center coordinates of the bounding box.
+   - `width`, `height`: Normalized width and height of the bounding box.
+ - **Metadata**:
+   - `partner`: Partner organization responsible for the camera (e.g., SDIS 07, Force 06).
+   - `camera`: Camera identifier.
+   - `date`: Date of image capture (formatted as `YYYY-MM-DDTHH-MM-SS`).
+   - `image_name`: Original file name of the image.
+ - **Split**: Indicates whether the image belongs to the training or validation set (`train` or `val`).
+
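+ As a quick illustration of the annotation format, here is a minimal sketch that turns one annotation string into pixel-coordinate boxes. The helper name and the image size are illustrative only; they are not part of the dataset tooling.
+
+ ```python
+ # Hypothetical helper: convert "class_id x_center y_center width height" lines
+ # (normalized YOLO format) into (class_id, x1, y1, x2, y2) pixel boxes.
+ def yolo_to_pixel_boxes(annotations: str, img_width: int, img_height: int):
+     boxes = []
+     for line in annotations.strip().splitlines():
+         if not line.strip():
+             continue  # skip blank lines (assumed for images without smoke)
+         class_id, xc, yc, w, h = line.split()
+         xc, yc = float(xc) * img_width, float(yc) * img_height
+         w, h = float(w) * img_width, float(h) * img_height
+         boxes.append((int(class_id), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2))
+     return boxes
+
+ # Example: one smoke box on an assumed 1280x720 frame
+ print(yolo_to_pixel_boxes("0 0.5 0.5 0.1 0.2", img_width=1280, img_height=720))
+ ```
+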
+ ### Example Record
+ Each record in the dataset contains the following structure:
+ ```json
+ {
+   "image": "./images/train/partner_camera_date.jpg",
+   "annotations": "0 0.5 0.5 0.1 0.2",
+   "split": "train",
+   "image_name": "partner_camera_date.jpg",
+   "partner": "partner",
+   "camera": "camera",
+   "date": "YYYY-MM-DDTHH-MM-SS"
+ }
+ ```
+
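+ To take a quick look at a real record without exporting anything, you can stream a single example from the Hub. This is only a convenience snippet; the field names follow the schema above.
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream the training split so nothing is downloaded up front
+ ds = load_dataset("pyronear/pyro-sdis", split="train", streaming=True)
+ example = next(iter(ds))
+
+ # "image" is a PIL.Image.Image; the other fields are strings
+ print(example["image_name"], example["partner"], example["camera"], example["date"])
+ print(example["annotations"])
+ ```
+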
+ ---
+
+ ### Splits
+ The dataset is divided into:
+ - **Training split** (`train`): 29,537 images, used for training the model.
+ - **Validation split** (`val`): 4,099 images, used to evaluate model performance.
+
+
+
+ ## Exporting the Dataset for Ultralytics Training
+
+ To train a YOLO model using the Ultralytics framework, the dataset must be structured as follows:
+ - **Images**: Stored in `images/train/` and `images/val/` directories.
+ - **Annotations**: Stored in YOLO-compatible format in `labels/train/` and `labels/val/` directories.
+
+ ### Steps to Export the Dataset
+
+ 1. **Install Required Libraries**:
+    ```bash
+    pip install datasets ultralytics
+    ```
+
+ 2. **Define Paths**:
+    Set up the directory structure for the Ultralytics dataset:
+    ```python
+    import os
+    from datasets import load_dataset
+
+    # Define paths
+    REPO_ID = "pyronear/pyro-sdis"
+    OUTPUT_DIR = "./pyro-sdis"
+    IMAGE_DIR = os.path.join(OUTPUT_DIR, "images")
+    LABEL_DIR = IMAGE_DIR.replace("images", "labels")
+
+    # Create the directory structure
+    for split in ["train", "val"]:
+        os.makedirs(os.path.join(IMAGE_DIR, split), exist_ok=True)
+        os.makedirs(os.path.join(LABEL_DIR, split), exist_ok=True)
+
+    # Load the dataset from the Hugging Face Hub
+    dataset = load_dataset(REPO_ID)
+    ```
+
+ 3. **Export Dataset**:
+    Use the following function to save the dataset in Ultralytics format:
+    ```python
+    def save_ultralytics_format(dataset_split, split):
+        """
+        Save a dataset split into the Ultralytics format.
+        Args:
+            dataset_split: The dataset split (e.g., dataset["train"])
+            split: "train" or "val"
+        """
+        for example in dataset_split:
+            # Save the image to the appropriate folder
+            image = example["image"]  # PIL.Image.Image
+            image_name = example["image_name"]  # Original file name
+            output_image_path = os.path.join(IMAGE_DIR, split, image_name)
+
+            # Save the image object to disk
+            image.save(output_image_path)
+
+            # Save label
+            annotations = example["annotations"]
+            label_name = image_name.replace(".jpg", ".txt").replace(".png", ".txt")
+            output_label_path = os.path.join(LABEL_DIR, split, label_name)
+
+            with open(output_label_path, "w") as label_file:
+                label_file.write(annotations)
+
+    # Save train and validation splits
+    save_ultralytics_format(dataset["train"], "train")
+    save_ultralytics_format(dataset["val"], "val")
+
+    print("Dataset exported to Ultralytics format.")
+    ```
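+
+    As an optional sanity check (not part of the original script), you can verify that every exported image has a matching label file:
+    ```python
+    # Count exported files per split; the two counts should match
+    for split in ["train", "val"]:
+        n_images = len(os.listdir(os.path.join(IMAGE_DIR, split)))
+        n_labels = len(os.listdir(os.path.join(LABEL_DIR, split)))
+        print(f"{split}: {n_images} images, {n_labels} labels")
+    ```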
+
+ 4. **Directory Structure**:
+    After running the script, the dataset will have the following structure:
+    ```
+    pyro-sdis/
+    ├── images/
+    │   ├── train/
+    │   ├── val/
+    ├── labels/
+    │   ├── train/
+    │   ├── val/
+    ```
+
+ ---
+
+ ### Training with Ultralytics YOLO
+
+ 1. **Download the `data.yaml` File**:
+    Use the following code to download the configuration file:
+    ```python
+    from huggingface_hub import hf_hub_download
+
+    # Set repo_id and repo_type for the dataset repository
+    repo_id = "pyronear/pyro-sdis"
+    filename = "data.yaml"
+
+    # Download data.yaml to the current directory
+    yaml_path = hf_hub_download(repo_id=repo_id, filename=filename, repo_type="dataset", local_dir=".")
+    print(f"data.yaml downloaded to: {yaml_path}")
+    ```
+
+ 2. **Train the Model**:
+    Install the Ultralytics YOLO framework and train the model:
+    ```bash
+    pip install ultralytics
+    yolo task=detect mode=train data=data.yaml model=yolov8n.pt epochs=50 imgsz=640 single_cls=True
+    ```
+
+
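+ If you prefer the Python API over the CLI, the equivalent call (same assumptions: `data.yaml` in the current directory and a `yolov8n.pt` base model) looks like this:
+
+ ```python
+ from ultralytics import YOLO
+
+ # Load a pretrained YOLOv8 nano model and fine-tune it on the single smoke class
+ model = YOLO("yolov8n.pt")
+ model.train(data="data.yaml", epochs=50, imgsz=640, single_cls=True)
+ ```
+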
+ ## License
+
+ The dataset is released under the [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
+
+ ## Citation
+
+ If you use this dataset, please cite:
+ ```
+ @dataset{pyro-sdis,
+   author    = {Pyronear Team},
+   title     = {Pyro-SDIS Dataset},
+   year      = {2024},
+   publisher = {Hugging Face},
+   url       = {https://huggingface.co/pyronear/pyro-sdis}
+ }
+ ```
data.yaml ADDED
@@ -0,0 +1,6 @@
+ # data.yaml
+ train: pyro-sdis/images/train # Path to training images
+ val: pyro-sdis/images/val # Path to validation images
+
+ nc: 1 # Number of classes
+ names: ['smoke'] # Class names
data/train-00000-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cebaeff1d94d23513a59a5652ce04303d02297dd6126dbf9d4ea0d37d7eabae
+ size 480959863
data/train-00001-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:feb69d99b41e42cf9d73582ef2ff4237c7bbf5ad3f733a5ca0418e00795bd5b5
+ size 485351693
data/train-00002-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79ac3a22306cdafad474b4f56288b254d400e30ef1b2ed7d1e961cefd32e6858
+ size 482115894
data/train-00003-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce112887c5eb1a44fe4e5bae6843714a9574c89fa5d5215580b1e2ed4b64822e
+ size 482865791
data/train-00004-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1352d4d6b2443019fac83967b463bd0daab9030987363f26bae4ba8691653884
+ size 479975330
data/train-00005-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05c000f07253dd60858a629119a5a2d94c8f8427b54f6f2fd503bbe06f933065
+ size 482940251
data/val-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3929f81f755625e35c8b18f27f95cff8726bc279cedaf1c3d048b0fd35d6c57
+ size 389834936
logo.png ADDED

Git LFS Details

  • SHA256: a4965cad8eac1ad291ddf8a70c5c4fd64f972c825927e5020cec78c9cc948c85
  • Pointer size: 130 Bytes
  • Size of remote file: 12.8 kB