|
|
--- |
|
|
license: cc-by-nc-sa-4.0 |
|
|
task_categories: |
|
|
- object-detection |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- image |
|
|
- drone |
|
|
- uav |
|
|
- search-and-rescue |
|
|
- person |
|
|
- mav |
|
|
pretty_name: ForestPersons
|
|
size_categories: |
|
|
- 10K<n<100K |
|
|
--- |
|
|
# ForestPersons Dataset |
|
|
|
|
|
## Dataset Summary |
|
|
|
|
|
ForestPersons is a large-scale dataset designed for missing-person detection in forested environments under search-and-rescue scenarios. The dataset simulates realistic field conditions with varied poses (standing, sitting, lying) and visibility levels (20%, 40%, 70%, 100%). Images were captured using RGB sensors from ground-level and low-altitude perspectives.
|
|
|
|
|
## Supported Tasks |
|
|
|
|
|
- Object Detection |
|
|
- Search and Rescue Benchmarking |
|
|
- Robust Detection under Dense Canopy Conditions |
|
|
|
|
|
## Languages |
|
|
|
|
|
- Visual data only (no textual data) |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
### Data Fields |
|
|
|
|
|
- **image**: RGB image (Electro-Optical modality) |
|
|
- **annotations**: COCO-style bounding boxes with the following attributes: |
|
|
- Bounding box coordinates |
|
|
- Category: `person` |
|
|
  - Visibility ratio (20%, 40%, 70%, or 100%)
|
|
- Pose (standing, sitting, lying) |
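
As a rough sketch, a single COCO-style annotation record might look like the following. The `attributes`, `visibility`, and `pose` key names, along with the file layout, are assumptions based on the field descriptions above; inspect the actual `train.json` for the exact schema:

```python
# Hypothetical ForestPersons annotation record in COCO layout.
# The "attributes" sub-keys are assumed, not verified against train.json.
annotation = {
    "id": 1,
    "image_id": 42,
    "category_id": 1,                    # category "person"
    "bbox": [120.0, 80.0, 45.0, 130.0],  # COCO convention: [x, y, width, height]
    "area": 45.0 * 130.0,
    "iscrowd": 0,
    "attributes": {
        "visibility": 40,    # one of 20, 40, 70, 100
        "pose": "sitting",   # standing / sitting / lying
    },
}

# COCO bboxes are [x, y, w, h]; many detectors expect [x1, y1, x2, y2].
x, y, w, h = annotation["bbox"]
xyxy = [x, y, x + w, y + h]
```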
|
|
|
|
|
### Data Splits |
|
|
|
|
|
| Split | # Images | # Annotations | |
|
|
|--------------|----------|---------------| |
|
|
| Train | 67,686 | 145,816 | |
|
|
| Validation | 18,243 | 37,395 | |
|
|
| Test | 10,553 | 20,867 | |
|
|
|
|
|
Total images: 96,482 |
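
The split sizes in the table can be sanity-checked in a couple of lines:

```python
# Image counts per split, taken from the table above.
splits = {"train": 67_686, "validation": 18_243, "test": 10_553}
total_images = sum(splits.values())
print(total_images)  # 96482
```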
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Collection Process |
|
|
|
|
|
Data was collected during controlled simulations of missing persons in forested areas, using human subjects posing realistically. All images were taken from heights of 1.5m to 2.0m, mimicking UAV perspectives, and were captured with: |
|
|
- GoPro HERO9 Black |
|
|
- Sony A57 with 24–70mm lens
|
|
- See3CAM industrial camera |
|
|
|
|
|
Tripods were employed when drone use was impractical for safety reasons. |
|
|
|
|
|
### Annotation Process |
|
|
|
|
|
Annotations were manually created using [COCO Annotator](https://github.com/jsbroks/coco-annotator) by trained annotators. |
|
|
|
|
|
### Note on Indexing |
|
|
|
|
|
Please note that there is no sample with index 311 in this dataset. This index was intentionally skipped during dataset construction due to internal filtering steps. This does not affect dataset integrity or model training in any way. |
|
|
|
|
|
## Usage Example |
|
|
|
|
|
### (Recommended) Full Download – COCO Format Ready
|
|
|
|
|
```bash |
|
|
# Clone the dataset repo (includes CSV + annotations.zip + dataset.py) |
|
|
git lfs install |
|
|
git clone https://huggingface.co/datasets/etri/ForestPersons |
|
|
cd ForestPersons |
|
|
|
|
|
# Download and extract all images (already included in the repo) |
|
|
# Structure: images/{folder}/{image}.jpg |
|
|
|
|
|
# Unzip COCO-style annotations |
|
|
unzip annotations.zip |
|
|
|
|
|
# Resulting directory:
# ├── images/
# └── annotations/
#     ├── train.json
#     ├── val.json
#     └── test.json
|
|
|
|
|
``` |
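
Once extracted, the annotation files can be read with the standard `json` module (or `pycocotools`). A minimal sketch of grouping annotations by image, using a tiny in-memory stand-in with the same COCO layout in place of `annotations/train.json` (the file names here are hypothetical):

```python
from collections import defaultdict

# In practice: coco = json.load(open("annotations/train.json"))
# Here, a tiny in-memory stand-in with the same COCO layout.
coco = {
    "images": [
        {"id": 1, "file_name": "images/seq01/000001.jpg"},
        {"id": 2, "file_name": "images/seq01/000002.jpg"},
    ],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 1, "bbox": [10, 20, 30, 60]},
        {"id": 11, "image_id": 1, "category_id": 1, "bbox": [50, 15, 25, 70]},
        {"id": 12, "image_id": 2, "category_id": 1, "bbox": [5, 5, 40, 90]},
    ],
    "categories": [{"id": 1, "name": "person"}],
}

# Group annotations per image id, as a detector's data loader would.
anns_by_image = defaultdict(list)
for ann in coco["annotations"]:
    anns_by_image[ann["image_id"]].append(ann)

counts = {img["file_name"]: len(anns_by_image[img["id"]])
          for img in coco["images"]}
print(counts)
```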
|
|
|
|
|
### Visualize One Sample |
|
|
```python |
|
|
import requests
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from io import BytesIO
from datasets import load_dataset

# Load the dataset from Hugging Face
dataset = load_dataset("etri/ForestPersons", split="validation")
sample = dataset[0]

# Build the image URL
BASE_URL = "https://huggingface.co/datasets/etri/ForestPersons/resolve/main/"
image_url = BASE_URL + sample["file_name"]

# Download the image (this may take a moment)
response = requests.get(image_url)
if response.status_code == 200:
    image = Image.open(BytesIO(response.content))

    # Draw the image
    fig, ax = plt.subplots()
    ax.imshow(image)

    # Bounding box coordinates (COCO-style: x, y, width, height)
    x = sample["bbox_x"]
    y = sample["bbox_y"]
    w = sample["bbox_w"]
    h = sample["bbox_h"]

    # Draw the bounding box
    rect = patches.Rectangle((x, y), w, h, linewidth=2, edgecolor='red', facecolor='none')
    ax.add_patch(rect)

    # Draw the label above the bounding box
    label = sample["category_name"]
    ax.text(x, y - 5, label, fontsize=10, color='white', backgroundcolor='red', verticalalignment='bottom')

    plt.axis("off")
    plt.show()
else:
    print(f"Failed to load image: {image_url}")
|
|
``` |
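
Beyond single-sample visualization, rows can be filtered by pose and visibility before plotting, e.g. to isolate the hardest cases. The `pose` and `visibility` column names below are assumptions; verify them against the actual dataset features (`dataset.features`) before use:

```python
# Sketch: filter annotation rows by pose/visibility.
# With the real dataset this would be:
#   dataset.filter(lambda r: r["pose"] == "lying" and r["visibility"] <= 40)
# Here, a small in-memory stand-in with assumed column names.
rows = [
    {"file_name": "a.jpg", "pose": "lying",    "visibility": 20},
    {"file_name": "b.jpg", "pose": "standing", "visibility": 100},
    {"file_name": "c.jpg", "pose": "lying",    "visibility": 70},
]

# Hardest cases for a detector: lying pose with low visibility.
hard = [r for r in rows if r["pose"] == "lying" and r["visibility"] <= 40]
print([r["file_name"] for r in hard])  # ['a.jpg']
```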
|
|
|
|
|
## License |
|
|
|
|
|
The ForestPersons Dataset is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). |
|
|
|
|
|
Under this license, you may use, share, and adapt the dataset for non-commercial purposes, provided you give appropriate credit and distribute any derivatives under the same license. |
|
|
|
|
|
For full license terms, please refer to the [LICENSE](./LICENSE) file. |
|
|
|
|
|
If you have questions regarding the dataset or its usage, please contact: |
|
|
|
|
|
**[email protected]** |
|
|
|
|
|
## Additional Terms Regarding Trained Models |
|
|
|
|
|
Any AI models, algorithms, or systems trained, fine-tuned, or developed using the ForestPersons Dataset are strictly limited to non-commercial use. |
|
|
|
|
|
|
|
|
## Disclaimer |
|
|
|
|
|
The ForestPersons Dataset is provided "as is" without any warranty of any kind, either express or implied, including but not limited to warranties of merchantability, fitness for a particular purpose, and non-infringement. |
|
|
|
|
|
The authors and affiliated institutions shall not be held liable for any damages arising from the use of the dataset. |
|
|
|
|
|
## Citation Information |
|
|
If you use this dataset, please cite:
|
|
```bibtex |
|
|
@misc{kim2025forestpersons, |
|
|
title = {ForestPersons: A Large-Scale Dataset for Under-Canopy Missing Person Detection}, |
|
|
author = {Deokyun Kim and Jeongjun Lee and Jungwon Choi and Jonggeon Park and Giyoung Lee and Yookyung Kim and Myungseok Ki and Juho Lee and Jihun Cha}, |
|
|
year = {2025}, |
|
|
note = {Manuscript in preparation},
|
|
url = {https://huggingface.co/datasets/etri/ForestPersons} |
|
|
} |
|
|
``` |
|
|
Deokyun Kim, Jeongjun Lee, Jungwon Choi, and Jonggeon Park contributed equally to this work.
|
|
|
|
|
## Acknowledgments |
|
|
|
|
|
This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-II220021, Development of Core Technologies for Autonomous Searching Drones).