|
|
--- |
|
|
dataset_info: |
|
|
features: |
|
|
- name: channel_id |
|
|
dtype: string |
|
|
- name: video_id |
|
|
dtype: string |
|
|
- name: segment_id |
|
|
dtype: int64 |
|
|
- name: duration |
|
|
dtype: string |
|
|
- name: fps |
|
|
dtype: int64 |
|
|
- name: conversation |
|
|
list: |
|
|
- name: end_time |
|
|
dtype: float64 |
|
|
- name: speaker |
|
|
dtype: int64 |
|
|
- name: start_time |
|
|
dtype: float64 |
|
|
- name: text |
|
|
dtype: string |
|
|
- name: utterance_id |
|
|
dtype: int64 |
|
|
- name: words |
|
|
list: |
|
|
- name: end_time |
|
|
dtype: float64 |
|
|
- name: start_time |
|
|
dtype: float64 |
|
|
- name: word |
|
|
dtype: string |
|
|
- name: facial_expression |
|
|
list: |
|
|
- name: features |
|
|
sequence: float32 |
|
|
- name: frame |
|
|
dtype: int64 |
|
|
- name: utt_id |
|
|
dtype: int64 |
|
|
- name: body_language |
|
|
list: |
|
|
- name: features |
|
|
sequence: float32 |
|
|
- name: frame |
|
|
dtype: int64 |
|
|
- name: utt_id |
|
|
dtype: int64 |
|
|
- name: harmful_utterance_id |
|
|
sequence: int64 |
|
|
- name: speaker_bbox |
|
|
list: |
|
|
- name: bbox |
|
|
sequence: int64 |
|
|
- name: frame_id |
|
|
dtype: int64 |
|
|
splits: |
|
|
- name: train |
|
|
num_bytes: 144100517656 |
|
|
num_examples: 7985 |
|
|
- name: test |
|
|
num_bytes: 31918682474 |
|
|
num_examples: 1993 |
|
|
download_size: 166967732474 |
|
|
dataset_size: 176019200130 |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: train |
|
|
path: data/train-* |
|
|
- split: test |
|
|
path: data/test-* |
|
|
--- |
|
|
|
|
|
## Dataset Card for VENUS |
|
|
|
|
|
### Dataset Summary |
|
|
|
|
|
Data from: Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues |
|
|
|
|
|
``` |
|
|
@article{kim2025speaking, |
|
|
title={Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues}, |
|
|
author={Kim, Youngmin and Chung, Jiwan and Kim, Jisoo and Lee, Sunghyun and Lee, Sangkyu and Kim, Junhyeok and Yang, Cheoljong and Yu, Youngjae}, |
|
|
journal={arXiv preprint arXiv:2506.00958}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|
|
|
|
|
|
We provide VENUS, a large-scale multimodal video dataset for learning nonverbal communication cues from video-grounded dialogues.
|
|
|
|
|
Please cite our work if you find our data helpful. |
|
|
|
|
|
Our dataset collection pipeline and the model implementation that uses it are available at <a href='https://github.com/winston1214/nonverbal-conversation'>https://github.com/winston1214/nonverbal-conversation</a> |
|
|
|
|
|
### Dataset Statistics
|
|
|
|
|
| Split | Channels | Videos | Segments (10 min) | Frames (Nonverbal annotations) | Utterances | Words |
|:-----:|:--------:|:------:|:-----------------:|:------------------------------:|:----------:|:-----:|
| Train | ~ | ~ | ~ | ~ | ~ | ~ |
| Test | ~ | ~ | ~ | ~ | ~ | ~ |
|
|
|
|
|
|
|
|
### Language |
|
|
|
|
|
English |
|
|
|
|
|
### Other Versions
|
|
|
|
|
- **VENUS-1K**: <a href='https://huggingface.co/datasets/winston1214/VENUS-1K'>This link</a> |
|
|
- **VENUS-5K**: <a href='https://huggingface.co/datasets/winston1214/VENUS-5K'>This link</a> |
|
|
- **VENUS-25K**: <a href='https://huggingface.co/datasets/winston1214/VENUS-25K'>This link</a> |
|
|
- **VENUS-50K**: <a href=''>This link</a> ***(Coming Soon!)***
|
|
- **VENUS-100K** (Full): <a href=''>This link</a> ***(Coming Soon!)***
|
|
|
|
|
### Data Structure |
|
|
|
|
|
Here's an overview of our dataset structure: |
|
|
|
|
|
``` |
|
|
{ |
|
|
'channel_id': str, # YouTube channel ID |
|
|
'video_id': str, # Video ID |
|
|
'segment_id': int, # Segment ID within the video |
|
|
'duration': str, # Total segment duration (e.g., '00:11:00 ~ 00:21:00') |
|
|
'fps': int, # Frames per second |
|
|
|
|
|
'conversation': [ # Conversation information (consisting of multiple utterances) |
|
|
{ |
|
|
'utterance_id': int, # Utterance ID |
|
|
'speaker': int, # Speaker ID (represented as an integer) |
|
|
'text': str, # Full utterance text |
|
|
'start_time': float, # Start time of the utterance (in seconds) |
|
|
'end_time': float, # End time of the utterance (in seconds) |
|
|
'words': [ # Word-level information |
|
|
{ |
|
|
'word': str, # The word itself |
|
|
'start_time': float, # Word-level start time |
|
|
'end_time': float, # Word-level end time |
|
|
} |
|
|
] |
|
|
} |
|
|
], |
|
|
|
|
|
'facial_expression': [ # Facial expression features |
|
|
{ |
|
|
'utt_id': int, # ID of the utterance this expression is aligned to |
|
|
'frame': int, # Frame identifier |
|
|
'features': [float], # Facial feature vector (153-dimensional) |
|
|
} |
|
|
], |
|
|
|
|
|
'body_language': [ # Body language features |
|
|
{ |
|
|
'utt_id': int, # ID of the utterance this body language is aligned to |
|
|
'frame': int, # Frame identifier |
|
|
'features': [float], # Body movement feature vector (179-dimensional) |
|
|
} |
|
|
], |
|
|
|
|
|
'speaker_bbox': [ # speaker bounding boxes |
|
|
{ |
|
|
'frame_id': int, # Frame identifier |
|
|
'bbox': [int], # [x_top, y_top, x_bottom, y_bottom] |
|
|
} |
|
|
], |
|
|
'harmful_utterance_id': [int], # List of utterance IDs identified as harmful |
|
|
} |
|
|
|
|
|
``` |
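
As a quick orientation, the snippet below is a minimal sketch of how a single record following this schema can be navigated with the Hugging Face `datasets` library. Note that loading the full split downloads all of the data; a streaming variant is shown under Data Splits below.

```python
from datasets import load_dataset

# A minimal sketch of navigating one record; field names follow the schema above.
train_dataset = load_dataset("winston1214/VENUS-10K", split="train")
example = train_dataset[0]

# Segment-level metadata.
print(example["channel_id"], example["video_id"], example["segment_id"])
print(example["duration"], example["fps"])

# Utterances carry speaker IDs, timings, text, and word-level alignments.
for utt in example["conversation"][:2]:
    print(f"speaker {utt['speaker']} ({utt['start_time']:.2f}-{utt['end_time']:.2f}s): {utt['text']}")
    for w in utt["words"][:3]:
        print(f"  {w['word']}: {w['start_time']:.2f}-{w['end_time']:.2f}s")

# Per-frame nonverbal features are aligned to utterances via utt_id.
if example["facial_expression"]:
    face = example["facial_expression"][0]
    print(face["utt_id"], face["frame"], len(face["features"]))  # 153-dim facial vector
if example["body_language"]:
    body = example["body_language"][0]
    print(body["utt_id"], body["frame"], len(body["features"]))  # 179-dim body vector

# Speaker bounding boxes per frame: [x_top, y_top, x_bottom, y_bottom].
if example["speaker_bbox"]:
    print(example["speaker_bbox"][0]["frame_id"], example["speaker_bbox"][0]["bbox"])
```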
|
|
|
|
|
### Data Instances |
|
|
|
|
|
See the example structure above.
|
|
|
|
|
### Data Fields |
|
|
|
|
|
See the field descriptions in the Data Structure section above.
|
|
|
|
|
### Data Splits |
|
|
|
|
|
Data splits can be accessed as: |
|
|
```python |
|
|
from datasets import load_dataset |
|
|
train_dataset = load_dataset("winston1214/VENUS-10K", split="train")
|
|
test_dataset = load_dataset("winston1214/VENUS-10K", split="test")
|
|
``` |
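
Because the full download is large (the reported download size is roughly 167 GB), streaming mode may be more convenient for quick inspection or online iteration; a minimal sketch:

```python
from datasets import load_dataset

# Streaming fetches examples lazily instead of materializing the ~167 GB download.
train_stream = load_dataset("winston1214/VENUS-10K", split="train", streaming=True)

for example in train_stream:
    print(example["video_id"], len(example["conversation"]))
    break  # inspect only the first segment
```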
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
Full details are in the paper. |
|
|
|
|
|
### Source Data |
|
|
We collect natural videos from YouTube and annotate FLAME and SMPL-X parameters using EMOCAv2 and OSX, respectively.
|
|
|
|
|
### Initial Data Collection |
|
|
Full details are in the paper. |
|
|
|
|
|
### Annotations |
|
|
|
|
|
Full details are in the paper. |
|
|
|
|
|
### Annotation Process |
|
|
|
|
|
Full details are in the paper. |
|
|
|
|
|
### Who are the annotators? |
|
|
|
|
|
We used an automatic annotation method, and the primary annotator was Youngmin Kim, the first author of the paper. |
|
|
|
|
|
For any questions regarding the dataset, please contact us by <a href='mailto:[email protected]'>e-mail</a>.
|
|
|
|
|
### Considerations for Using the Data |
|
|
|
|
|
This dataset (VENUS) consists of 3D annotations of human subjects and text extracted from conversations in the videos. |
|
|
Please note that the dialogues are sourced from online videos and may include informal or culturally nuanced expressions. |
|
|
The dataset should be used with care, especially in applications involving human-facing interactions.
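
For example, the `harmful_utterance_id` field can be used to drop flagged utterances before using the dialogue text. The helper below is only an illustrative sketch (the name `drop_harmful_utterances` is ours, not part of the dataset tooling):

```python
def drop_harmful_utterances(example):
    """Remove utterances whose IDs appear in harmful_utterance_id."""
    harmful = set(example["harmful_utterance_id"])
    example["conversation"] = [
        utt for utt in example["conversation"] if utt["utterance_id"] not in harmful
    ]
    return example

# Usage with the datasets library:
# filtered_dataset = train_dataset.map(drop_harmful_utterances)
```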
|
|
|
|
|
### Licensing Information |
|
|
|
|
|
The annotations we provide are licensed under CC-BY-4.0. |
|
|
|