Improve dataset card: Add metadata, paper abstract, links, and sample usage
#2
by nielsr (HF Staff) · opened

README.md CHANGED

---
language:
- en
- zh
license: cc-by-nc-sa-4.0
task_categories:
- audio-classification
- text-to-speech
tags:
- audio
- speech
- emotion
- bilingual
- tts
- s2s
- expressiveness
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:

# ExpressiveSpeech Dataset

[**Project Webpage**](https://freedomintelligence.github.io/ExpressiveSpeech/) | [**Paper**](https://huggingface.co/papers/2510.20513) | [**Code**](https://github.com/FreedomIntelligence/ExpressiveSpeech)

[**中文版 (Chinese Version)**](./README_zh.md)

## Paper Abstract

Recent speech-to-speech (S2S) models generate intelligible speech but still lack natural expressiveness, largely due to the absence of a reliable evaluation metric. Existing approaches, such as subjective MOS ratings, low-level acoustic features, and emotion recognition are costly, limited, or incomplete. To address this, we present DeEAR (Decoding the Expressive Preference of eAR), a framework that converts human preference for speech expressiveness into an objective score. Grounded in phonetics and psychology, DeEAR evaluates speech across three dimensions: Emotion, Prosody, and Spontaneity, achieving strong alignment with human perception (Spearman's Rank Correlation Coefficient, SRCC = 0.86) using fewer than 500 annotated samples. Beyond reliable scoring, DeEAR enables fair benchmarking and targeted data curation. It not only distinguishes expressiveness gaps across S2S models but also selects 14K expressive utterances to form ExpressiveSpeech, which improves the expressive score (from 2.0 to 23.4 on a 100-point scale) of S2S models. Demos and codes are available at this https URL.

## About The Dataset

**ExpressiveSpeech** is a high-quality, **expressive**, and **bilingual** (Chinese-English) speech dataset created to address the common lack of consistent vocal expressiveness in existing dialogue datasets.

## Key Features

- **High Expressiveness**: Achieves a high average expressiveness score of **80.2** as measured by **DeEAR**, far surpassing the original source datasets.
- **Bilingual Content**: Contains a balanced mix of Chinese and English speech, with a language ratio close to **1:1**.
- **Substantial Scale**: Comprises approximately **14,000 utterances**, totaling **51 hours** of audio.
- **Rich Metadata**: Includes ASR-generated text transcriptions, expressiveness scores, and source information for each utterance.

## Dataset Statistics

The high expressiveness of this dataset was achieved using our screening tool, **DeEAR**. If you need to build larger batches of high-expressiveness data yourself, you are welcome to use this tool. You can find it on our [GitHub](https://github.com/FreedomIntelligence/ExpressiveSpeech).

## Sample Usage

To get started with the DeEAR model for inference, follow the steps below from the [GitHub repository](https://github.com/FreedomIntelligence/ExpressiveSpeech):

### 1. Clone the Repository
```bash
git clone https://github.com/FreedomIntelligence/ExpressiveSpeech.git
cd ExpressiveSpeech
```

### 2. Setup
```bash
conda create -n DeEAR python=3.10
conda activate DeEAR
pip install -r requirements.txt
conda install -c conda-forge ffmpeg
```

### 3. Prepare
Download the DeEAR_Base model from [FreedomIntelligence/DeEAR_Base](https://huggingface.co/FreedomIntelligence/DeEAR_Base) and place it in the `models/DeEAR_Base/` directory.
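
If you prefer to script this step, the same checkpoint can be fetched with the `huggingface_hub` client. This is a minimal sketch; it only assumes that `models/DeEAR_Base/` is where `inference.py` expects the files, as described above:

```python
# Minimal sketch: download the DeEAR_Base checkpoint with huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="FreedomIntelligence/DeEAR_Base",  # model repo linked above
    local_dir="models/DeEAR_Base",             # location expected by inference.py
)
```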

### 4. Inference
```bash
python inference.py \
    --model_dir ./models \
    --input_path /path/to/audio_folder \
    --output_file /path/to/save/my_scores.jsonl \
    --batch_size 64
```
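
The dataset itself can also be loaded directly with the 🤗 `datasets` library. A minimal sketch, assuming this dataset is hosted as `FreedomIntelligence/ExpressiveSpeech` and uses the `default` config declared in the YAML header:

```python
# Minimal sketch: load ExpressiveSpeech via the Hugging Face datasets library.
# The repo id below is an assumption; replace it with this dataset's actual Hub path.
from datasets import load_dataset

ds = load_dataset("FreedomIntelligence/ExpressiveSpeech", "default")

# Inspect the available splits and a first example.
for split_name, split in ds.items():
    print(split_name, split.num_rows)

first_split = next(iter(ds.values()))
print(first_split[0])  # audio, transcription, expressiveness score, ...
```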

## Data Format

```
ExpressiveSpeech/
├── ...
└── metadata.jsonl
```

- **`metadata.jsonl`**: A JSONL file containing detailed information for each utterance; a short reading sketch follows this field list. The metadata includes:
  - `audio_path`: The relative path to the audio file.
  - `value`: The ASR-generated text transcription.
  - `emotion`: Emotion labels from the original datasets.
  - `expressiveness_scores`: The expressiveness score from the **DeEAR** model.
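
A minimal reading sketch, assuming one JSON object per line with the fields listed above and that `expressiveness_scores` stores the overall DeEAR score as a single number:

```python
import json

# Read metadata.jsonl (path relative to the dataset root) line by line.
records = []
with open("metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

# Example: keep the most expressive utterances.
# The >= 80 threshold is only illustrative; adapt the field access if the
# scores are stored per dimension rather than as a single number.
expressive = [r for r in records if r["expressiveness_scores"] >= 80]
print(f"{len(expressive)} utterances with DeEAR score >= 80")
```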

### JSONL Files Example