tombagby variani committed on
Commit c66c2d1 · 1 Parent(s): 177e4fa

Update README (#4)


- Update README (ee5eac42c74315bba78d305bd9deb19fb5fe5063)


Co-authored-by: Ehsan Variani <[email protected]>

Files changed (1)
  1. README.md +24 -1
README.md CHANGED
@@ -114,7 +114,19 @@ configs:
 ---
 # Simple Voice Questions
 
-Simple Voice Questions (SVQ) is a set of short audio questions recorded in 26 locales across 17 languages under multiple audio conditions.
+Simple Voice Questions (SVQ) is a set of short audio questions recorded in 26 locales across 17 languages under multiple audio conditions. It serves as a core evaluation component for the **Massive Sound Embedding Benchmark (MSEB)**.
+
+## Technical Specifications
+
+| Feature | Details |
+| :--- | :--- |
+| **Locales** | 26 |
+| **Languages** | 17 |
+| **Total Speakers** | ~700 (capped at 250 recordings per speaker) |
+| **Audio Conditions** | Clean, Background Speech, Media, Traffic Noise |
+| **Gender Classes** | Female, Male, Non-binary, No answer |
+
+---
 
 ## Data Collection
 
@@ -145,3 +157,14 @@ Therefore, to avoid this significant data loss and provide the fullest possible
 as an undivided evaluation set. Users intending to train models with this data will need to devise and implement their own
 splitting strategies, keeping in mind the inherent trade-offs between data volume and strict speaker/text disjointness
 if they attempt to replicate such conditions.
+
+## Citation
+
+If you use this dataset, please cite the MSEB paper:
+
+```bibtex
+@inproceedings{heigoldmassive,
+  title={Massive Sound Embedding Benchmark (MSEB)},
+  author={Heigold, Georg and Variani, Ehsan and Bagby, Tom and Allauzen, Cyril and Ma, Ji and Kumar, Shankar and Riley, Michael},
+  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
+}
+```
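
The README's note that users must devise their own splitting strategies could be sketched as a deterministic speaker-disjoint split. This is an illustrative sketch only, not part of the dataset tooling; the `speaker_id` field name is an assumption about the example schema, not something the README confirms.

```python
import hashlib


def speaker_disjoint_split(examples, eval_fraction=0.2):
    """Partition examples into train/eval sets so no speaker appears in both.

    Each example is assumed to be a dict with a (hypothetical) 'speaker_id'
    field. Hashing the speaker ID gives a deterministic, order-independent
    assignment, so the split is reproducible across runs.
    """
    train, eval_set = [], []
    for ex in examples:
        digest = hashlib.sha256(str(ex["speaker_id"]).encode()).hexdigest()
        # Map the first 8 hex digits of the hash to [0, 1] and compare
        # against the requested eval fraction.
        bucket = int(digest[:8], 16) / 0xFFFFFFFF
        (eval_set if bucket < eval_fraction else train).append(ex)
    return train, eval_set
```

Text disjointness would need a further pass that drops overlapping question texts from one side; as the README notes, each additional disjointness constraint trades away data volume.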