---
license: apache-2.0
task_categories:
- question-answering
language:
- ur
size_categories:
- ~50GB
---

# Dataset Card for MS MARCO Urdu
This dataset is a translation of the [MS MARCO](https://microsoft.github.io/msmarco/Datasets.html) dataset, making it the first large-scale Urdu IR dataset.

# Dataset Details
The MS MARCO dataset consists of a collection of 8.8M passages, approximately 530k queries, and at least one relevant passage per query, selected by human annotators.
The development set of MS MARCO comprises more than 100k queries; however, a smaller set of 6,980 queries is used for evaluation in most published work.

The triples file (triples.train.small.urdu.tsv) is around 47 GB and is split into five parts, named aa to ae. Download all five parts and combine them before you start working.
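
If helpful, the parts can be recombined with a short Python sketch. The helper below is illustrative (not part of the dataset), and the part filenames in the commented usage are assumptions based on the aa-to-ae naming; substitute the actual names of the downloaded files.

```python
import shutil

def combine_parts(part_paths, out_path):
    """Concatenate the downloaded parts (in sorted order) into one file."""
    with open(out_path, "wb") as out:
        for part in sorted(part_paths):
            with open(part, "rb") as src:
                # Stream the bytes across so the ~47 GB result never sits in memory.
                shutil.copyfileobj(src, out)

# Hypothetical part names -- adjust to match the downloaded files:
# combine_parts(
#     ["triples_aa.tsv", "triples_ab.tsv", "triples_ac.tsv",
#      "triples_ad.tsv", "triples_ae.tsv"],
#     "triples.train.small.urdu.tsv",
# )
```

Sorting the paths keeps the parts in their original aa-to-ae order regardless of how the list is passed in.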

This dataset was created using the [IndicTrans2](https://github.com/AI4Bharat/IndicTrans2) translation model.

# Dataset Description

The Urdu version of the MS MARCO dataset comprises several files, each serving a distinct purpose in information retrieval tasks. Below is an overview of each file along with its structure:

### triples.train.small.urdu.tsv:
Purpose: Contains training data formatted as triples for ranking models.
Structure: Each line includes a query, a relevant passage (positive example), and a non-relevant passage (negative example), separated by tabs.
Sample Entry:

< query > \t < positive_passage > \t < negative_passage >

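As a minimal sketch of how this file can be consumed, the tab-separated lines can be streamed one at a time rather than loaded wholesale, since the file is large. `read_triples` is an illustrative helper name, not part of the dataset:

```python
def read_triples(path):
    """Yield (query, positive_passage, negative_passage) tuples from the triples TSV."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) == 3:  # skip any malformed lines defensively
                yield fields[0], fields[1], fields[2]
```

Because the function is a generator, a training loop can iterate over the 47 GB file with constant memory.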
### urdu_collection.tsv:
Purpose: Comprises the entire collection of Urdu documents in the dataset.
Structure: Each line contains a document ID and the corresponding document text, separated by a tab.
Sample Entry:

< doc_id > \t < document_text >

### urdu_collection_small.tsv:
Purpose: A subset of the full collection, containing 2,000 documents for preliminary experiments.
Structure: Similar to urdu_collection.tsv, with each line containing a document ID and the corresponding document text.
Sample Entry:

< doc_id > \t < document_text >

### urdu_queries.dev.small.tsv:
Purpose: Includes a small set of development (validation) queries in Urdu.
Structure: Each line contains a query ID and the corresponding query text, separated by a tab.
Sample Entry:

< query_id > \t < query_text >

### urdu_queries.dev.tsv:
Purpose: Provides a more extensive set of development queries for validating model performance.
Structure: Each line contains a query ID and the corresponding query text, separated by a tab.
Sample Entry:

< query_id > \t < query_text >

### urdu_queries.train.tsv:
Purpose: Contains the training queries in Urdu, each paired with relevant documents.
Structure: Each line includes a query ID and the corresponding query text, separated by a tab.
Sample Entry:

< query_id > \t < query_text >

These files collectively support the development and evaluation of retrieval models in Urdu, enabling research in multilingual information retrieval.

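Since the collection and query files above all share the same `< id > \t < text >` layout, a single loader covers them. This is a minimal illustrative sketch (`load_id_text_tsv` is not part of the dataset), assuming IDs are unique within a file:

```python
def load_id_text_tsv(path):
    """Load an <id> \t <text> TSV (collection or queries) into a dict."""
    mapping = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            # partition splits on the first tab only, so any tabs inside
            # the text itself are preserved.
            ident, _, text = line.rstrip("\n").partition("\t")
            mapping[ident] = text
    return mapping
```

For the full 8.8M-passage collection, a dict of this kind is feasible on a machine with ample RAM; otherwise the same loop can feed an on-disk index instead.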
# Bias, Risks, and Limitations

Because this is a machine-translated dataset, the limitations of the machine translation model (IndicTrans2) apply here.

# Dataset Card Authors
Umer Butt