---
license: apache-2.0
pretty_name: KazakhTextDuplicates v2.0
language:
- kk
task_categories:
- sentence-similarity
- text-classification
- text-retrieval
size_categories:
- 100K<n<1M
tags:
- kazakh
- duplicates
- near-duplicate
- plagiarism-detection
- sentence-similarity
- sts
- low-resource-language
homepage: https://huggingface.co/datasets/Arailym-tleubayeva/KazakhTextDuplicatesv2.0
---
# KazakhTextDuplicates v2.0

KazakhTextDuplicates v2.0 is a large-scale dataset for duplicate detection, near-duplicate retrieval, semantic textual similarity (STS), and plagiarism detection in the Kazakh language. Version 2.0 significantly extends the dataset with:

- a large augmented training corpus (200K+ pairs)
- a continuous semantic similarity score (`similarity_score`)
- multiple difficulty levels of noisy duplicates
- a clean train/validation/test split with no identifier overlap

The dataset is designed for training and evaluating modern sentence embedding and retrieval models for low-resource languages.
## Overview

| Property | Value |
|---|---|
| Language | Kazakh (kk) |
| Total examples | 207,376 text pairs |
| Train | 146,072 |
| Validation | 16,231 |
| Test | 45,073 |
| License | Apache 2.0 |
| Similarity labels | Continuous (0.40–1.00) |
| Text length | Document-level (up to ~2900 tokens) |
## Primary Tasks

- Duplicate & near-duplicate detection
- Plagiarism detection
- Semantic Textual Similarity (STS)
- Dense retrieval & re-ranking
- Embedding model fine-tuning (SBERT, E5, BGE, multilingual LLMs)
## Dataset Structure

Each record consists of a pair of documents and metadata fields.

| Column | Description |
|---|---|
| `id` | Unique pair identifier |
| `content` | Original text |
| `modified_content` | Modified / duplicated text |
| `type_duplicate` | Type of semantic duplication |
| `similarity_score` | Continuous similarity score (0.40–1.00) |
| `category` | Text domain/source |
| `language` | Language code (kk) |
| `content_len_tokens` | Token length of the original text |
| `modified_content_len_tokens` | Token length of the modified text |
| `split` | `train`, `validation`, or `test` |
## Files in This Dataset

### 1️⃣ `KazakhTextDuplicates_v2.csv` — full dataset

Contains all 207,376 pairs with complete metadata.

Use cases:

- Duplicate type classification
- Near-duplicate & plagiarism detection
- Document-level retrieval (FAISS / ANN)
- Corpus-level analysis

### 2️⃣ `KazakhTextDuplicates_v2_STS.csv` — STS-ready dataset

Prepared specifically for semantic textual similarity regression.

Columns:

- `sentence1`
- `sentence2`
- `score`
- `type_duplicate`
- `split`

Compatible with Sentence-Transformers and similar frameworks.
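As a minimal sketch of how the STS file can be consumed, the helper below reads the CSV with the standard library and returns `(sentence1, sentence2, score)` triples for one split, ready to wrap in a framework-specific example type. The column names come from the card; the function name is illustrative.

```python
import csv

def load_sts_split(path, split="train"):
    """Read the STS CSV and return (sentence1, sentence2, score) triples
    for one split. Column names follow the dataset card."""
    triples = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["split"] != split:
                continue
            triples.append((row["sentence1"], row["sentence2"], float(row["score"])))
    return triples
```

Each triple maps directly to, e.g., a Sentence-Transformers training example with a cosine-similarity regression objective.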
## Duplicate Types and Similarity Scores

The dataset includes 7 types of semantic duplication, covering a wide range of difficulty:

| Type | Description | Score |
|---|---|---|
| `exact` | Fully identical texts | 1.00 |
| `noisy_soft` | Light noise, meaning preserved | 0.90 |
| `paraphrase` | Deep paraphrase, meaning preserved | 0.85 |
| `noisy_medium` | Moderate corruption | 0.75 |
| `contextual` | Partial reformulation | 0.70 |
| `noisy_hard` | Strong corruption | 0.55 |
| `partial` | Limited content overlap | 0.40 |

This hierarchy enables fine-grained semantic learning and hard-negative mining.
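The type-to-score mapping above can also drive a binary duplicate/non-duplicate view of the data. A minimal sketch, where the 0.75 cut-off is an illustrative choice rather than an official one:

```python
# Nominal similarity score for each duplicate type, as listed on the card.
TYPE_SCORES = {
    "exact": 1.00,
    "noisy_soft": 0.90,
    "paraphrase": 0.85,
    "noisy_medium": 0.75,
    "contextual": 0.70,
    "noisy_hard": 0.55,
    "partial": 0.40,
}

def is_duplicate(type_duplicate, threshold=0.75):
    """Binarize a pair for classification; the threshold is illustrative."""
    return TYPE_SCORES[type_duplicate] >= threshold
```

Lowering the threshold pulls `contextual` and `noisy_hard` pairs into the positive class, which is one way to generate progressively harder training regimes.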
## Dataset Splits

| Split | Size | Description |
|---|---|---|
| Train | 146,072 | Model training |
| Validation | 16,231 | Hyperparameter tuning |
| Test | 45,073 | Held-out evaluation |

- No overlap of `id` values between splits
- Validation mirrors the training distribution
- The test split is suitable for robust and reproducible evaluation
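The no-overlap guarantee is easy to re-verify locally. A small sketch (the function name is ours) that returns any `id` values appearing in more than one split:

```python
def check_split_disjoint(split_ids):
    """Return the set of ids shared across splits.

    `split_ids` maps split name -> iterable of id values;
    an empty result means the splits are disjoint.
    """
    seen = {}       # id -> first split it was seen in
    overlaps = set()
    for split, ids in split_ids.items():
        for pid in ids:
            if pid in seen and seen[pid] != split:
                overlaps.add(pid)
            seen[pid] = split
    return overlaps
```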
## Statistics

### Class distribution (full dataset)

- `partial` — 28.1%
- `exact` — 15.7%
- `contextual` — 15.6%
- `noisy_soft` — 12.5%
- `noisy_medium` — 12.5%
- `noisy_hard` — 12.5%
- `paraphrase` — 3.1%

### Document length (tokens)

| Metric | content | modified_content |
|---|---|---|
| Mean | 288 | 272 |
| Median | 153 | 144 |
| Max | 2904 | 2909 |

This is a document-level dataset, not a short-sentence benchmark.
## Example Record

```json
{
  "id": "14366_noisy_hard_aug",
  "content": "Болашақ мұғалімдердің бақылау-бағалау құзыреттілігін қалыптастыру...",
  "modified_content": "Болашақ мұғалімдердің бақыла-убағалау ұқзыреттілігін...",
  "type_duplicate": "noisy_hard",
  "similarity_score": 0.55,
  "category": "TEXT",
  "language": "kk",
  "content_len_tokens": 182,
  "modified_content_len_tokens": 97,
  "split": "train"
}
```
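At evaluation time, an embedding model maps `content` and `modified_content` to vectors, and their cosine similarity is compared against `similarity_score` (0.55 for the record above). A minimal pure-Python cosine, shown with placeholder vectors rather than real model outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```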
## Limitations
- Some paraphrase and noise-based examples are automatically generated
- Very long documents may require models with extended context
- Domain distribution is biased toward academic and technical text
## Citation

If you use this dataset, please cite:

```
Tleubayeva, A. (2025). KazakhTextDuplicates v2.0: A Large-Scale Dataset for Near-Duplicate Detection and Semantic Similarity in Kazakh. https://huggingface.co/datasets/Arailym-tleubayeva/KazakhTextDuplicatesv2.0
```