# Munch-1 Hashed Index - Lightweight Audio Reference Dataset

[![Original Dataset](https://img.shields.io/badge/🤗%20Original-Munch--1-blue)](https://huggingface.co/datasets/humair025/munch-1) [![Hashed Index](https://img.shields.io/badge/🤗%20Index-hashed__data-green)](https://huggingface.co/datasets/humair025/hashed_data_munch_1) [![Size](https://img.shields.io/badge/Size-~1GB-brightgreen)]() [![Original Size](https://img.shields.io/badge/Original-3.28TB-orange)]() [![Space Saved](https://img.shields.io/badge/Space%20Saved-99.97%25-success)]()

## 📖 Overview

**Munch-1 Hashed Index** is a lightweight reference dataset that provides SHA-256 hashes for all audio files in the [Munch-1 Urdu TTS Dataset](https://huggingface.co/datasets/humair025/munch-1). Instead of storing 3.28 TB of raw audio, this index stores only metadata and cryptographic hashes, enabling:

- ✅ **Fast duplicate detection** across 3.86M+ audio samples
- ✅ **Efficient dataset exploration** without downloading terabytes
- ✅ **Quick metadata queries** (voice distribution, text stats, etc.)
- ✅ **Selective audio retrieval** - download only what you need
- ✅ **Storage efficiency** - 99.97% space reduction (3.28 TB → ~1 GB)

### 🔗 Related Datasets

- **Original Dataset**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1) - Full audio dataset (3.28 TB)
- **This Index**: [humair025/hashed_data_munch_1](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Hashed reference (~1 GB)

---

## 🎯 What Problem Does This Solve?

### The Challenge

The original [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1) contains:

- 📊 **3,856,500 audio-text pairs**
- 💾 **3.28 TB total size**
- 📦 **~7,714 separate parquet files** (~400 MB each)

This makes it difficult to:

- ❌ Quickly check if specific audio exists
- ❌ Find duplicate audio samples
- ❌ Explore metadata without downloading everything
- ❌ Work on limited bandwidth/storage

### The Solution

This hashed index provides:

- ✅ **All metadata** (text, voice, timestamps) without audio bytes
- ✅ **SHA-256 hashes** for every audio file (unique fingerprint)
- ✅ **File references** (which parquet contains each audio)
- ✅ **Fast queries** - search 3.86M records in seconds
- ✅ **Retrieve on demand** - download only specific audio when needed

---

## 🚀 Quick Start

### Installation

```bash
pip install datasets pandas
```

### Basic Usage

```python
from datasets import load_dataset
import pandas as pd

# Load the entire hashed index (fast - only ~1 GB!)
ds = load_dataset("humair025/hashed_data_munch_1", split="train")
df = pd.DataFrame(ds)

print(f"Total records: {len(df)}")
print(f"Unique audio hashes: {df['audio_bytes_hash'].nunique()}")
print(f"Voices: {df['voice'].unique()}")
```
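If you already have an audio clip and want to know whether it exists in Munch-1, you can hash its raw bytes and look the hash up in the index. A minimal sketch, continuing from the snippet above (which defines `df`); the path `my_audio.pcm` is a hypothetical local file, and the lookup only succeeds if its bytes match the original `audio_bytes` byte-for-byte:

```python
import hashlib

# Build a set of all known hashes once, for O(1) membership checks
known_hashes = set(df['audio_bytes_hash'])

# Hash a local file's raw bytes (hypothetical path; must use the same
# encoding as the original audio_bytes to produce the same digest)
with open("my_audio.pcm", "rb") as f:
    candidate_hash = hashlib.sha256(f.read()).hexdigest()

if candidate_hash in known_hashes:
    match = df[df['audio_bytes_hash'] == candidate_hash].iloc[0]
    print(f"Already in Munch-1: id={match['id']}, file={match['parquet_file_name']}")
else:
    print("Not found in the index")
```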
### Find Duplicates

```python
# Check for duplicate audio
duplicates = df[df.duplicated(subset=['audio_bytes_hash'], keep=False)]

if len(duplicates) > 0:
    print(f"⚠️ Found {len(duplicates)} duplicate rows")
    print(f"   Unique audio files: {df['audio_bytes_hash'].nunique()}")
    print(f"   Redundancy: {(1 - df['audio_bytes_hash'].nunique()/len(df))*100:.2f}%")
else:
    print("✅ No duplicates found!")
```

### Search by Voice

```python
# Find all "ash" voice samples
ash_samples = df[df['voice'] == 'ash']
print(f"Ash voice samples: {len(ash_samples)}")

# Get file containing first ash sample
first_ash = ash_samples.iloc[0]
print(f"File: {first_ash['parquet_file_name']}")
print(f"Text: {first_ash['text']}")
```

### Search by Text

```python
# Find audio for specific text
query = "یہ ایک نمونہ"
matches = df[df['text'].str.contains(query, na=False)]
print(f"Found {len(matches)} matches")
```

### Retrieve Original Audio

```python
from datasets import load_dataset as load_original
import numpy as np
from scipy.io import wavfile
import io

def get_audio_by_hash(audio_hash, index_df):
    """Retrieve original audio bytes using the hash"""
    # Find the row with this hash
    row = index_df[index_df['audio_bytes_hash'] == audio_hash].iloc[0]

    # Download only the specific parquet file containing this audio
    ds = load_original(
        "humair025/munch-1",
        data_files=[row['parquet_file_name']],
        split="train"
    )

    # Find matching row by ID
    for audio_row in ds:
        if audio_row['id'] == row['id']:
            return audio_row['audio_bytes']
    return None

# Example: Get audio for first row
row = df.iloc[0]
audio_bytes = get_audio_by_hash(row['audio_bytes_hash'], df)

# Convert to WAV and play
def pcm16_to_wav(pcm_bytes, sample_rate=22050):
    audio_array = np.frombuffer(pcm_bytes, dtype=np.int16)
    wav_io = io.BytesIO()
    wavfile.write(wav_io, sample_rate, audio_array)
    wav_io.seek(0)
    return wav_io

wav_io = pcm16_to_wav(audio_bytes)
# In Jupyter: IPython.display.Audio(wav_io, rate=22050)
```
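If you prefer not to go through `load_dataset` for a single source file, you can fetch just the one parquet with `huggingface_hub` and read it with pandas. A minimal sketch, assuming the `parquet_file_name` values correspond to the files' paths inside the munch-1 repository (adjust the path if they sit in a subfolder):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

def get_audio_via_pandas(index_row):
    """Download one source parquet and extract a single row's audio bytes."""
    local_path = hf_hub_download(
        repo_id="humair025/munch-1",
        filename=index_row['parquet_file_name'],  # assumed to be the in-repo path
        repo_type="dataset",
    )
    source_df = pd.read_parquet(local_path)
    match = source_df[source_df['id'] == index_row['id']]
    return match.iloc[0]['audio_bytes'] if len(match) else None

audio_bytes = get_audio_via_pandas(df.iloc[0])
```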
---

## 📊 Dataset Structure

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `id` | int | Original paragraph ID from source dataset |
| `parquet_file_name` | string | Source file in [munch-1](https://huggingface.co/datasets/humair025/munch-1) dataset |
| `text` | string | Original Urdu text |
| `transcript` | string | TTS transcript (may differ from input) |
| `voice` | string | Voice used (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan) |
| `audio_bytes_hash` | string | SHA-256 hash of audio_bytes (64 hex chars) |
| `audio_size_bytes` | int | Size of original audio in bytes |
| `timestamp` | string | ISO timestamp of generation (nullable) |
| `error` | string | Error message if generation failed (nullable) |

### Example Row

```python
{
    'id': 42,
    'parquet_file_name': 'tts_data_20251203_130314_83ab0706.parquet',
    'text': 'یہ ایک نمونہ متن ہے۔',
    'transcript': 'یہ ایک نمونہ متن ہے۔',
    'voice': 'ash',
    'audio_bytes_hash': 'a3f7b2c8e9d1f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9',
    'audio_size_bytes': 52340,
    'timestamp': '2025-12-03T13:03:14.123456',
    'error': None
}
```
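After loading the index, it can be worth sanity-checking that the frame matches the schema above before building anything on top of it. A minimal sketch (the column list and the 64-hex-character hash format come from the field table; the check itself is not part of the dataset):

```python
EXPECTED_COLUMNS = {
    'id', 'parquet_file_name', 'text', 'transcript', 'voice',
    'audio_bytes_hash', 'audio_size_bytes', 'timestamp', 'error',
}

missing = EXPECTED_COLUMNS - set(df.columns)
assert not missing, f"Missing columns: {missing}"

# Every hash should be a lowercase 64-character hex string (hexdigest output)
malformed = ~df['audio_bytes_hash'].str.match(r'^[0-9a-f]{64}$')
print(f"Malformed hashes: {malformed.sum()}")
```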
---

## 🎯 Use Cases

### 1. **Dataset Quality Analysis**

```python
# Check for duplicates
unique_ratio = df['audio_bytes_hash'].nunique() / len(df)
print(f"Unique audio ratio: {unique_ratio*100:.2f}%")

# Analyze voice distribution
voice_dist = df['voice'].value_counts()
print(voice_dist)

# Find failed generations
failed = df[df['error'].notna()]
print(f"Failed generations: {len(failed)}")
```

### 2. **Efficient Data Exploration**

```python
# Browse dataset without downloading audio
print(df[['id', 'text', 'voice', 'audio_size_bytes']].head(20))

# Filter by criteria
short_audio = df[df['audio_size_bytes'] < 30000]
long_text = df[df['text'].str.len() > 200]
```

### 3. **Selective Download**

```python
# Download only specific voices
ash_files = df[df['voice'] == 'ash']['parquet_file_name'].unique()
ds = load_dataset("humair025/munch-1", data_files=list(ash_files))

# Download only short audio samples
small_files = df[df['audio_size_bytes'] < 40000]['parquet_file_name'].unique()
ds = load_dataset("humair025/munch-1", data_files=list(small_files[:10]))
```

### 4. **Deduplication Pipeline**

```python
# Create deduplicated subset
df_unique = df.drop_duplicates(subset=['audio_bytes_hash'], keep='first')

print(f"Original: {len(df)} rows")
print(f"Unique: {len(df_unique)} rows")
print(f"Duplicates removed: {len(df) - len(df_unique)}")

# Save unique references
df_unique.to_parquet('unique_audio_index.parquet')
```

### 5. **Hash-Prefix Lookup**

```python
# Group audio by hash prefix - a cheap prefilter for exact duplicates.
# Note: SHA-256 prefixes carry no information about perceptual similarity;
# only a full-hash match indicates byte-identical audio.
target_hash = df.iloc[0]['audio_bytes_hash']
prefix = target_hash[:8]

candidates = df[df['audio_bytes_hash'].str.startswith(prefix)]
print(f"Rows sharing prefix {prefix}: {len(candidates)}")

exact_matches = candidates[candidates['audio_bytes_hash'] == target_hash]
print(f"Byte-identical duplicates: {len(exact_matches)}")
```

---

## 📈 Dataset Statistics

### Size Comparison

| Metric | Original Dataset | Hashed Index | Reduction |
|--------|------------------|--------------|-----------|
| Total Size | 3.28 TB | ~1 GB | **99.97%** |
| Records | 3,856,500 | 3,856,500 | Same |
| Files | 7,714 parquet | Consolidated | **~7,700× fewer** |
| Download Time (100 Mbps) | ~73 hours | ~90 seconds | **~3,000×** |
| Load Time | Minutes-Hours | Seconds | **~100×** |
| Memory Usage | Cannot fit in RAM | ~2-3 GB RAM | **Fits easily** |

### Content Statistics

```
📊 Dataset Overview:
   Total Records: 3,856,500
   Total Files: 7,714 parquet files (~400 MB each)
   Voices: 13 (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
   Language: Urdu (primary)
   Avg Audio Size: ~50-60 KB per sample
   Avg Duration: ~3-5 seconds per sample
   Total Duration: ~3,200-4,800 hours of audio
```

---

## 🔧 Advanced Usage

### Batch Analysis

```python
# Analyze all hash files
from datasets import load_dataset
import pandas as pd

ds = load_dataset("humair025/hashed_data_munch_1", split="train")
df = pd.DataFrame(ds)

# Group by voice
voice_stats = df.groupby('voice').agg({
    'id': 'count',
    'audio_size_bytes': 'mean',
    'audio_bytes_hash': 'nunique'
}).rename(columns={
    'id': 'total_samples',
    'audio_size_bytes': 'avg_size',
    'audio_bytes_hash': 'unique_audio'
})

print(voice_stats)
```

### Cross-Reference with Original

```python
# Check if a hash exists in the original dataset
def verify_hash_exists(audio_hash, parquet_file):
    """Verify a hash actually exists in the original dataset"""
    from datasets import load_dataset
    import hashlib

    ds = load_dataset(
        "humair025/munch-1",
        data_files=[parquet_file],
        split="train"
    )

    for row in ds:
        computed_hash = hashlib.sha256(row['audio_bytes']).hexdigest()
        if computed_hash == audio_hash:
            return True
    return False

# Verify first entry
first_row = df.iloc[0]
exists = verify_hash_exists(
    first_row['audio_bytes_hash'],
    first_row['parquet_file_name']
)
print(f"Hash verified: {exists}")
```

### Export Unique Dataset

```python
# Create a new dataset with only unique audio
df_unique = df.drop_duplicates(subset=['audio_bytes_hash'], keep='first')

# Get list of unique parquet files needed
unique_files = df_unique['parquet_file_name'].unique()

print(f"Unique audio samples: {len(df_unique)}")
print(f"Files needed: {len(unique_files)} out of {df['parquet_file_name'].nunique()}")

# Calculate space savings
original_size = df['audio_size_bytes'].sum()
unique_size = df_unique['audio_size_bytes'].sum()
savings = (1 - unique_size/original_size) * 100
print(f"Space savings: {savings:.2f}%")
```

---

## 🛠️ How This Index Was Created

This dataset was generated using an automated pipeline; a simplified sketch of the core hashing step follows the list below.

### Processing Pipeline

1. **Batch Download**: Download 40 parquet files at a time from source
2. **Hash Computation**: Compute SHA-256 for each audio_bytes field
3. **Metadata Extraction**: Extract text, voice, and other metadata
4. **Save & Upload**: Save hash file, upload to HuggingFace
5. **Clean Up**: Delete local cache to save disk space
6. **Resume**: Track processed files, skip already-processed ones
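The following is a simplified sketch of what one batch iteration (steps 2-4) might look like, not the exact pipeline code; file paths are placeholders, and the source-column names are assumed to match the index schema:

```python
import hashlib
import os
import pandas as pd

def hash_batch(parquet_paths, output_path):
    """Compute SHA-256 hashes and metadata for a batch of downloaded source files."""
    records = []
    for path in parquet_paths:  # placeholder local paths to source parquet files
        source = pd.read_parquet(path)
        for _, row in source.iterrows():
            audio = row['audio_bytes']
            records.append({
                'id': row['id'],
                'parquet_file_name': os.path.basename(path),
                'text': row['text'],
                'transcript': row['transcript'],
                'voice': row['voice'],
                # Guard in case failed generations carry no audio bytes
                'audio_bytes_hash': hashlib.sha256(audio).hexdigest() if audio is not None else None,
                'audio_size_bytes': len(audio) if audio is not None else 0,
                'timestamp': row.get('timestamp'),
                'error': row.get('error'),
            })
    # One index file per batch, e.g. hashed_<batch>.parquet
    pd.DataFrame(records).to_parquet(output_path)
```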
### Pipeline Features

- ✅ **Resumable**: Checkpoint system tracks progress
- ✅ **Memory Efficient**: Processes in batches, clears cache
- ✅ **Error Tolerant**: Skips corrupted files, continues processing
- ✅ **No Duplicates**: Checks target repo to avoid reprocessing
- ✅ **Automatic Upload**: Streams results to HuggingFace

### Technical Details

```python
# Hash computation
import hashlib
audio_hash = hashlib.sha256(audio_bytes).hexdigest()

# Batch size: 40 files per batch
# Processing time: ~4-6 hours for full dataset
# Output: Multiple hashed_*.parquet files
```

---

## 📊 Performance Metrics

### Query Performance

```python
import time

# Load index
start = time.time()
ds = load_dataset("humair025/hashed_data_munch_1", split="train")
df = pd.DataFrame(ds)
print(f"Load time: {time.time() - start:.2f}s")

# Query by hash ('target_hash' is a placeholder)
start = time.time()
result = df[df['audio_bytes_hash'] == 'target_hash']
print(f"Hash lookup: {(time.time() - start)*1000:.2f}ms")

# Query by voice
start = time.time()
result = df[df['voice'] == 'ash']
print(f"Voice filter: {(time.time() - start)*1000:.2f}ms")
```

**Expected Performance**:

- Load full dataset: 10-30 seconds
- Hash lookup: < 10 milliseconds
- Voice filter: < 50 milliseconds
- Full dataset scan: < 5 seconds

---

## 🔗 Integration with Original Dataset

### Workflow Example

```python
# 1. Query the index (fast)
df = pd.DataFrame(load_dataset("humair025/hashed_data_munch_1", split="train"))
target_rows = df[df['voice'] == 'ash'].head(100)

# 2. Get unique parquet files
files_needed = target_rows['parquet_file_name'].unique()

# 3. Download only needed files (selective)
from datasets import load_dataset
ds = load_dataset(
    "humair025/munch-1",
    data_files=list(files_needed),
    split="train"
)

# 4. Match by ID to get audio
#    (linear scan per target row - see the dict-based sketch below)
for idx, row in target_rows.iterrows():
    for audio_row in ds:
        if audio_row['id'] == row['id']:
            # Process audio_bytes
            audio = audio_row['audio_bytes']
            break
```
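For more than a handful of rows, the nested scan in step 4 gets slow because it re-reads the downloaded split once per target row. A small variant that builds an id → audio lookup in a single pass (same data, just reorganized; it assumes, as the loop above does, that `id` is unique within the downloaded files):

```python
# Build the lookup once: one pass over the downloaded rows
audio_by_id = {audio_row['id']: audio_row['audio_bytes'] for audio_row in ds}

# Each target row then becomes a constant-time dictionary access
for _, row in target_rows.iterrows():
    audio = audio_by_id.get(row['id'])
    if audio is None:
        continue  # id not present in the downloaded files
    # ... process audio bytes here
```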
---

## 📜 Citation

If you use this dataset in your research, please cite both the original dataset and this index:

### BibTeX

```bibtex
@dataset{munch_hashed_index_2025,
  title={Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS},
  author={humair025},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/hashed_data_munch_1}},
  note={Index of humair025/munch-1 dataset with SHA-256 audio hashes}
}

@dataset{munch_urdu_tts_2025,
  title={Munch-1: Large-Scale Urdu Text-to-Speech Dataset},
  author={humair025},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/munch-1}}
}
```

### APA Format

```
humair025. (2025). Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS [Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/hashed_data_munch_1

humair025. (2025). Munch-1: Large-Scale Urdu Text-to-Speech Dataset [Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/munch-1
```

### MLA Format

```
humair025. "Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS." Hugging Face, 2025, https://huggingface.co/datasets/humair025/hashed_data_munch_1.

humair025. "Munch-1: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025, https://huggingface.co/datasets/humair025/munch-1.
```

---

## 🤝 Contributing

### Report Issues

Found a problem? Please open an issue for:

- Missing hash files
- Incorrect metadata
- Hash mismatches
- Documentation improvements

### Suggest Improvements

We welcome suggestions for:

- Additional metadata fields
- Better indexing strategies
- Integration examples
- Use case documentation

---

## 📄 License

This index dataset inherits the license from the original [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1): **Creative Commons Attribution 4.0 International (CC-BY-4.0)**

You are free to:

- ✅ **Share** — copy and redistribute
- ✅ **Adapt** — remix, transform, build upon
- ✅ **Commercial use** — use commercially

Under the terms:

- 📝 **Attribution** — Give appropriate credit to the original dataset

---

## 🔗 Important Links

- 🎧 [**Original Audio Dataset**](https://huggingface.co/datasets/humair025/munch-1) - Full 3.28 TB audio
- 📊 [**This Hashed Index**](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Lightweight reference
- 💬 [**Discussions**](https://huggingface.co/datasets/humair025/hashed_data_munch_1/discussions) - Ask questions
- 🐛 [**Report Issues**](https://huggingface.co/datasets/humair025/hashed_data_munch_1/discussions) - Bug reports

---

## ❓ FAQ

### Q: Why use hashes instead of audio?

**A:** Hashes provide unique fingerprints for audio files while taking only 64 bytes (a 64-character hex string) versus ~50 KB per audio clip. This enables duplicate detection and fast queries without storing massive audio files.

### Q: Can I reconstruct audio from hashes?

**A:** No. SHA-256 is a one-way cryptographic hash. You must download the original audio from the [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1) using the file reference provided.

### Q: How accurate are the hashes?

**A:** SHA-256 has virtually zero collision probability. For all practical purposes, if two hashes match, the audio is identical (byte-for-byte).

### Q: How do I get the actual audio?

**A:** Use the `parquet_file_name` and `id` fields to locate and download the specific audio from the [original dataset](https://huggingface.co/datasets/humair025/munch-1). See the examples above.

### Q: Is this dataset complete?

**A:** Yes, this index covers all 3,856,500 rows across all 7,714 parquet files from the original Munch-1 dataset.

### Q: Can I contribute?

**A:** Yes! Help verify hashes, report inconsistencies, or suggest improvements via discussions.

---

## 🙏 Acknowledgments

- **Original Dataset**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1)
- **TTS Generation**: OpenAI-compatible models
- **Voices**: 13 high-quality voices (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
- **Infrastructure**: HuggingFace Datasets platform
- **Hashing**: SHA-256 cryptographic hash function

---

## 📝 Version History

- **v1.0.0** (December 2025): Initial release
  - Processed all 7,714 parquet files
  - 3,856,500 audio samples indexed
  - SHA-256 hashes computed for all audio
  - ~99.97% space reduction achieved

---

**Last Updated**: December 2025
**Status**: ✅ Complete

---

💡 **Pro Tip**: Start with this lightweight index to explore the dataset, then selectively download only the audio you need from the [original Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1)!