# Sentence Segmentation

Test set for evaluating and improving Vietnamese sentence boundary detection (sent_tokenize) in underthesea.

## Problem

The current PunktSentenceTokenizer in underthesea fails on several Vietnamese-specific patterns, primarily in legal text where article titles are merged with sentence bodies and no punctuation marks the boundary.

## Current Results

| Category | Total | Correct | Accuracy |
|---|---:|---:|---:|
| title_content_merge | 38 | 0 | 0.0% |
| repeated_title | 13 | 0 | 0.0% |
| ellipsis | 1 | 0 | 0.0% |
| numeric_period | 1 | 0 | 0.0% |
| quoted_speech | 1 | 0 | 0.0% |
| abbreviation | 3 | 2 | 66.7% |
| article_header | 20 | 20 | 100.0% |
| article_reference | 1 | 1 | 100.0% |
| empty_input | 1 | 1 | 100.0% |
| multi_sentence | 20 | 20 | 100.0% |
| no_punctuation | 1 | 1 | 100.0% |
| single_sentence | 30 | 30 | 100.0% |
| TOTAL | 130 | 75 | 57.7% |

## Key Issues

1. **Title-content merge** (38 cases, 0% accuracy): Legal article titles like "Tội trốn thuế" are followed by the sentence body "Người nào thực hiện..." with no punctuation between them. sent_tokenize fails to detect this boundary.

2. **Repeated title** (13 cases, 0% accuracy): The pattern "X X là..." where the title is repeated as the subject of a definition, e.g. "Hợp đồng mượn tài sản Hợp đồng mượn tài sản là..."

3. **Ellipsis handling** (0% accuracy): A mid-sentence "..." causes an incorrect split.

4. **Numeric periods** (0% accuracy): Periods inside numbers like "1.500.000" can cause false sentence boundaries.
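
Of these, the repeated-title pattern is regular enough to detect mechanically: the sentence opens with a token sequence that is immediately repeated. A minimal stdlib sketch (the function name and the length bounds are illustrative choices, not part of underthesea):

```python
def split_repeated_title(text: str, min_len: int = 2, max_len: int = 12):
    """If the text starts with a token sequence that immediately repeats
    (the "X X là..." definition pattern), split it into [title, body]."""
    tokens = text.split()
    # Try longer titles first so "Hợp đồng mượn tài sản" wins over a shorter prefix.
    for n in range(min(max_len, len(tokens) // 2), min_len - 1, -1):
        if tokens[:n] == tokens[n:2 * n]:
            return [" ".join(tokens[:n]), " ".join(tokens[n:])]
    return [text]
```

A rule like this could run as a post-processing pass over Punkt's output; `min_len` guards against splitting on ordinary word reduplication.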

## Trained Punkt Model Results

We trained an NLTK PunktTrainer on Vietnamese text from four sources (Wikipedia, news, books, legal documents). The trained model fixes the punctuation-related issues but cannot address structural patterns such as title-content merge.
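
Punkt's core idea is unsupervised: a token that carries a trailing period in most of its occurrences is treated as an abbreviation rather than a sentence end. A heavily simplified stdlib sketch of that counting step (an illustration of the idea only, not NLTK's actual PunktTrainer logic; the 0.8 threshold is an arbitrary choice):

```python
from collections import Counter

def learn_abbreviations(corpus: str, threshold: float = 0.8, min_count: int = 2):
    """Collect tokens that occur with a trailing period in at least
    `threshold` of their occurrences -- likely abbreviations."""
    with_period = Counter()
    total = Counter()
    for token in corpus.split():
        bare = token.rstrip(".").lower()
        if not bare:  # skip standalone punctuation tokens like "."
            continue
        total[bare] += 1
        if token.endswith("."):
            with_period[bare] += 1
    return {
        t for t, n in total.items()
        if n >= min_count and with_period[t] / n >= threshold
    }
```

The real trainer additionally learns sentence starters and collocations (the 672 / 378 / 3264 counts reported for `punkt_params_trained.json` below), but the abbreviation statistic is the part that fixes the TS., PGS., TP. cases.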

| Category | Total | Baseline | Trained | Change |
|---|---:|---:|---:|---:|
| title_content_merge | 38 | 0 | 0 | 0 |
| repeated_title | 13 | 0 | 0 | 0 |
| ellipsis | 1 | 0 | 1 | +1 |
| numeric_period | 1 | 0 | 1 | +1 |
| quoted_speech | 1 | 0 | 0 | 0 |
| abbreviation | 3 | 2 | 2 | 0 |
| article_header | 20 | 20 | 20 | 0 |
| article_reference | 1 | 1 | 1 | 0 |
| empty_input | 1 | 1 | 1 | 0 |
| multi_sentence | 20 | 20 | 18 | -2 |
| no_punctuation | 1 | 1 | 1 | 0 |
| single_sentence | 30 | 30 | 30 | 0 |
| TOTAL | 130 | 75 | 75 | 0 |

Improvements: ellipsis handling (+1) and numeric period handling (+1). Regressions: multi_sentence (-2), a trade-off from the improved ellipsis handling ("..." followed by a new sentence) and from quote tokenization differences.

Conclusion: Punkt (trained or not) cannot solve the title-content merge and repeated-title patterns (51/130 failures = 39% of the test set), because these require structural understanding beyond punctuation disambiguation. A different approach is needed for these categories.
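
One possible structural approach, sketched here purely as a hypothesis (it is not part of underthesea or this repository): post-process each Punkt sentence and split before a clause-opening cue such as "Người nào", which opens the body in the legal title_content_merge examples above. The cue list is guessed from this card's examples and would need broader coverage in practice:

```python
# Cue phrases that open the body of a merged legal article sentence.
# Illustrative only; real coverage would require a much larger list.
BODY_CUES = ("Người nào",)

def split_title_body(sentence: str):
    """Split a merged "title + body" sentence before the first cue phrase,
    provided the cue is not already at the start."""
    for cue in BODY_CUES:
        idx = sentence.find(cue)
        if idx > 0:
            return [sentence[:idx].rstrip(), sentence[idx:]]
    return [sentence]
```

A cue lexicon handles only the legal domain; the repeated-title cases would need a separate repetition check, and a learned classifier over token features may generalize better than either.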

## Files

- `test_cases.json` — 130 test cases with input, expected output, category, and domain
- `evaluate.py` — Evaluation script (`--improved` flag for the trained model)
- `eval_results.json` — Detailed evaluation results
- `train_punkt.py` — Fetch data and train the Punkt model
- `punkt_params_trained.json` — Trained model parameters (672 abbreviations, 378 sentence starters, 3264 collocations)
- `sent_tokenize.py` — Tokenizer using the trained model

## Test Case Format

```json
{
  "id": "vlc-6200",
  "input": "Tội ngược đãi tù binh , hàng binh Người nào ngược đãi tù binh ...",
  "expected": [
    "Tội ngược đãi tù binh , hàng binh",
    "Người nào ngược đãi tù binh , hàng binh , thì bị phạt ..."
  ],
  "category": "title_content_merge",
  "domain": "legal",
  "issue": "Title merged with sentence body without boundary"
}
```
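
Given this format, the evaluation presumably compares tokenizer output against `expected` case by case. A minimal version of that exact-match loop (a sketch, not the actual `evaluate.py`; the tokenizer is passed in as a callable):

```python
from collections import defaultdict

def evaluate(cases, tokenize):
    """Exact-match evaluation: a case counts as correct only if the
    tokenizer reproduces the expected sentence list verbatim."""
    per_category = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    for case in cases:
        correct = tokenize(case["input"]) == case["expected"]
        per_category[case["category"]][0] += int(correct)
        per_category[case["category"]][1] += 1
    return {cat: (c, t) for cat, (c, t) in per_category.items()}
```

With the real data this would be called with `json.load(open("test_cases.json"))` and underthesea's `sent_tokenize`; exact matching is strict, so even a whitespace difference counts as a failure.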

## Usage

```shell
# Run baseline evaluation (underthesea)
python evaluate.py
python evaluate.py -v

# Run trained model evaluation
python evaluate.py --improved -v

# Train Punkt model from scratch
python train_punkt.py
```

## Data Source

Test cases are derived from undertheseanlp/UDD-1 across five domains: legal (VLC), news (UVN), Wikipedia (UVW), fiction (UVB-F), and non-fiction (UVB-N).

## Categories

| Category | Description | Count |
|---|---|---:|
| title_content_merge | Article title + body merged without punctuation | 38 |
| single_sentence | Normal sentence, should not be split | 30 |
| article_header | Article header like "Quyền X Y 1 ." | 20 |
| multi_sentence | Two concatenated sentences, should be split | 20 |
| repeated_title | "X X là..." definition pattern | 13 |
| abbreviation | TS., PGS., TP. should not cause splits | 3 |
| ellipsis | "..." should not split mid-sentence | 1 |
| numeric_period | Periods in numbers should not split | 1 |
| article_reference | "Điều N ." boundary | 1 |
| quoted_speech | Periods inside quotes | 1 |
| empty_input | Empty string input | 1 |
| no_punctuation | Text without punctuation | 1 |