A Byte-Level BPE tokenizer trained on FineWeb-2-HQ data for 19 languages (arb_Arab, ces_Latn, cmn_Hani, dan_Latn, deu_Latn, ell_Grek, fra_Latn, hun_Latn, ind_Latn, ita_Latn, jpn_Jpan, nld_Latn, pol_Latn, por_Latn, rus_Cyrl, spa_Latn, swe_Latn, tur_Latn, vie_Latn), plus the English FineWeb-Edu corpus (fw_edu).
| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Languages | arb_Arab, ces_Latn, cmn_Hani, dan_Latn, deu_Latn, ell_Grek, fra_Latn, fw_edu, hun_Latn, ind_Latn, ita_Latn, jpn_Jpan, nld_Latn, pol_Latn, por_Latn, rus_Cyrl, spa_Latn, swe_Latn, tur_Latn, vie_Latn |
| Target Vocab Size | 128,000 |
| Final Vocab Size | 128,715 |
| Pre-tokenizer | custom:boundless_bpe_renewed |
| Number handling | ltr_3digit |
| Contraction handling | False |
| Normalizer | NFC |
| Special Tokens | <s>, </s>, <pad>, <unk> |
| Training Shards | 40: fineweb_2_hq.<lang>.chunk.{00,01}.jsonl for each of the 19 FineWeb-2-HQ languages, plus fineweb_edu_100bt.chunk.{00,01}.jsonl for fw_edu |
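The `ltr_3digit` setting splits runs of digits left to right into groups of at most three digits before BPE is applied; the example tokenization below shows `12345` surfacing as the tokens `123` and `45`. A minimal sketch of that splitting rule, assuming a plain regex-based pass (the actual `custom:boundless_bpe_renewed` pre-tokenizer is more involved and is not reproduced here):

```python
import re

def split_numbers_ltr_3digit(text: str) -> list[str]:
    """Illustrative sketch of 'ltr_3digit' number handling:
    digit runs become left-to-right groups of at most three digits,
    non-digit spans are kept whole. Not the tokenizer's actual code.
    """
    # \d{1,3} matches greedily while scanning left to right, so
    # '12345' yields ['123', '45']; \D+ keeps everything in between.
    return re.findall(r"\d{1,3}|\D+", text)

print(split_numbers_ltr_3digit("12345"))          # ['123', '45']
print(split_numbers_ltr_3digit("Price: 1234567")) # ['Price: ', '123', '456', '7']
```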
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_boundless_AllL_128000")
tokens = tokenizer.encode("Hello, world!")
```
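Continuing from the snippet above, the token strings and IDs can be checked against the example table below; a sketch, assuming `encode` adds no special tokens when called with `add_special_tokens=False`:

```python
text = "Hello, world!"

# Byte-level token strings; 'Ġ' marks a leading space.
print(tokenizer.tokenize(text))
# Expected, per the example table: ['H', 'ello', ',', 'Ġworld', '!']

# IDs without special tokens, then a round trip back to text.
ids = tokenizer.encode(text, add_special_tokens=False)
print(ids)                    # expected: [42, 4226, 14, 7608, 3]
print(tokenizer.decode(ids))  # 'Hello, world!'
```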
Files in this repository:

- tokenizer.json: full HuggingFace tokenizer
- vocab.json: vocabulary mapping
- merges.txt: BPE merge rules

Example tokenization:

| Text | Tokens | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | H, ello, ,, Ġworld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġ, ãģĵ, ãĤĵãģ«, ãģ¡ãģ¯ | 42, 4226, 14, 7608, 3, 223, 29410, 5127, 4924, 591, 266, 2801, 16, 223, 1094, 104413, 124472 |
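The token strings are rendered in the byte-level alphabet used by byte-level BPE: `Ġ` stands for a leading space, and multi-byte UTF-8 characters surface as mapped byte sequences (for example, こんにちは appears as `ãģĵ`, `ãĤĵãģ«`, `ãģ¡ãģ¯`). A short sketch that reproduces the row above, again assuming `add_special_tokens=False` suppresses special tokens:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_boundless_AllL_128000")

text = "Hello, world! 12345 This is a test. こんにちは"
tokens = tokenizer.tokenize(text)
ids = tokenizer.encode(text, add_special_tokens=False)

# Pair each byte-level token string with its ID, as in the table.
for tok, tid in zip(tokens, ids):
    print(f"{tok!r:>14} -> {tid}")
```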
Command used to create this tokenizer:
```bash
python /home/gsa/tokenizers2/flexitok/tokenizer_training/train_tokenizers.py \
    algorithm=bpe \
    vocab_size=128000 \
    'langs=[arb_Arab,ces_Latn,cmn_Hani,dan_Latn,deu_Latn,ell_Grek,fra_Latn,fw_edu,hun_Latn,ind_Latn,ita_Latn,jpn_Jpan,nld_Latn,pol_Latn,por_Latn,rus_Cyrl,spa_Latn,swe_Latn,tur_Latn,vie_Latn]' \
    data_dir=/scratch/gsa/data/flexitok/ \
    output_dir=/scratch/gsa/trained_tokenizers \
    pretokenizer=custom:boundless_bpe_renewed \
    number_handling=ltr_3digit \
    add_numbers=true \
    handle_contractions=false \
    unicode_normalization=nfc \
    use_byte_level_regex=false \
    byte_fallback=false \
    strip_zero_width=false \
    cjk_char_split=false \
    cjk_char_coverage=0.999 \
    add_cjk_chars=true \
    max_lines=500_000 \
    hf.publish_to_hf=true \
    hf_repo_prefix=flexitok/ \
    hf.hf_repo_id=flexitok/bpe_boundless_AllL_128000 \
    'hf.collections=[flexitok/script-based-tokenizers]'
```