gsaltintas committed · verified
Commit fed3066 · 1 Parent(s): 5efb5f1

Upload folder using huggingface_hub
Files changed (5)
  1. README.md +8 -7
  2. merges.txt +0 -0
  3. tokenizer.json +0 -0
  4. tokenizer_config.json +0 -0
  5. vocab.json +0 -0
README.md CHANGED
@@ -1,7 +1,8 @@
 ---
 license: mit
 language:
-- ind # ISO 639-3 code or "und" if not identifiable
+- ind
+- vie #['ind_Latn', 'vie_Latn'] # ISO 639-3 code or "und" if not identifiable
 tags:
 - tokenizer
 - bpe
@@ -9,24 +10,24 @@ tags:
 - fineweb2
 ---
 
-# Byte-Level BPE Tokenizer: ind_Latn (16K)
+# Byte-Level BPE Tokenizer: ['ind_Latn', 'vie_Latn'] (16K)
 
-A **Byte-Level BPE** tokenizer trained on **ind_Latn** data from Fineweb-2-HQ.
+A **Byte-Level BPE** tokenizer trained on **['ind_Latn', 'vie_Latn']** data from Fineweb-2-HQ.
 
 ## Training Details
 
 | Parameter | Value |
 |-----------|-------|
 | Algorithm | Byte-Level BPE |
-| Language | `ind_Latn` |
+| Language | `['ind_Latn', 'vie_Latn']` |
 | Target Vocab Size | 16,000 |
-| Final Vocab Size | 16,961 |
+| Final Vocab Size | 16,959 |
 | Pre-tokenizer | custom:ind_Latn |
 | Number handling | ltr_3digit |
 | Contraction handling | True |
 | Normalizer | NFC |
 | Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
-| Training Shards | 2 |
+| Training Shards | 4 |
 
 ## Usage
 
@@ -46,4 +47,4 @@ tokens = tokenizer.encode("Hello, world!")
 ## Sample Encoding
 | Text | Tokens | Token IDs |
 |------|--------|-----------|
-| `Hello, world! 12345 This is a test. こんにちは` | `H, ello, ,, Ġw, orld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġ, ãģ, ĵ, ãĤ, ĵ, ãģ` | `42, 15107, 14, 429, 4639, 3, 223, 16038, 4529, 13915, 1153, 395, 7029, 16, 223, 9732, 244, 15716, 244, 9732` |
+| `Hello, world! 12345 This is a test. こんにちは` | `H, el, lo, ,, Ġw, orld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġ, ãģ, ĵ, ã, Ĥ` | `42, 324, 2155, 14, 505, 4659, 3, 223, 16876, 4702, 15780, 1555, 1333, 8184, 16, 223, 11148, 244, 162, 227` |
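The `Ġ` and `ã`/`ģ`/`ĵ` symbols in the sample encoding come from byte-level BPE's byte-to-unicode table: every UTF-8 byte is remapped to a printable pseudo-character before merges are learned, so a leading space surfaces as `Ġ` and each byte of `こ` as its own symbol. A minimal sketch of the standard GPT-2-style mapping — an assumption here, though the `Ġ` marker in the table strongly suggests this tokenizer uses the same convention:

```python
def bytes_to_unicode() -> dict[int, str]:
    """GPT-2 style byte -> printable-character table used by byte-level BPE."""
    # Bytes that are already printable are kept as-is...
    keep = (list(range(ord("!"), ord("~") + 1))
            + list(range(ord("¡"), ord("¬") + 1))
            + list(range(ord("®"), ord("ÿ") + 1)))
    chars = keep[:]
    # ...and the remaining bytes (controls, space, etc.) are shifted
    # into unused code points starting at 256.
    n = 0
    for b in range(256):
        if b not in keep:
            keep.append(b)
            chars.append(256 + n)
            n += 1
    return dict(zip(keep, map(chr, chars)))

table = bytes_to_unicode()
print(table[0x20])                       # space -> 'Ġ'
print([table[b] for b in "こ".encode()])  # -> ['ã', 'ģ', 'ĵ']
```

Because merges operate on these pseudo-characters, a space-prefixed word like ` world` shows up as `Ġw, orld`, and multi-byte characters can be split mid-codepoint, as in the `ã, Ĥ` pair at the end of the sample row.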
merges.txt CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
The diff for this file is too large to render. See raw diff
 
vocab.json CHANGED
The diff for this file is too large to render. See raw diff
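The `ltr_3digit` number-handling setting in the Training Details table is visible in the sample encoding, where `12345` is tokenized as `123, 45`: digit runs appear to be pre-split left-to-right into groups of at most three. A hypothetical pure-Python sketch of that pre-tokenization rule — the function name and exact behavior are assumptions, not this repo's implementation:

```python
import re

def split_digits_ltr(text: str, group: int = 3) -> list[str]:
    """Sketch of 'ltr_3digit' number handling: chunk each digit run
    left-to-right into groups of at most `group` digits."""
    out = []
    # Capture digit runs so they survive re.split as their own pieces.
    for piece in re.split(r"(\d+)", text):
        if piece.isdigit():
            out += [piece[i:i + group] for i in range(0, len(piece), group)]
        elif piece:
            out.append(piece)
    return out

print(split_digits_ltr("12345"))  # -> ['123', '45'], matching the sample row
```

Left-to-right grouping keeps the split deterministic regardless of the number's length, unlike right-to-left grouping, which would have produced `12, 345`.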