gsaltintas committed
Commit ccf3178 · verified · 1 Parent(s): 4fa1ed8

Upload folder using huggingface_hub

Files changed (5)
  1. README.md +8 -7
  2. merges.txt +0 -0
  3. tokenizer.json +0 -0
  4. tokenizer_config.json +0 -0
  5. vocab.json +0 -0
README.md CHANGED
@@ -1,7 +1,8 @@
  ---
  license: mit
  language:
- - arb # ISO 639-3 code or "und" if not identifiable
+ - arb
+ - fas #['arb_Arab', 'fas_Arab'] # ISO 639-3 code or "und" if not identifiable
  tags:
  - tokenizer
  - bpe
@@ -9,24 +10,24 @@ tags:
  - fineweb2
  ---

- # Byte-Level BPE Tokenizer: arb_Arab (16K)
+ # Byte-Level BPE Tokenizer: ['arb_Arab', 'fas_Arab'] (16K)

- A **Byte-Level BPE** tokenizer trained on **arb_Arab** data from Fineweb-2-HQ.
+ A **Byte-Level BPE** tokenizer trained on **['arb_Arab', 'fas_Arab']** data from Fineweb-2-HQ.

  ## Training Details

  | Parameter | Value |
  |-----------|-------|
  | Algorithm | Byte-Level BPE |
- | Language | `arb_Arab` |
+ | Language | `['arb_Arab', 'fas_Arab']` |
  | Target Vocab Size | 16,000 |
- | Final Vocab Size | 16,949 |
+ | Final Vocab Size | 16,960 |
  | Pre-tokenizer | custom:arb_Arab |
  | Number handling | ltr_3digit |
  | Contraction handling | True |
  | Normalizer | NONE |
  | Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
- | Training Shards | 2 |
+ | Training Shards | 4 |

  ## Usage

@@ -46,4 +47,4 @@ tokens = tokenizer.encode("Hello, world!")
  ## Sample Encoding
  | Text | Tokens | Token IDs |
  |------|--------|-----------|
- | `Hello, world! 12345 This is a test. こんにちは` | `H, ell, o, ,, Ġ, w, orld, !, Ġ, 123, 45, Ġ, This, Ġ, is, Ġ, a, Ġ, t, est` | `42, 3848, 81, 14, 223, 89, 10002, 3, 223, 16715, 4208, 223, 12697, 223, 901, 223, 67, 223, 86, 2704` |
+ | `Hello, world! 12345 This is a test. こんにちは` | `H, ell, o, ,, Ġ, w, orld, !, Ġ, 123, 45, Ġ, Th, is, Ġ, is, Ġ, a, Ġ, t` | `42, 5027, 81, 14, 223, 89, 12762, 3, 223, 16853, 5208, 223, 5728, 1147, 223, 1147, 223, 67, 223, 86` |
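Since the committed `tokenizer.json` fully defines the tokenizer, the updated sample encoding above can be reproduced locally. A minimal sketch using the `tokenizers` library, assuming a local checkout of this commit (file path relative to the repo root):

```python
from tokenizers import Tokenizer

# Load the byte-level BPE tokenizer shipped in this commit.
tok = Tokenizer.from_file("tokenizer.json")

# The sample string from the README's "Sample Encoding" table.
enc = tok.encode("Hello, world! 12345 This is a test. こんにちは")

print(enc.tokens)  # expected to start: ['H', 'ell', 'o', ',', 'Ġ', 'w', 'orld', '!', ...]
print(enc.ids)     # expected to start: [42, 5027, 81, 14, 223, 89, 12762, 3, ...]
```

Note that the table truncates at 20 tokens, so the full encoding continues past id `86` (the trailing `est.` and `こんにちは` tokens are not shown).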
merges.txt CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
The diff for this file is too large to render. See raw diff
 
vocab.json CHANGED
The diff for this file is too large to render. See raw diff
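Because `merges.txt` and `vocab.json` changed wholesale (their diffs are too large to render), a local sanity check can confirm the committed vocabulary matches the updated README. A minimal sketch, assuming `vocab.json` is the usual GPT-2-style token-string-to-id mapping and that the special tokens were added to the vocabulary itself:

```python
import json

with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)  # assumed: token string -> id mapping

# Should match the "Final Vocab Size" reported in the updated README: 16,960.
print(len(vocab))

# The four special tokens listed in the Training Details table.
for special in ["<s>", "</s>", "<pad>", "<unk>"]:
    print(special, vocab.get(special))
```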