mrshlltaylor committed
Commit c013431
1 Parent(s): f97fcaf

Upload tokenizer

Files changed (4)
  1. README.md +199 -0
  2. special_tokens_map.json +7 -0
  3. tokenizer_config.json +12 -0
  4. vocab.json +1 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "0",
+   "mask_token": "a01",
+   "pad_token": "a0",
+   "sep_token": "a",
+   "unk_token": "a00"
+ }
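
The special tokens are unusual in that they reuse ordinary vocabulary strings ("0", "a", "a0", "a00", "a01") rather than reserved markers like `[CLS]` or `[SEP]`. A minimal sketch (assuming the JSON files from this commit sit in the working directory) that resolves each special token to its vocabulary id:

```python
import json

# Load two of the files uploaded in this commit.
with open("special_tokens_map.json") as f:
    special_tokens = json.load(f)
with open("vocab.json") as f:
    vocab = json.load(f)

# Each special token is an ordinary vocab entry; map role -> token -> id.
for role, token in sorted(special_tokens.items()):
    print(f"{role}: {token!r} -> id {vocab[token]}")

# Output, given the files as committed:
# cls_token: '0' -> id 2
# mask_token: 'a01' -> id 4
# pad_token: 'a0' -> id 1
# sep_token: 'a' -> id 3
# unk_token: 'a00' -> id 0
```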
tokenizer_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "added_tokens_decoder": {},
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "0",
+   "mask_token": "a01",
+   "max_len": null,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "a0",
+   "sep_token": "a",
+   "tokenizer_class": "FixedVocabTokenizer",
+   "unk_token": "a00"
+ }
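
Two values in this config are worth flagging. The huge `model_max_length` is not a real context limit: it is the `VERY_LARGE_INTEGER` sentinel that 🤗 transformers writes when no explicit limit was configured (the value is `int(1e30)` after binary floating-point rounding). And `tokenizer_class` names `FixedVocabTokenizer`, a custom class that is not part of this commit, so `AutoTokenizer` cannot resolve it unless that class is importable and registered on the loading side. A quick check of the sentinel:

```python
import json

with open("tokenizer_config.json") as f:
    config = json.load(f)

# int(1e30) rounds to 1000000000000000019884624838656 in binary floating
# point; transformers uses it to mean "no model_max_length was set".
assert config["model_max_length"] == int(1e30)
print(config["tokenizer_class"])  # FixedVocabTokenizer (custom, not included here)
```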
vocab.json ADDED
@@ -0,0 +1 @@
+ {"a00": 0, "a0": 1, "0": 2, "a": 3, "a01": 4, "1": 5, "a02": 6, "2": 7, "a03": 8, "3": 9, "a04": 10, "4": 11, "a05": 12, "5": 13, "a06": 14, "6": 15, "a07": 16, "7": 17, "a08": 18, "8": 19, "a09": 20, "9": 21, "a10": 22, "a1": 23, "a11": 24, "a12": 25, "a13": 26, "a14": 27, "a15": 28, "a16": 29, "a17": 30, "a18": 31, "a19": 32, "a20": 33, "a2": 34, "a21": 35, "a22": 36, "a23": 37, "a24": 38, "a25": 39, "a26": 40, "a27": 41, "a28": 42, "a29": 43, "a30": 44, "a3": 45, "a31": 46, "a32": 47, "a33": 48, "a34": 49, "a35": 50, "a36": 51, "a37": 52, "a38": 53, "a39": 54, "a40": 55, "a4": 56, "a41": 57, "a42": 58, "a43": 59, "a44": 60, "a45": 61, "a46": 62, "a47": 63, "a48": 64, "a49": 65, "a50": 66, "a5": 67, "a51": 68, "a52": 69, "a53": 70, "a54": 71, "a55": 72, "a56": 73, "a57": 74, "a58": 75, "a59": 76, "a60": 77, "a6": 78, "a61": 79, "a62": 80, "a63": 81, "a64": 82, "a65": 83, "a66": 84, "a67": 85, "a68": 86, "a69": 87, "a70": 88, "a7": 89, "a71": 90, "a72": 91, "a73": 92, "a74": 93, "a75": 94, "a76": 95, "a77": 96, "a78": 97, "a79": 98, "a80": 99, "a8": 100, "a81": 101, "a82": 102, "a83": 103, "a84": 104, "a85": 105, "a86": 106, "a87": 107, "a88": 108, "a89": 109, "a90": 110, "a9": 111, "a91": 112, "a92": 113, "a93": 114, "a94": 115, "a95": 116, "a96": 117, "a97": 118, "a98": 119, "a99": 120, "b00": 121, "b0": 122, "b": 123, "b01": 124, "b02": 125, "b03": 126, "b04": 127, "b05": 128, "b06": 129, "b07": 130, "b08": 131, "b09": 132, "b10": 133, "b1": 134, "b11": 135, "b12": 136, "b13": 137, "b14": 138, "b15": 139, "b16": 140, "b17": 141, "b18": 142, "b19": 143, "b20": 144, "b2": 145, "b21": 146, "b22": 147, "b23": 148, "b24": 149, "b25": 150, "b26": 151, "b27": 152, "b28": 153, "b29": 154, "b30": 155, "b3": 156, "b31": 157, "b32": 158, "b33": 159, "b34": 160, "b35": 161, "b36": 162, "b37": 163, "b38": 164, "b39": 165, "b40": 166, "b4": 167, "b41": 168, "b42": 169, "b43": 170, "b44": 171, "b45": 172, "b46": 173, "b47": 174, "b48": 175, "b49": 176, "b50": 177, "b5": 178, "b51": 179, "b52": 180, "b53": 181, "b54": 182, "b55": 183, "b56": 184, "b57": 185, "b58": 186, "b59": 187, "b60": 188, "b6": 189, "b61": 190, "b62": 191, "b63": 192, "b64": 193, "b65": 194, "b66": 195, "b67": 196, "b68": 197, "b69": 198, "b70": 199, "b7": 200, "b71": 201, "b72": 202, "b73": 203, "b74": 204, "b75": 205, "b76": 206, "b77": 207, "b78": 208, "b79": 209, "b80": 210, "b8": 211, "b81": 212, "b82": 213, "b83": 214, "b84": 215, "b85": 216, "b86": 217, "b87": 218, "b88": 219, "b89": 220, "b90": 221, "b9": 222, "b91": 223, "b92": 224, "b93": 225, "b94": 226, "b95": 227, "b96": 228, "b97": 229, "b98": 230, "b99": 231, "c00": 232, "c0": 233, "c": 234, "c01": 235, "c02": 236, "c03": 237, "c04": 238, "c05": 239, "c06": 240, "c07": 241, "c08": 242, "c09": 243, "c10": 244, "c1": 245, "c11": 246, "c12": 247, "c13": 248, "c14": 249, "c15": 250, "c16": 251, "c17": 252, "c18": 253, "c19": 254, "c20": 255, "c2": 256, "c21": 257, "c22": 258, "c23": 259, "c24": 260, "c25": 261, "c26": 262, "c27": 263, "c28": 264, "c29": 265, "c30": 266, "c3": 267, "c31": 268, "c32": 269, "c33": 270, "c34": 271, "c35": 272, "c36": 273, "c37": 274, "c38": 275, "c39": 276, "c40": 277, "c4": 278, "c41": 279, "c42": 280, "c43": 281, "c44": 282, "c45": 283, "c46": 284, "c47": 285, "c48": 286, "c49": 287, "c50": 288, "c5": 289, "c51": 290, "c52": 291, "c53": 292, "c54": 293, "c55": 294, "c56": 295, "c57": 296, "c58": 297, "c59": 298, "c60": 299, "c6": 300, "c61": 301, "c62": 302, "c63": 303, "c64": 304, "c65": 305, "c66": 306, "c67": 307, "c68": 308, "c69": 309, "c70": 310, "c7": 311, "c71": 312, "c72": 313, "c73": 314, "c74": 315, "c75": 316, "c76": 317, "c77": 318, "c78": 319, "c79": 320, "c80": 321, "c8": 322, "c81": 323, "c82": 324, "c83": 325, "c84": 326, "c85": 327, "c86": 328, "c87": 329, "c88": 330, "c89": 331, "c90": 332, "c9": 333, "c91": 334, "c92": 335, "c93": 336, "c94": 337, "c95": 338, "c96": 339, "c97": 340, "c98": 341, "c99": 342, "forward": 343, "forwar": 344, "d": 345, "forwa": 346, "r": 347, "forw": 348, "for": 349, "w": 350, "fo": 351, "f": 352, "o": 353, "backward": 354, "backwar": 355, "backwa": 356, "backw": 357, "back": 358, "bac": 359, "k": 360, "ba": 361}
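
The vocabulary is small (362 entries, ids 0 through 361): bare digits and letters, the combinations a0–c9 and a00–c99, and the words "forward" and "backward" together with every prefix of each. The `FixedVocabTokenizer` class itself is not included in the commit, so the following greedy longest-match tokenizer is only a sketch of how a fixed vocabulary like this one might plausibly be applied:

```python
import json

with open("vocab.json") as f:
    vocab = json.load(f)

# Greedy longest-match over the fixed vocabulary; falls back to the
# configured unk_token ("a00") when no entry covers the next character.
# This is an assumption about FixedVocabTokenizer's behavior, which is
# not defined anywhere in this commit.
def tokenize(text: str, unk_token: str = "a00") -> list[str]:
    max_len = max(len(t) for t in vocab)  # 8, for "backward"
    tokens, i = [], 0
    while i < len(text):
        # Try the longest candidate substring first, then shrink.
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(unk_token)  # no vocab entry covers text[i]
            i += 1
    return tokens

print(tokenize("forwardb07"))  # ['forward', 'b07']
print(tokenize("backward9"))   # ['backward', '9']
```

The id ordering, where each long word precedes its shorter prefixes and their constituent characters, suggests the vocabulary was emitted in the order entries were discovered, but that is a guess from the file alone.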