jamiehudson committed
Commit 903380b
1 Parent(s): 2225cd8

Push model using huggingface_hub.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
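
This pooling configuration takes the [CLS] token of the 1024-dimensional BGE output as the sentence embedding, rather than mean- or max-pooling over tokens. For illustration, a minimal sketch of the sentence-transformers module this file parameterizes (the library reads this config automatically; you would not normally build it by hand):

```python
from sentence_transformers import models

# Mirrors 1_Pooling/config.json: CLS-token pooling over 1024-dim token embeddings
pooling = models.Pooling(
    word_embedding_dimension=1024,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
    pooling_mode_max_tokens=False,
)
```
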
README.md ADDED
@@ -0,0 +1,299 @@
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
- f1
- precision
- recall
widget:
- text: this is complete crap. i asked exactly five questions and he asked me to start
    a new topic, after which my daily limit was reached. why the hell did you add
    this restriction that makes the chat process completely useless??
- text: brand wow, brands product is amazing! its definitely going to revolutionize
    product workflows! great job, brand!
- text: why though? whats the harm in using ai as a tool. theres more to ai than product.
- text: i got invited to participate in an early preview of the new product ai-powered
    product in product. as a scientific researcher, i'm finding this an amazingly
    powerful tool. this technology is simply revolutionary.
- text: brand is the premier anti-fascist enterprise in the world today buy product!
    stop fascism!
pipeline_tag: text-classification
inference: true
base_model: BAAI/bge-large-en-v1.5
model-index:
- name: SetFit with BAAI/bge-large-en-v1.5
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.88
      name: Accuracy
    - type: f1
      value:
      - 0.8846153846153847
      - 0.6666666666666666
      - 0.9222520107238605
      name: F1
    - type: precision
      value:
      - 0.8214285714285714
      - 0.5
      - 1.0
      name: Precision
    - type: recall
      value:
      - 0.9583333333333334
      - 1.0
      - 0.8557213930348259
      name: Recall
---

# SetFit with BAAI/bge-large-en-v1.5

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label   | Examples |
|:--------|:---------|
| peak    | <ul><li>'after using product to summarize and gather main points of hundreds of research articles that are 50+ pages, i think i can confidently say that brand is on the right track with regards to implementing product in their business. truly extraordinary.'</li><li>'i was stuck in a error for 2+ hours and my bingey bot cleared it!! awesome ai product'</li><li>'product in teams: in teams, product transforms meetings. it organizes thoughts, maintains context, and facilitates collaborative brainstorming, making every meeting more productive.'</li></ul> |
| neither | <ul><li>">youll receive the test via email and will have two hours to complete it. finally, youll return to zoom with the analyst to go over your results together i don't think it's live. op will get the assigment and he/she has 2 hours to complete it. if this is correct, then op is an idiot because there are thousands of examples online and then there's product. op, start working on the fundamentals and pay the $20 product suscription for product."</li><li>'utilising advanced technologies with brand to perform a practical demonstration for a client on themes of cyber security, product, product, digital transformation, product, the product and more. these skills are rapidly being adopted for safety and efficielnkd.in/ghumbffm'</li><li>"another great example of the elites in the tech world using control of the information to infl your thoughts and actions. as product becomes more prevalent doing your own research will be essential. will be interesting to see if anyone finds success with designing a true 'unbiased' product"</li></ul> |
| pit     | <ul><li>"the utter disappointment of learning from an amazing passionate teacher for two years who gives you decades of knowledge in 2 years and then you continue the subject and get some bland intellectual from the capital who can't even make a product presentation"</li><li>'the amount of times that product has been forced on me against my will after updates is just infuriating. product just taking advantage of the market position they (illegally) established long ago. near-universal software compatibility and being the default os of the general market are why people keep using them. they are in the position where they can fail upwards. and it sucks for the rest of us.'</li><li>'literally canceling my subscription on my product because this is terrible business practice. forcing subscription services to squeeze out every last dollar is disgusting especially when your whole program is a rip off of another established program. cringe'</li></ul> |

## Evaluation

### Metrics
| Label   | Accuracy | F1 | Precision | Recall |
|:--------|:---------|:---|:----------|:-------|
| **all** | 0.88     | [0.8846153846153847, 0.6666666666666666, 0.9222520107238605] | [0.8214285714285714, 0.5, 1.0] | [0.9583333333333334, 1.0, 0.8557213930348259] |

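The bracketed values are per-class scores, one entry per label. As a point of reference, per-class figures like these can be computed with scikit-learn; a minimal sketch, using hypothetical `y_true`/`y_pred` label lists since the card's test split is not published:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical gold and predicted labels standing in for the unpublished test split
y_true = ["peak", "peak", "pit", "neither", "neither"]
y_pred = ["peak", "pit", "pit", "neither", "neither"]

accuracy = accuracy_score(y_true, y_pred)
# average=None returns one score per class rather than a single aggregate
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=None)
print(accuracy, precision, recall, f1)
```
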
## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("jamiehudson/725_model_v6")
# Run inference
preds = model("why though? whats the harm in using ai as a tool. theres more to ai than product.")
```

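If you need class probabilities rather than hard labels, `SetFitModel` also exposes `predict_proba`. A minimal sketch continuing from the snippet above (the example inputs are illustrative):

```python
# Batch inference: one probability row per input, one column per class
probs = model.predict_proba([
    "brand wow, brands product is amazing! great job, brand!",
    "this is complete crap.",
])
print(model.labels)  # the class names the head was trained with
print(probs)
```
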
<!--
### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 10  | 37.08  | 98  |

| Label   | Training Sample Count |
|:--------|:----------------------|
| pit     | 50                    |
| peak    | 50                    |
| neither | 50                    |

### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

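These hyperparameters correspond to `setfit.TrainingArguments`. A minimal sketch of how a comparable run could be set up, assuming a hypothetical `train_dataset` with `text` and `label` columns (the actual training data is not published):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical stand-in for the real 150-example training set (50 per label)
train_dataset = Dataset.from_dict({
    "text": ["amazing product", "this is terrible", "ai is just a tool"],
    "label": ["peak", "pit", "neither"],
})

# BGE body plus the default scikit-learn LogisticRegression head
model = SetFitModel.from_pretrained(
    "BAAI/bge-large-en-v1.5",
    labels=["pit", "peak", "neither"],
)

# loss defaults to CosineSimilarityLoss with cosine distance, matching the list above
args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(3, 3),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```
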
### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 1    | 0.2299        | -               |
| 0.0533 | 50   | 0.1604        | -               |
| 0.1066 | 100  | 0.0071        | -               |
| 0.1599 | 150  | 0.0016        | -               |
| 0.2132 | 200  | 0.0012        | -               |
| 0.2665 | 250  | 0.0012        | -               |
| 0.3198 | 300  | 0.0011        | -               |
| 0.3731 | 350  | 0.0009        | -               |
| 0.4264 | 400  | 0.0008        | -               |
| 0.4797 | 450  | 0.0009        | -               |
| 0.5330 | 500  | 0.0007        | -               |
| 0.5864 | 550  | 0.0008        | -               |
| 0.6397 | 600  | 0.0007        | -               |
| 0.6930 | 650  | 0.0007        | -               |
| 0.7463 | 700  | 0.0007        | -               |
| 0.7996 | 750  | 0.0006        | -               |
| 0.8529 | 800  | 0.0006        | -               |
| 0.9062 | 850  | 0.0006        | -               |
| 0.9595 | 900  | 0.0006        | -               |
| 0.0011 | 1    | 0.0006        | -               |
| 0.0533 | 50   | 0.0005        | -               |
| 0.1066 | 100  | 0.0005        | -               |
| 0.1599 | 150  | 0.0005        | -               |
| 0.2132 | 200  | 0.0004        | -               |
| 0.2665 | 250  | 0.0003        | -               |
| 0.3198 | 300  | 0.0004        | -               |
| 0.3731 | 350  | 0.0003        | -               |
| 0.4264 | 400  | 0.0004        | -               |
| 0.4797 | 450  | 0.0004        | -               |
| 0.5330 | 500  | 0.0002        | -               |
| 0.5864 | 550  | 0.0002        | -               |
| 0.6397 | 600  | 0.0002        | -               |
| 0.6930 | 650  | 0.0002        | -               |
| 0.7463 | 700  | 0.0002        | -               |
| 0.7996 | 750  | 0.0003        | -               |
| 0.8529 | 800  | 0.0002        | -               |
| 0.9062 | 850  | 0.0002        | -               |
| 0.9595 | 900  | 0.0001        | -               |
| 1.0128 | 950  | 0.0002        | -               |
| 1.0661 | 1000 | 0.0002        | -               |
| 1.1194 | 1050 | 0.0002        | -               |
| 1.1727 | 1100 | 0.0001        | -               |
| 1.2260 | 1150 | 0.0001        | -               |
| 1.2793 | 1200 | 0.0001        | -               |
| 1.3326 | 1250 | 0.0001        | -               |
| 1.3859 | 1300 | 0.0001        | -               |
| 1.4392 | 1350 | 0.0001        | -               |
| 1.4925 | 1400 | 0.0001        | -               |
| 1.5458 | 1450 | 0.0001        | -               |
| 1.5991 | 1500 | 0.0001        | -               |
| 1.6525 | 1550 | 0.0001        | -               |
| 1.7058 | 1600 | 0.0001        | -               |
| 1.7591 | 1650 | 0.0001        | -               |
| 1.8124 | 1700 | 0.0001        | -               |
| 1.8657 | 1750 | 0.0001        | -               |
| 1.9190 | 1800 | 0.0001        | -               |
| 1.9723 | 1850 | 0.0001        | -               |
| 2.0256 | 1900 | 0.0001        | -               |
| 2.0789 | 1950 | 0.0001        | -               |
| 2.1322 | 2000 | 0.0001        | -               |
| 2.1855 | 2050 | 0.0001        | -               |
| 2.2388 | 2100 | 0.0001        | -               |
| 2.2921 | 2150 | 0.0001        | -               |
| 2.3454 | 2200 | 0.0001        | -               |
| 2.3987 | 2250 | 0.0001        | -               |
| 2.4520 | 2300 | 0.0001        | -               |
| 2.5053 | 2350 | 0.0001        | -               |
| 2.5586 | 2400 | 0.0001        | -               |
| 2.6119 | 2450 | 0.0001        | -               |
| 2.6652 | 2500 | 0.0001        | -               |
| 2.7186 | 2550 | 0.0001        | -               |
| 2.7719 | 2600 | 0.0001        | -               |
| 2.8252 | 2650 | 0.0001        | -               |
| 2.8785 | 2700 | 0.0001        | -               |
| 2.9318 | 2750 | 0.0001        | -               |
| 2.9851 | 2800 | 0.0001        | -               |

### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.5.1
- Transformers: 4.38.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,32 @@
{
  "_name_or_path": "BAAI/bge-large-en-v1.5",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.38.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,9 @@
{
  "__version__": {
    "sentence_transformers": "2.2.2",
    "transformers": "4.28.1",
    "pytorch": "1.13.0+cu117"
  },
  "prompts": {},
  "default_prompt_name": null
}
config_setfit.json ADDED
@@ -0,0 +1,8 @@
{
  "normalize_embeddings": false,
  "labels": [
    "pit",
    "peak",
    "neither"
  ]
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f701fcf0bd58ca793e19478593c49dc306cd0a1dad2d85f81b07d3aa8bd92da1
size 1340612432
model_head.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d11005ddd53614fb3e7cf263f02909d8ed274cb5c25eecdbbf2c7f84d65cdf9e
size 25471
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
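
modules.json declares the three-stage sentence-transformers pipeline: a Transformer body at the repo root, the CLS pooling module in `1_Pooling`, and a final L2 normalization in `2_Normalize`. Loading the repo with `SentenceTransformer(...)` performs this assembly automatically; a minimal sketch of an equivalent pipeline built by hand, using the base model's name as a stand-in for the repo root:

```python
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("BAAI/bge-large-en-v1.5", max_seq_length=512)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 1024 for bge-large
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
normalize = models.Normalize()  # unit-length embeddings, so dot product == cosine

model = SentenceTransformer(modules=[transformer, pooling, normalize])
```
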
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": true
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff