Upload 13 files
- 1_Pooling/config.json +10 -0
- README.md +128 -0
- config.json +25 -0
- config_sentence_transformers.json +9 -0
- eval/mse_evaluation__results.csv +29 -0
- eval/similarity_evaluation_sts-dev_results.csv +29 -0
- model.safetensors +3 -0
- modules.json +14 -0
- sentence_bert_config.json +4 -0
- special_tokens_map.json +44 -0
- tokenizer.json +0 -0
- tokenizer_config.json +71 -0
- vocab.txt +0 -0
1_Pooling/config.json
ADDED
@@ -0,0 +1,10 @@
+{
+    "word_embedding_dimension": 384,
+    "pooling_mode_cls_token": false,
+    "pooling_mode_mean_tokens": true,
+    "pooling_mode_max_tokens": false,
+    "pooling_mode_mean_sqrt_len_tokens": false,
+    "pooling_mode_weightedmean_tokens": false,
+    "pooling_mode_lasttoken": false,
+    "include_prompt": true
+}
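This pooling configuration selects plain mean pooling over token embeddings. For illustration, a minimal sketch of building the equivalent module by hand with the sentence-transformers API (normally the library reads this config for you when loading the model):

```python
from sentence_transformers.models import Pooling

# Mean pooling over the 384-dim token embeddings, mirroring 1_Pooling/config.json
pooling = Pooling(
    word_embedding_dimension=384,
    pooling_mode_mean_tokens=True,   # the only mode enabled in the config
    pooling_mode_cls_token=False,
    pooling_mode_max_tokens=False,
)
print(pooling.get_pooling_mode_str())  # expected: "mean"
```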
README.md
ADDED
@@ -0,0 +1,128 @@
+---
+library_name: sentence-transformers
+pipeline_tag: sentence-similarity
+tags:
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+- transformers
+
+---
+
+# {MODEL_NAME}
+
+This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+<!--- Describe your model here -->
+
+## Usage (Sentence-Transformers)
+
+Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
+
+```
+pip install -U sentence-transformers
+```
+
+Then you can use the model like this:
+
+```python
+from sentence_transformers import SentenceTransformer
+
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('{MODEL_NAME}')
+embeddings = model.encode(sentences)
+print(embeddings)
+```
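Since the card pitches semantic search, it may help to see two embeddings actually compared. A minimal sketch using the cosine-similarity helper bundled with sentence-transformers (`{MODEL_NAME}` is the placeholder from this card; the sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(["A man is eating food.", "A man is eating a piece of bread."])

# Cosine similarity between the two 384-dimensional embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))  # values near 1.0 indicate high similarity
```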
+
+
+## Usage (HuggingFace Transformers)
+Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
+
+```python
+from transformers import AutoTokenizer, AutoModel
+import torch
+
+
+# Mean pooling: take the attention mask into account for correct averaging
+def mean_pooling(model_output, attention_mask):
+    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
+    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+# Sentences we want sentence embeddings for
+sentences = ['This is an example sentence', 'Each sentence is converted']
+
+# Load model from HuggingFace Hub
+tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
+model = AutoModel.from_pretrained('{MODEL_NAME}')
+
+# Tokenize sentences
+encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+# Compute token embeddings
+with torch.no_grad():
+    model_output = model(**encoded_input)
+
+# Perform pooling. In this case, mean pooling.
+sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+print("Sentence embeddings:")
+print(sentence_embeddings)
+```
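The pooled embeddings above are not length-normalized. If you want a plain dot product to equal cosine similarity, an optional follow-up step (standard PyTorch, not part of the original card) is:

```python
import torch.nn.functional as F

# L2-normalize each embedding so dot products become cosine similarities
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings @ sentence_embeddings.T)  # pairwise cosine-similarity matrix
```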
+
+
+## Evaluation Results
+
+<!--- Describe how your model was evaluated -->
+
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+
+
+## Training
+The model was trained with the following parameters:
+
+**DataLoader**:
+
+`torch.utils.data.dataloader.DataLoader` of length 137553 with parameters:
+```
+{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+```
+
+**Loss**:
+
+`sentence_transformers.losses.MSELoss.MSELoss`
+
+Parameters of the `fit()` method:
+```
+{
+    "epochs": 1,
+    "evaluation_steps": 5000,
+    "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
+    "max_grad_norm": 1,
+    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
+    "optimizer_params": {
+        "eps": 1e-06,
+        "lr": 0.0001
+    },
+    "scheduler": "WarmupLinear",
+    "steps_per_epoch": null,
+    "warmup_steps": 1000,
+    "weight_decay": 0.01
+}
+```
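`MSELoss` in sentence-transformers regresses student embeddings onto precomputed teacher embeddings, i.e. the usual embedding-distillation setup. A sketch of how the parameters above map onto the 2.x-style `fit()` API; the teacher model and training texts are illustrative assumptions, not taken from this card:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

student = SentenceTransformer('TaylorAI/gte-tiny')          # base model named in config.json
teacher = SentenceTransformer('teacher-model-placeholder')  # hypothetical stronger teacher

texts = ["This is an example sentence", "Each sentence is converted"]  # illustrative data
examples = [InputExample(texts=[t], label=teacher.encode(t)) for t in texts]

train_dataloader = DataLoader(examples, shuffle=True, batch_size=64)
train_loss = losses.MSELoss(model=student)

# Hyperparameters mirroring the card: 1 epoch, WarmupLinear schedule, AdamW(lr=1e-4, eps=1e-6)
student.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1000,
    optimizer_params={'lr': 1e-4, 'eps': 1e-6},
    weight_decay=0.01,
    max_grad_norm=1,
)
```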
+
+
+## Full Model Architecture
+```
+SentenceTransformer(
+  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+)
+```
+
+## Citing & Authors
+
+<!--- Describe where people can find more information -->
config.json
ADDED
@@ -0,0 +1,25 @@
+{
+  "_name_or_path": "TaylorAI/gte-tiny",
+  "architectures": [
+    "BertModel"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "classifier_dropout": null,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 384,
+  "initializer_range": 0.02,
+  "intermediate_size": 1536,
+  "layer_norm_eps": 1e-12,
+  "max_position_embeddings": 512,
+  "model_type": "bert",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 3,
+  "pad_token_id": 0,
+  "position_embedding_type": "absolute",
+  "torch_dtype": "float32",
+  "transformers_version": "4.40.0",
+  "type_vocab_size": 2,
+  "use_cache": true,
+  "vocab_size": 30522
+}
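As a sanity check, these hyperparameters account for the checkpoint size below: a 3-layer BERT with hidden size 384 comes to roughly 17.2M parameters (about 17.4M with BERT's pooler layer), and at 4 bytes per float32 weight that lands within a few KB of the 69,565,312-byte model.safetensors. A back-of-the-envelope calculation:

```python
# Rough BERT parameter count from the config.json values above
vocab, hidden, layers, inter, max_pos = 30522, 384, 3, 1536, 512

embeddings = (vocab + max_pos + 2) * hidden + 2 * hidden       # token/position/type tables + LayerNorm
attention  = 4 * (hidden * hidden + hidden)                    # Q, K, V and output projections
ffn        = hidden * inter + inter + inter * hidden + hidden  # two feed-forward layers
layer      = attention + ffn + 2 * 2 * hidden                  # plus two LayerNorms per layer
total      = embeddings + layers * layer

print(f"{total:,} params, ~{total * 4 / 1e6:.1f} MB at float32")
# 17,241,984 params, ~69.0 MB (the pooler adds another 384*384 + 384 weights)
```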
config_sentence_transformers.json
ADDED
@@ -0,0 +1,9 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.2.2",
+    "transformers": "4.34.0",
+    "pytorch": "2.0.1+cu118"
+  },
+  "prompts": {},
+  "default_prompt_name": null
+}
eval/mse_evaluation__results.csv
ADDED
@@ -0,0 +1,29 @@
+epoch,steps,MSE
+0,5000,0.47166235744953156
+0,10000,0.37070035468786955
+0,15000,0.32915491610765457
+0,20000,0.3022111253812909
+0,25000,0.2860546577721834
+0,30000,0.27226749807596207
+0,35000,0.2621056279167533
+0,40000,0.25275147054344416
+0,45000,0.24559586308896542
+0,50000,0.2409756649285555
+0,55000,0.23480404634028673
+0,60000,0.23111277259886265
+0,65000,0.2271266421303153
+0,70000,0.22308838088065386
+0,75000,0.2202761359512806
+0,80000,0.21781811956316233
+0,85000,0.21458840928971767
+0,90000,0.21309000439941883
+0,95000,0.2095935633406043
+0,100000,0.20842656958848238
+0,105000,0.20773126743733883
+0,110000,0.2059056656435132
+0,115000,0.20338515751063824
+0,120000,0.20260869059711695
+0,125000,0.2014710335060954
+0,130000,0.20016569178551435
+0,135000,0.199802010320127
+0,-1,0.19947525579482317
eval/similarity_evaluation_sts-dev_results.csv
ADDED
@@ -0,0 +1,29 @@
+epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
+0,5000,0.8482784841781801,0.8524673285103249,0.8527363303816113,0.8536974318032373,0.8507429053520061,0.8518087596830178,0.7290294796138388,0.7222457751199094
+0,10000,0.851522289059487,0.8550575570365678,0.8561895858778579,0.8564992433428477,0.8549097711624272,0.8553545699560491,0.7413520865451514,0.7373012959517407
+0,15000,0.853710809631439,0.8565147992750579,0.8577158540045073,0.8577098527181514,0.8565144787975028,0.8568546919864072,0.7504689821086896,0.7449558272692272
+0,20000,0.8540319978452439,0.8569741180973768,0.8578535262979087,0.8580078785972628,0.8565983775145072,0.8570460462108194,0.7479221251875051,0.7434152209949655
+0,25000,0.8546565882275716,0.8578572076747778,0.8590097969084035,0.8589626258235827,0.8578829955242684,0.8580656447783175,0.7465150443680822,0.7418695315450354
+0,30000,0.8566672105901525,0.8594213535205055,0.8602676272434086,0.8602408135372777,0.8590713876871994,0.8593383146946152,0.7526094791593237,0.748224769026938
+0,35000,0.8571100252746132,0.8600871771852339,0.8608204232615566,0.8610102144234375,0.8595434622508469,0.8599233613611874,0.753956316112839,0.7502490202011034
+0,40000,0.8559139037907579,0.8588256318306422,0.859604173569988,0.8597011761546718,0.8584894417578237,0.8587051292048292,0.7474515413384551,0.741869366064494
+0,45000,0.856265008742947,0.8592787334409433,0.8601104783499282,0.8601181309432502,0.8592395715659137,0.8594094204897488,0.7484793142063589,0.7435075575864867
+0,50000,0.8569211165237697,0.8598292756363145,0.8605543442052597,0.8606103458204372,0.8595676020012094,0.8597390195971244,0.7515427958024276,0.7472764189537775
+0,55000,0.8562075563924645,0.8589091799310042,0.859841781260059,0.8598799538579605,0.8587882409340232,0.8590060600172196,0.7475246493261491,0.7428149005248468
+0,60000,0.8569707723246641,0.8595952806871071,0.860566323618586,0.8604651304178604,0.8596098187861387,0.8597964769320989,0.7535645460750052,0.7486540573552126
+0,65000,0.8569103452131722,0.8600268782112365,0.8614217305429412,0.8608495424800336,0.8603240057571976,0.8602177921690977,0.7460511702193234,0.7402588649530701
+0,70000,0.8573467610257368,0.8598835247841208,0.8610375445882468,0.861163641575446,0.8600034121583955,0.8602742993254814,0.7495966667469552,0.7445113463075477
+0,75000,0.8569202261382182,0.8596530950351353,0.8607016007767625,0.8604840379042015,0.8596498923735942,0.8596676423237303,0.7515624025047947,0.7468403686053563
+0,80000,0.8576730599037291,0.8604011756225682,0.8615432661698413,0.8612236797489058,0.8605588884907639,0.8603870894388059,0.750107686339764,0.7452585641329533
+0,85000,0.8583390040790053,0.8611601059858207,0.8621337513282632,0.8619397257402335,0.861063106730213,0.8609504973004285,0.7546225161201876,0.7497468572659596
+0,90000,0.8575744131143879,0.8607364409784936,0.8615518120348374,0.8614321310843385,0.8605467563725214,0.8605709159532309,0.7542955531758224,0.7494734075655113
+0,95000,0.8581535214603767,0.8608581047635226,0.8614857625667244,0.861603823373491,0.8604308998084226,0.8605555341452761,0.7562258408218162,0.7519859032951348
+0,100000,0.8575272792375968,0.8604531151601916,0.8616026317998216,0.861469841840493,0.8607229105317097,0.8604864062332358,0.7509198586787145,0.7465743710908277
+0,105000,0.8575445480655535,0.8604985030918701,0.8615044926265819,0.8612746594455564,0.8605518068670269,0.8605125841872215,0.7522032211150207,0.7479008962525083
+0,110000,0.8576966144385598,0.8605407451139044,0.8614088201804478,0.8614305510120751,0.8603840346381387,0.8605051251073474,0.7517030043173099,0.7470494201254125
+0,115000,0.857477524915541,0.8604521454086326,0.8612827456575336,0.8611852366618776,0.8603034722973986,0.8603012520660652,0.7558904929775052,0.7514950677688355
+0,120000,0.8577417338422507,0.8606742469853947,0.8615645142474043,0.861536146156713,0.8604828695956938,0.8605137336542058,0.7514754637331444,0.7470229770466421
+0,125000,0.8579218216815234,0.860733086883652,0.8618307953661968,0.8615596098198863,0.8608479412078883,0.8606167594012378,0.7522352230909667,0.7479105813125642
+0,130000,0.858032479066429,0.8608713662140498,0.8618483310090321,0.8616822014176498,0.8609555683008904,0.8608600630743983,0.7529330321732546,0.748948000177042
+0,135000,0.8578263948202207,0.860584293186662,0.8617266127627646,0.8614490090859027,0.8607919607942601,0.8606299472574345,0.7530837013484516,0.7488705563911164
+0,-1,0.8578309282683841,0.8606207432650688,0.8617281651065495,0.86147026990066,0.8607882122917774,0.8605566340396457,0.7527337207540649,0.7485823749213384
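Both CSVs log one row every 5,000 steps, with steps = -1 marking the end-of-epoch evaluation: distillation MSE falls monotonically from 0.472 to 0.199, while STS-dev cosine Spearman plateaus around 0.861 (peaking at step 85,000). A minimal sketch for inspecting them, assuming pandas is available:

```python
import pandas as pd

mse = pd.read_csv("eval/mse_evaluation__results.csv")
sts = pd.read_csv("eval/similarity_evaluation_sts-dev_results.csv")

print(mse.tail(1))                     # final MSE: 0.1995 (as logged by the evaluator)
print(sts["cosine_spearman"].max())    # best STS-dev cosine Spearman: ~0.8612
```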
model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0af1f4811878b5fa99a1fd8e4f27877bcb1ac0869d3ac73cc0a70620aae55af6
+size 69565312
modules.json
ADDED
@@ -0,0 +1,14 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  }
+]
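modules.json is how `SentenceTransformer` rebuilds the pipeline: module 0 (the transformer) loads from the repository root, module 1 from `1_Pooling/`. The same two-module stack can be assembled by hand; a sketch assuming the sentence-transformers API, with the `{MODEL_NAME}` placeholder as in the card:

```python
from sentence_transformers import SentenceTransformer, models

word_embedding_model = models.Transformer('{MODEL_NAME}', max_seq_length=512)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 384 for this model
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```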
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 512,
+  "do_lower_case": false
+}
special_tokens_map.json
ADDED
@@ -0,0 +1,44 @@
+{
+  "additional_special_tokens": [
+    "[PAD]",
+    "[UNK]",
+    "[CLS]",
+    "[SEP]",
+    "[MASK]"
+  ],
+  "cls_token": {
+    "content": "[CLS]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "mask_token": {
+    "content": "[MASK]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": {
+    "content": "[PAD]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "sep_token": {
+    "content": "[SEP]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "unk_token": {
+    "content": "[UNK]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
+}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,71 @@
+{
+  "added_tokens_decoder": {
+    "0": {
+      "content": "[PAD]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "100": {
+      "content": "[UNK]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "101": {
+      "content": "[CLS]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "102": {
+      "content": "[SEP]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "103": {
+      "content": "[MASK]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "additional_special_tokens": [
+    "[PAD]",
+    "[UNK]",
+    "[CLS]",
+    "[SEP]",
+    "[MASK]"
+  ],
+  "clean_up_tokenization_spaces": true,
+  "cls_token": "[CLS]",
+  "do_basic_tokenize": true,
+  "do_lower_case": true,
+  "mask_token": "[MASK]",
+  "max_length": 128,
+  "model_max_length": 512,
+  "never_split": null,
+  "pad_to_multiple_of": null,
+  "pad_token": "[PAD]",
+  "pad_token_type_id": 0,
+  "padding_side": "right",
+  "sep_token": "[SEP]",
+  "stride": 0,
+  "strip_accents": null,
+  "tokenize_chinese_chars": true,
+  "tokenizer_class": "BertTokenizer",
+  "truncation_side": "right",
+  "truncation_strategy": "longest_first",
+  "unk_token": "[UNK]"
+}
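Note `do_lower_case: true` and the classic BERT special-token IDs (0, 100, 101, 102, 103). A quick behavioral check, assuming the repository ships the standard 30,522-entry uncased WordPiece vocabulary (the token IDs in the comments are what that vocabulary would produce):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
ids = tokenizer("Hello World")["input_ids"]
print(ids)                                   # e.g. [101, 7592, 2088, 102]
print(tokenizer.convert_ids_to_tokens(ids))  # ['[CLS]', 'hello', 'world', '[SEP]'] -- lowercased
```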
vocab.txt
ADDED
The diff for this file is too large to render.