vodkaslime committed on
Commit
98e6f21
1 Parent(s): 04e2343

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -25,7 +25,6 @@
  *.safetensors filter=lfs diff=lfs merge=lfs -text
  saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
  *.tflite filter=lfs diff=lfs merge=lfs -text
  *.tgz filter=lfs diff=lfs merge=lfs -text
  *.wasm filter=lfs diff=lfs merge=lfs -text
 
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ license: bsd-3-clause
+ ---
+
+ # CodeT5+ 220M
+
+ ## Model description
+
+ [CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
+ It is introduced in the paper:
+
+ [CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
+ by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
+
+ Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
+ Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
+ Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
+
+ ## How to use
+
+ This model can be loaded with the `T5ForConditionalGeneration` class and uses the same tokenizer as the original [CodeT5](https://github.com/salesforce/CodeT5).
+
+ ```python
+ from transformers import T5ForConditionalGeneration, AutoTokenizer
+
+ checkpoint = "Salesforce/codet5p-220m"
+ device = "cuda"  # for GPU usage or "cpu" for CPU usage
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
+
+ inputs = tokenizer.encode("def print_hello_world():<extra_id_0>", return_tensors="pt").to(device)
+ outputs = model.generate(inputs, max_length=10)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ # ==> print "Hello World"
+ ```
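+
+ The shipped weights are stored in `float16` (see `config.json` in this repository), so on GPU you can load them in half precision directly and roughly halve memory use. A minimal sketch, assuming a CUDA device is available:
+
+ ```python
+ import torch
+ from transformers import T5ForConditionalGeneration
+
+ # Load in the dtype the checkpoint was saved in (float16) instead of upcasting to float32.
+ model = T5ForConditionalGeneration.from_pretrained(
+     "Salesforce/codet5p-220m", torch_dtype=torch.float16
+ ).to("cuda")
+ ```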
+
+ ## Pretraining data
+
+ This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
+ The data is preprocessed by keeping only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
+ Supported languages (9 in total) are as follows:
+ `c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
+
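+ If you want to pull a comparable slice of data yourself, the `github-code` dataset card documents `languages` and `licenses` filters on its loading script. The exact preprocessing used for this checkpoint is not reproduced here, so treat the following as an illustrative sketch (note the dataset card spells the Apache license "apache-2.0"):
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream a permissively licensed, single-language slice of github-code.
+ # Filter values follow the dataset card, not necessarily the exact
+ # CodeT5+ preprocessing pipeline. Newer `datasets` versions may also
+ # require trust_remote_code=True for script-based datasets.
+ ds = load_dataset(
+     "codeparrot/github-code",
+     split="train",
+     streaming=True,
+     languages=["Python"],
+     licenses=["mit", "apache-2.0", "bsd-3-clause"],
+ )
+ print(next(iter(ds))["code"][:200])
+ ```
+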
+ ## Training procedure
+
+ This checkpoint is trained on unimodal code data in the first-stage pretraining, which covers a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
+ Please refer to the paper for more details.
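+
+ To make the span-denoising objective concrete: contiguous spans of the input are replaced with sentinel tokens, and the decoder learns to reconstruct the masked spans behind the same sentinels. A schematic example in the standard T5 style (hypothetical strings, not the actual pretraining code):
+
+ ```python
+ # Input with two masked spans marked by sentinel tokens:
+ source = "def add(a, b):<extra_id_0> a +<extra_id_1>"
+ # Target enumerates the masked spans after their sentinels:
+ target = "<extra_id_0> return<extra_id_1> b"
+ ```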
+
+ ## Evaluation results
+
+ CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
+ Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to SoTA baselines, e.g.,
+ 8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
+ In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models below a billion parameters significantly outperform many LLMs of up to 137B parameters.
+ Particularly, in the zero-shot text-to-code generation task on the HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
+ Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
+
+
+ ## BibTeX entry and citation info
+
+ ```bibtex
+ @article{wang2023codet5plus,
+   title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
+   author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
+   journal={arXiv preprint},
+   year={2023}
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {}
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "Salesforce/codet5p-220m",
+   "architectures": [
+     "T5ForConditionalGeneration"
+   ],
+   "bos_token_id": 1,
+   "d_ff": 3072,
+   "d_kv": 64,
+   "d_model": 768,
+   "decoder_start_token_id": 0,
+   "dense_act_fn": "relu",
+   "dropout_rate": 0.1,
+   "eos_token_id": 2,
+   "feed_forward_proj": "relu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "is_gated_act": false,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "t5",
+   "n_positions": 512,
+   "num_decoder_layers": 12,
+   "num_heads": 12,
+   "num_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float16",
+   "transformers_version": "4.21.3",
+   "use_cache": true,
+   "vocab_size": 32100
+ }
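
These fields describe a 12-layer encoder / 12-layer decoder T5 (`d_model` 768, 12 attention heads) with a 512-token context and weights stored in `float16`. A quick sanity check via `AutoConfig`, as a sketch:

```python
from transformers import AutoConfig

# Values asserted here are taken from the config.json above.
cfg = AutoConfig.from_pretrained("Salesforce/codet5p-220m")
assert cfg.num_layers == cfg.num_decoder_layers == 12
assert cfg.d_model == 768 and cfg.n_positions == 512
```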
ctranslate2/config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "add_source_bos": false,
+   "add_source_eos": false,
+   "bos_token": "<pad>",
+   "decoder_start_token": "<pad>",
+   "eos_token": "</s>",
+   "layer_norm_epsilon": null,
+   "unk_token": "<unk>"
+ }
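
The `ctranslate2/` folder holds a copy of the model converted for the [CTranslate2](https://github.com/OpenNMT/CTranslate2) inference engine. A minimal usage sketch, assuming `ctranslate2` and `transformers` are installed and the folder has been downloaded locally:

```python
import ctranslate2
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5p-220m")
translator = ctranslate2.Translator("ctranslate2/", device="cpu")  # local path to the converted model

# CTranslate2 consumes and produces token strings rather than ids.
tokens = tokenizer.convert_ids_to_tokens(
    tokenizer.encode("def print_hello_world():<extra_id_0>")
)
result = translator.translate_batch([tokens])[0]
ids = tokenizer.convert_tokens_to_ids(result.hypotheses[0])
print(tokenizer.decode(ids, skip_special_tokens=True))
```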
ctranslate2/model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6273a5c2792269054a4b116b1300f731c272e8daab62ff8ac35b872aaf398b02
+ size 445782212
ctranslate2/shared_vocabulary.json ADDED
The diff for this file is too large to render. See raw diff
 
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27d09f5323c322333d0d846d701e865f9663ee58b5aaf201092f0ca518d067f0
+ size 445800957
special_tokens_map.json ADDED
@@ -0,0 +1,147 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true
+   },
+   "eos_token": {
+     "content": "</s>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true
+   },
+   "sep_token": {
+     "content": "</s>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true
+   },
+   "cls_token": {
+     "content": "<s>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true
+   },
+   "mask_token": { "content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+   "additional_special_tokens": [
+     { "content": "<extra_id_99>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_98>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_97>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_96>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_95>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_94>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_93>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_92>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_91>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_90>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_89>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_88>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_87>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_86>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_85>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_84>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_83>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_82>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_81>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_80>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_79>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_78>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_77>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_76>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_75>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_74>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_73>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_72>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_71>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_70>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_69>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_68>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_67>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_66>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_65>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_64>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_63>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_62>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_61>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_60>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_59>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_58>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_57>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_56>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_55>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_54>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_53>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_52>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_51>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_50>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_49>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_48>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_47>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_46>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_45>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_44>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_43>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_42>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_41>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_40>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_39>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_38>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_37>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_36>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_35>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_34>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_33>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_32>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_31>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_30>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_29>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_28>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_27>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_26>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_25>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_24>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_23>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_22>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_21>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_20>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_19>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_18>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_17>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_16>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_15>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_14>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_13>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_12>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_11>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_10>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_9>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_8>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_7>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_6>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_5>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_4>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_3>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_2>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_1>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true },
+     { "content": "<extra_id_0>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true }
+   ]
+ }
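
Alongside the usual `<s>`/`</s>`/`<pad>`/`<mask>` tokens, the map registers `<extra_id_99>` down to `<extra_id_0>`: the 100 sentinel tokens used for span denoising and infilling (as in the `<extra_id_0>` example in the README). A quick check, as a sketch:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Salesforce/codet5p-220m")
sentinels = [t for t in tok.additional_special_tokens if t.startswith("<extra_id_")]
print(len(sentinels))  # expected: 100
```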
tabby.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "auto_model": "AutoModelForSeq2SeqLM"
+ }
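
`tabby.json` appears to be registry metadata for the [Tabby](https://github.com/TabbyML/tabby) code-completion server; its `auto_model` field names the `transformers` auto class to instantiate for this checkpoint. The equivalent call in plain `transformers`, as a sketch:

```python
from transformers import AutoModelForSeq2SeqLM

# AutoModelForSeq2SeqLM resolves to T5ForConditionalGeneration for this config.
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5p-220m")
```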
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+   "errors": "replace",
+   "unk_token": {
+     "content": "<unk>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true,
+     "__type": "AddedToken"
+   },
+   "bos_token": {
+     "content": "<s>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true,
+     "__type": "AddedToken"
+   },
+   "eos_token": {
+     "content": "</s>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true,
+     "__type": "AddedToken"
+   },
+   "add_prefix_space": false,
+   "sep_token": {
+     "content": "</s>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true,
+     "__type": "AddedToken"
+   },
+   "cls_token": {
+     "content": "<s>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true,
+     "__type": "AddedToken"
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "single_word": false,
+     "lstrip": false,
+     "rstrip": false,
+     "normalized": true,
+     "__type": "AddedToken"
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "single_word": false,
+     "lstrip": true,
+     "rstrip": false,
+     "normalized": true,
+     "__type": "AddedToken"
+   },
+   "model_max_length": 512,
+   "tokenizer_class": "RobertaTokenizer"
+ }
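
`model_max_length` (512) matches the model's `n_positions`, so inputs longer than 512 tokens should be truncated at tokenization time. A sketch:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Salesforce/codet5p-220m")
# truncation=True clips to model_max_length (512) when no max_length is given.
enc = tok("x = 1\n" * 1000, truncation=True)
print(len(enc.input_ids))  # 512
```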
vocab.json ADDED
The diff for this file is too large to render. See raw diff