bhaskars113 committed on
Commit 8ded446 (1 parent: db32373)

Add SetFit model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1536,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
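
This pooling config enables only `pooling_mode_mean_tokens`: a masked mean over the backbone's 1536-dimensional token embeddings. A minimal PyTorch sketch of that operation (illustrative only, not the sentence-transformers source):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 1536); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).type_as(token_embeddings)
    summed = (token_embeddings * mask).sum(dim=1)  # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # guard against empty sequences
    return summed / counts                         # (batch, 1536)
```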
2_Dense/config.json ADDED
@@ -0,0 +1 @@
+ {"in_features": 1536, "out_features": 1024, "bias": true, "activation_function": "torch.nn.modules.linear.Identity"}
2_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53f45a38be44d9b247345217963439137621760d2a54b4ec3eb3dad17ce72ba3
+ size 6295712
README.md ADDED
@@ -0,0 +1,185 @@
+ ---
+ base_model: dunzhang/stella_en_1.5B_v5
+ library_name: setfit
+ metrics:
+ - accuracy
+ pipeline_tag: text-classification
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ widget:
+ - text: I have never owned a F-150. I fell in love with them in 2015 and really like
+     the idea of a rust free body on a truck.
+ - text: No rust. A few scratches on the front bumper cover. A few chips from rocks
+     and other things, but other than that I’d say it’s pretty flawless. No swirls
+     or fading.
+ - text: I wouldn’t cal it bad ownership at all. Hyundai’s paint is notoriously crappy,
+     and rust issues are quite common. Just consider yourself lucky.
+ - text: Our white Atlas CS has SHIT paint. It’s covered in rock chips and rust spots.
+ - text: 'Mines a work in progress: 1979 Ranger XLT 5.4L supercharged (From an 03 Lightning)
+     4R100 auto trans 2015 f-150 chassis w\ 3.73 diffs Orig paint (rough and faded
+     but no rust)'
+ inference: true
+ ---
+
+ # SetFit with dunzhang/stella_en_1.5B_v5
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
+
+ The model has been trained using an efficient few-shot learning technique that involves:
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ - **Sentence Transformer body:** [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5)
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 512 tokens
+ - **Number of Classes:** 2 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ### Model Labels
+ | Label | Examples |
+ |:------|:---------|
+ | 0 | <ul><li>'the other day I happened to be standing taller than the roof and saw that I have two very large rust spots on the roof and tons of little rust spots on the hood and hatch. The ones on the roof look like the paint is washing away, like it was hit with acid. I went to the dealership I purchased it from today and the service tech was pretty useless, referring me to Toyota USA.'</li><li>'Anything but a Corolla. My sister got one new for her graduation, the paint started peeling after just 6 months with rust spots underneath'</li><li>'My dealer hasn’t activated the Qmerit install benefits, I guess I need to call them tomorrow, my sales guy not responding to me at all. I still need to get the paint issue taken care of, found rusted paint when taking delivery.'</li></ul> |
+ | 1 | <ul><li>'I have never owned a F-150. I fell in love with them in 2015 and really like the idea of a rust free body on a truck.'</li><li>"2009 Honda Civic I bought it brand new, it has way over 200k miles on it and while it looks a little worn and the paint is faded there's no rust or dents or anything that would make you look twice at it. I love the damn car."</li><li>'Mine still has no rust but I take preventative measures each winter. Your paint still looks amazing.'</li></ul> |
+
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("bhaskars113/toyota-corrosion")
+ # Run inference
+ preds = model("Our white Atlas CS has SHIT paint. It’s covered in rock chips and rust spots.")
+ ```
+
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median | Max |
+ |:-------------|:----|:-------|:----|
+ | Word count | 15 | 35.875 | 98 |
+
+ | Label | Training Sample Count |
+ |:------|:----------------------|
+ | 0 | 16 |
+ | 1 | 16 |
+
+ ### Training Hyperparameters
+ - batch_size: (8, 8)
+ - num_epochs: (1, 1)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - num_iterations: 20
+ - body_learning_rate: (2e-05, 2e-05)
+ - head_learning_rate: 2e-05
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - l2_weight: 0.01
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+
+ ### Training Results
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0063 | 1 | 0.2731 | - |
+ | 0.3125 | 50 | 0.1076 | - |
+ | 0.625 | 100 | 0.0002 | - |
+ | 0.9375 | 150 | 0.0 | - |
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - SetFit: 1.1.0
+ - Sentence Transformers: 3.2.1
+ - Transformers: 4.44.2
+ - PyTorch: 2.4.1+cu121
+ - Datasets: 3.0.1
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+     doi = {10.48550/ARXIV.2209.11055},
+     url = {https://arxiv.org/abs/2209.11055},
+     author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+     keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+     title = {Efficient Few-Shot Learning Without Prompts},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
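
Beyond the single-string call shown in the README's inference snippet, `SetFitModel` also accepts batches, and the LogisticRegression head can report class probabilities. A sketch (assuming `predict_proba` is available in the installed setfit release):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("bhaskars113/toyota-corrosion")

texts = [
    "Our white Atlas CS has SHIT paint. It’s covered in rock chips and rust spots.",
    "Mine still has no rust but I take preventative measures each winter.",
]
preds = model.predict(texts)        # hard labels, one per input text
probs = model.predict_proba(texts)  # per-class probabilities from the head
print(preds, probs)
```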
added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "_name_or_path": "dunzhang/stella_en_1.5B_v5",
+   "architectures": [
+     "Qwen2Model"
+   ],
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoModel": "dunzhang/stella_en_1.5B_v5--modeling_qwen.Qwen2Model",
+     "AutoModelForCausalLM": "dunzhang/stella_en_1.5B_v5--modeling_qwen.Qwen2ForCausalLM",
+     "AutoModelForSequenceClassification": "dunzhang/stella_en_1.5B_v5--modeling_qwen.Qwen2ForSequenceClassification"
+   },
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "hidden_act": "silu",
+   "hidden_size": 1536,
+   "initializer_range": 0.02,
+   "intermediate_size": 8960,
+   "max_position_embeddings": 131072,
+   "max_window_layers": 21,
+   "model_type": "qwen2",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 28,
+   "num_key_value_heads": 2,
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.2",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151646
+ }
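
Note the `auto_map` entries point at custom modeling code hosted in the `dunzhang/stella_en_1.5B_v5` repo, so loading the bare backbone through `transformers` requires opting into remote code. A sketch:

```python
from transformers import AutoModel

# trust_remote_code=True is needed because auto_map routes AutoModel to
# modeling_qwen.Qwen2Model hosted in the base model's repository.
backbone = AutoModel.from_pretrained("bhaskars113/toyota-corrosion", trust_remote_code=True)
```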
config_sentence_transformers.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.2.1",
+     "transformers": "4.44.2",
+     "pytorch": "2.4.1+cu121"
+   },
+   "prompts": {
+     "s2p_query": "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: ",
+     "s2s_query": "Instruct: Retrieve semantically similar text.\nQuery: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
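
`default_prompt_name` is null, so plain `encode()` calls add no instruction prefix; the `s2p_query` / `s2s_query` prompts are opt-in per call. A sketch of using them through sentence-transformers (assuming the body loads via `SentenceTransformer` with remote code enabled):

```python
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer("bhaskars113/toyota-corrosion", trust_remote_code=True)

# Documents: encoded without any prompt (default_prompt_name is null).
doc_emb = st_model.encode(["No rust, just a few rock chips on the hood."])
# Queries: opt into the s2p_query instruction prefix defined above.
query_emb = st_model.encode(["is rust common on this truck"], prompt_name="s2p_query")
```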
config_setfit.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "normalize_embeddings": false,
+   "labels": null
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da21efc28a3f707b833f2be7df2940f5fb09c89dabc32d5f6d84012eb5908b5e
+ size 4994887136
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ddcda8a569aee266a5cbae76ebed20453bc96365164ddc03374968132d0e4a13
+ size 1178224504
model.safetensors.index.json ADDED
@@ -0,0 +1,345 @@
+ {
+   "metadata": {
+     "total_size": 6173075456
+   },
+   "weight_map": {
+     "embed_tokens.weight": "model-00001-of-00002.safetensors",
+     "layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.13.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.13.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.14.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.14.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.14.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.15.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.15.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.15.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.16.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.16.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.16.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.17.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.17.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.17.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.18.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.18.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.18.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.19.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.19.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.19.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.2.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.2.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.2.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.20.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.20.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.20.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.21.input_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.21.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.21.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.21.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.21.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.22.input_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.22.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.22.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.22.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.22.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.22.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.22.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.22.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.22.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.22.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.22.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.22.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.23.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.23.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.23.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.23.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.23.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.23.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.23.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.23.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.24.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.24.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.24.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.25.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.25.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.25.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.26.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.26.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.26.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
+     "layers.27.self_attn.k_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.27.self_attn.q_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.27.self_attn.v_proj.bias": "model-00002-of-00002.safetensors",
+     "layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
+     "layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.3.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.3.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.3.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.4.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.4.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.4.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.5.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.5.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.5.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.6.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.6.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.6.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.7.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.7.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.7.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.8.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.8.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.8.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
+     "layers.9.self_attn.k_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.9.self_attn.q_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
+     "layers.9.self_attn.v_proj.bias": "model-00001-of-00002.safetensors",
+     "layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
+     "norm.weight": "model-00002-of-00002.safetensors"
+   }
+ }
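
The index file maps every tensor name to the shard that stores it (note layer 21 straddles the shard boundary: its attention and gate/up projections live in shard 1, the rest in shard 2). A sketch of resolving and lazily reading one tensor with `safetensors`:

```python
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "layers.27.mlp.down_proj.weight"
shard = index["weight_map"][name]  # -> "model-00002-of-00002.safetensors"
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)    # reads only this tensor from the shard
print(tensor.shape)
```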
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:789bac699cc5ffee8e212066ad7fdb91d8a8ce8383c1e0bfce4b65243f08a41c
+ size 9055
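
`model_head.pkl` holds the pickled scikit-learn LogisticRegression head; `SetFitModel.from_pretrained` loads it automatically, so unpickling by hand is only useful for inspection (a sketch; only unpickle files you trust):

```python
import pickle

with open("model_head.pkl", "rb") as f:
    head = pickle.load(f)  # expected: a fitted sklearn LogisticRegression
print(type(head), getattr(head, "classes_", None))
```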
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Dense",
+     "type": "sentence_transformers.models.Dense"
+   }
+ ]
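
`modules.json` wires the three stages together: the Qwen2 Transformer body at the repo root, the mean Pooling from `1_Pooling`, and the 1536→1024 Dense projection from `2_Dense`. sentence-transformers rebuilds that chain when loading; a sketch of inspecting it:

```python
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer("bhaskars113/toyota-corrosion", trust_remote_code=True)
for name, module in st_model.named_children():
    print(name, type(module).__name__)  # expected: 0 Transformer, 1 Pooling, 2 Dense
```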
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>"
+   ],
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "add_eos_token": true,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>"
+   ],
+   "auto_map": {
+     "AutoTokenizer": [
+       "dunzhang/stella_en_1.5B_v5--tokenization_qwen.Qwen2Tokenizer",
+       "dunzhang/stella_en_1.5B_v5--tokenization_qwen.Qwen2TokenizerFast"
+     ]
+   },
+   "bos_token": null,
+   "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "errors": "replace",
+   "model_max_length": 512,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff