anismahmahi committed
Commit e726dca
1 Parent(s): 0adcd13

Add SetFit model

1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false
+ }
README.md ADDED
@@ -0,0 +1,236 @@
+ ---
+ library_name: setfit
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ metrics:
+ - f1
+ widget:
+ - text: This also goes for bigger issues like foreign policy as well; multiple full-scale
+     invasions of Syria have been prevented because of information that the alternative
+     media made viral.
+ - text: 'Yesterday’s State of the Union address issued by Donald Trump represented
+     a refreshing break from the eight years of pusillanimous foreign policies pursued
+     by past administration.
+
+     '
+ - text: There are 2 trillion Google searches per day.
+ - text: Westerville Officers Eric Joering, 39, and Anthony Morelli, 54, were killed
+     shortly after noon Saturday in this normally quiet suburb while responding to
+     a 911 hang-up call.
+ - text: 'Trump was right, Acosta is a "rude, terrible person."
+
+     '
+ pipeline_tag: text-classification
+ inference: true
+ model-index:
+ - name: SetFit
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: Unknown
+       type: unknown
+     split: test
+     metrics:
+     - type: f1
+       value: 0.3371824480369515
+       name: F1
+ ---
+
+ # SetFit
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
+
+ The model has been trained using an efficient few-shot learning technique that involves:
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer.
+
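+ As a rough illustration of these two stages at inference time, the sketch below runs the embedding body and the classification head separately. The `model_body` and `model_head` attribute names come from the SetFit library, not from this card, so treat this as an assumption about the SetFit API rather than canonical usage.
+
+ ```python
+ from setfit import SetFitModel
+
+ model = SetFitModel.from_pretrained("anismahmahi/Roberta-large-G3-setfit-model")
+
+ # Stage 1: the fine-tuned Sentence Transformer maps text to a fixed-size embedding.
+ embeddings = model.model_body.encode(["There are 2 trillion Google searches per day."])
+
+ # Stage 2: the LogisticRegression head maps the embedding to a class label (0 or 1).
+ print(model.model_head.predict(embeddings))
+ ```
+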
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 256 tokens
+ - **Number of Classes:** 2 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ### Model Labels
+ | Label | Examples |
+ |:------|:---------|
+ | 0.0   | <ul><li>'Pamela Geller and Robert Spencer co-founded anti-Muslim group Stop Islamization of America.\n'</li><li>'He added: "We condemn all those whose behaviours and views run counter to our shared values and will not stand for extremism in any form."\n'</li><li>'Ms Geller, of the Atlas Shrugs blog, and Mr Spencer, of Jihad Watch, are also co-founders of the American Freedom Defense Initiative, best known for a pro-Israel "Defeat Jihad" poster campaign on the New York subway.\n'</li></ul> |
+ | 1.0   | <ul><li>'On both of their blogs the pair called their bans from entering the UK "a striking blow against freedom" and said the "the nation that gave the world the Magna Carta is dead".\n'</li><li>'A researcher with the organisation, Matthew Collins, said it was "delighted" with the decision.\n'</li><li>'Lead attorney Matt Gonzalez has argued that the weapon was a SIG Sauer with a "hair trigger in single-action mode" — a model well-known for accidental discharges even among experienced shooters.\n'</li></ul> |
+
+ ## Evaluation
+
+ ### Metrics
+ | Label   | F1     |
+ |:--------|:-------|
+ | **all** | 0.3372 |
+
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("anismahmahi/Roberta-large-G3-setfit-model")
+ # Run inference
+ preds = model("There are 2 trillion Google searches per day.")
+ ```
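+
+ For batch prediction or class probabilities, the following is a minimal sketch using the same model object; the example texts are taken from the widget examples above, and `predict` / `predict_proba` are standard `SetFitModel` methods.
+
+ ```python
+ texts = [
+     "There are 2 trillion Google searches per day.",
+     'Trump was right, Acosta is a "rude, terrible person."',
+ ]
+ labels = model.predict(texts)         # one 0/1 label per input text
+ probas = model.predict_proba(texts)   # per-class scores from the LogisticRegression head
+ ```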
+
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median  | Max |
+ |:-------------|:----|:--------|:----|
+ | Word count   | 1   | 26.8625 | 105 |
+
+ | Label | Training Sample Count |
+ |:------|:----------------------|
+ | 0     | 200                   |
+ | 1     | 200                   |
+
+ ### Training Hyperparameters
+ - batch_size: (8, 8)
+ - num_epochs: (3, 3)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - num_iterations: 5
+ - body_learning_rate: (2e-05, 1e-05)
+ - head_learning_rate: 0.01
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: True
+
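+ For reference, a training call with hyperparameters like those listed above would roughly look like the sketch below. It is not the original training script: this card does not name the base Sentence Transformer checkpoint, so the one shown is only a placeholder, and the toy dataset stands in for the real training data.
+
+ ```python
+ from datasets import Dataset
+ from setfit import SetFitModel, Trainer, TrainingArguments
+
+ # Placeholder base checkpoint and toy data; the card does not specify the actual ones.
+ model = SetFitModel.from_pretrained("sentence-transformers/all-roberta-large-v1")
+ train_dataset = Dataset.from_dict({"text": ["example sentence one", "example sentence two"], "label": [0, 1]})
+
+ args = TrainingArguments(
+     batch_size=(8, 8),                 # (embedding phase, classifier phase)
+     num_epochs=(3, 3),
+     sampling_strategy="oversampling",
+     num_iterations=5,
+     body_learning_rate=(2e-05, 1e-05),
+     head_learning_rate=0.01,
+     warmup_proportion=0.1,
+     seed=42,
+ )
+
+ trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
+ trainer.train()
+ ```
+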
+ ### Training Results
+ | Epoch   | Step     | Training Loss | Validation Loss |
+ |:-------:|:--------:|:-------------:|:---------------:|
+ | 0.002   | 1        | 0.3467        | -               |
+ | 0.1     | 50       | 0.2333        | -               |
+ | 0.2     | 100      | 0.237         | -               |
+ | 0.3     | 150      | 0.2466        | -               |
+ | 0.4     | 200      | 0.208         | -               |
+ | 0.5     | 250      | 0.2121        | -               |
+ | 0.6     | 300      | 0.0076        | -               |
+ | 0.7     | 350      | 0.0011        | -               |
+ | 0.8     | 400      | 0.0007        | -               |
+ | 0.9     | 450      | 0.0002        | -               |
+ | 1.0     | 500      | 0.0015        | 0.3342          |
+ | 1.1     | 550      | 0.0001        | -               |
+ | 1.2     | 600      | 0.0002        | -               |
+ | 1.3     | 650      | 0.0003        | -               |
+ | 1.4     | 700      | 0.0003        | -               |
+ | 1.5     | 750      | 0.0002        | -               |
+ | 1.6     | 800      | 0.0002        | -               |
+ | 1.7     | 850      | 0.0001        | -               |
+ | 1.8     | 900      | 0.0001        | -               |
+ | 1.9     | 950      | 0.0001        | -               |
+ | **2.0** | **1000** | **0.0001**    | **0.3303**      |
+ | 2.1     | 1050     | 0.0           | -               |
+ | 2.2     | 1100     | 0.0           | -               |
+ | 2.3     | 1150     | 0.0001        | -               |
+ | 2.4     | 1200     | 0.0           | -               |
+ | 2.5     | 1250     | 0.0           | -               |
+ | 2.6     | 1300     | 0.0           | -               |
+ | 2.7     | 1350     | 0.0001        | -               |
+ | 2.8     | 1400     | 0.0001        | -               |
+ | 2.9     | 1450     | 0.0           | -               |
+ | 3.0     | 1500     | 0.0           | 0.3327          |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - SetFit: 1.0.2
+ - Sentence Transformers: 2.2.2
+ - Transformers: 4.35.2
+ - PyTorch: 2.1.0+cu121
+ - Datasets: 2.16.1
+ - Tokenizers: 0.15.0
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+     doi = {10.48550/ARXIV.2209.11055},
+     url = {https://arxiv.org/abs/2209.11055},
+     author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+     keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+     title = {Efficient Few-Shot Learning Without Prompts},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "checkpoints/step_1000/",
+   "architectures": [
+     "RobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.35.2",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50265
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.0.0",
+     "transformers": "4.6.1",
+     "pytorch": "1.8.1"
+   }
+ }
config_setfit.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "normalize_embeddings": false,
+   "labels": [
+     0,
+     1
+   ]
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01f75b2de7dbbd19d624a3b435928112504357dd0e5b0b7eb052e1a7304d7c13
+ size 1421483904
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f52b17d700321e64c7dddd510d9d9021e6fabacdd4fccd638e615ed0e3873a6d
+ size 9023
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 256,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,64 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50264": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "mask_token": "<mask>",
+   "max_length": 128,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "<pad>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "</s>",
+   "stride": 0,
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "<unk>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff