SOUMYADEEPSAR committed
Commit 85318c5
1 Parent(s): b07ae5f

Add SetFit model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
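Editor's note: this pooling configuration corresponds to CLS-token pooling over 384-dimensional token embeddings. As a hedged sketch (not part of the commit, assuming the `sentence-transformers` package is installed), an equivalent Pooling module could be constructed like this:

```python
# Sketch: build a Pooling module matching 1_Pooling/config.json
# (CLS-token pooling over 384-dim token embeddings).
from sentence_transformers import models

pooling = models.Pooling(
    word_embedding_dimension=384,
    pooling_mode_cls_token=True,     # matches "pooling_mode_cls_token": true
    pooling_mode_mean_tokens=False,  # all other pooling modes are disabled
    pooling_mode_max_tokens=False,
)
print(pooling.get_pooling_mode_str())  # expected: "cls"
```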
README.md ADDED
@@ -0,0 +1,237 @@
+ ---
+ library_name: setfit
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ metrics:
+ - accuracy
+ widget:
+ - text: Now that the baffling, elongated, hyperreal coronation has occurred—no, not
+     that one—and Liz Truss has become Prime Minister, a degree of intervention and
+     action on energy bills has emerged, ahead of the looming socioeconomic catastrophe
+     facing the country this winter.
+ - text: But it needs to go much further.
+ - text: What could possibly go wrong?
+ - text: If you are White you might feel bad about hurting others or you might feel
+     afraid to lose this privilege….Overcoming White privilege is a job that must start
+     with the White community….
+ - text: '[JF: Obviously, immigration wasn’t stopped: the current population of the
+     United States is 329.5 million—it passed 300 million in 2006.'
+ pipeline_tag: text-classification
+ inference: true
+ model-index:
+ - name: SetFit
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: Unknown
+       type: unknown
+       split: test
+     metrics:
+     - type: accuracy
+       value: 0.7736625514403292
+       name: Accuracy
+ ---
+
+ # SetFit
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A LinearSVC instance is used for classification.
+
+ The model has been trained using an efficient few-shot learning technique that involves:
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
+ - **Classification head:** a LinearSVC instance
+ - **Maximum Sequence Length:** 512 tokens
+ - **Number of Classes:** 2 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ### Model Labels
+ | Label | Examples |
+ |:------|:---------|
+ | SUBJ  | <ul><li>'Now suppose that under stress of abnormal public revenue the structure of government is somewhat rationalized and that by such means as economy and efficiency the cost of government by measure is much reduced.'</li><li>'Modern Russia is a propaganda state, but not in the same way as the Soviet Union.'</li><li>'The spender of public money will never want followers.'</li></ul> |
+ | OBJ   | <ul><li>'But a top buying agent tells me that access to 13 can be gained if you know the right people.'</li><li>'“Normally, the majority opinion would speak for itself.” The decision is “really about policy—our state has values of inclusion and diversity.” The ruling is based “on policy, which is the definition of judicial activism.'</li><li>'asked American Federation of Teachers President Randi Weingarten.'</li></ul> |
+
+ ## Evaluation
+
+ ### Metrics
+ | Label   | Accuracy |
+ |:--------|:---------|
+ | **all** | 0.7737   |
+
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("SOUMYADEEPSAR/SetFit_SubjectivityDetection")
+ # Run inference
+ preds = model("What could possibly go wrong?")
+ ```
+
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median | Max |
+ |:-------------|:----|:-------|:----|
+ | Word count   | 3   | 22.085 | 77  |
+
+ | Label | Training Sample Count |
+ |:------|:----------------------|
+ | OBJ   | 100                   |
+ | SUBJ  | 100                   |
+
+ ### Training Hyperparameters
+ - batch_size: (32, 32)
+ - num_epochs: (3, 3)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - body_learning_rate: (2e-05, 1e-05)
+ - head_learning_rate: 0.01
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+
+ ### Training Results
+ | Epoch  | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0016 | 1    | 0.2686        | -               |
+ | 0.0791 | 50   | 0.2494        | -               |
+ | 0.1582 | 100  | 0.2639        | -               |
+ | 0.2373 | 150  | 0.2258        | -               |
+ | 0.3165 | 200  | 0.0176        | -               |
+ | 0.3956 | 250  | 0.0027        | -               |
+ | 0.4747 | 300  | 0.0017        | -               |
+ | 0.5538 | 350  | 0.0013        | -               |
+ | 0.6329 | 400  | 0.0016        | -               |
+ | 0.7120 | 450  | 0.001         | -               |
+ | 0.7911 | 500  | 0.0009        | -               |
+ | 0.8703 | 550  | 0.001         | -               |
+ | 0.9494 | 600  | 0.001         | -               |
+ | 1.0285 | 650  | 0.0009        | -               |
+ | 1.1076 | 700  | 0.0008        | -               |
+ | 1.1867 | 750  | 0.0008        | -               |
+ | 1.2658 | 800  | 0.0006        | -               |
+ | 1.3449 | 850  | 0.0007        | -               |
+ | 1.4241 | 900  | 0.0006        | -               |
+ | 1.5032 | 950  | 0.0007        | -               |
+ | 1.5823 | 1000 | 0.0006        | -               |
+ | 1.6614 | 1050 | 0.0005        | -               |
+ | 1.7405 | 1100 | 0.0006        | -               |
+ | 1.8196 | 1150 | 0.0007        | -               |
+ | 1.8987 | 1200 | 0.0005        | -               |
+ | 1.9778 | 1250 | 0.0006        | -               |
+ | 2.0570 | 1300 | 0.0005        | -               |
+ | 2.1361 | 1350 | 0.0005        | -               |
+ | 2.2152 | 1400 | 0.0004        | -               |
+ | 2.2943 | 1450 | 0.0005        | -               |
+ | 2.3734 | 1500 | 0.0004        | -               |
+ | 2.4525 | 1550 | 0.0004        | -               |
+ | 2.5316 | 1600 | 0.0004        | -               |
+ | 2.6108 | 1650 | 0.0004        | -               |
+ | 2.6899 | 1700 | 0.0005        | -               |
+ | 2.7690 | 1750 | 0.0005        | -               |
+ | 2.8481 | 1800 | 0.0004        | -               |
+ | 2.9272 | 1850 | 0.0005        | -               |
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - SetFit: 1.0.3
+ - Sentence Transformers: 2.4.0
+ - Transformers: 4.37.2
+ - PyTorch: 2.1.0+cu121
+ - Datasets: 2.17.1
+ - Tokenizers: 0.15.2
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+     doi = {10.48550/ARXIV.2209.11055},
+     url = {https://arxiv.org/abs/2209.11055},
+     author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+     keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+     title = {Efficient Few-Shot Learning Without Prompts},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
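Editor's note: the model card's Downstream Use section is left as a template placeholder. As a hedged illustration only (not part of this commit), fine-tuning this checkpoint on a custom subjectivity dataset with the SetFit 1.0.x API named in the Framework Versions above would look roughly like this; the two-example dataset below is a made-up stand-in:

```python
# Sketch only: fine-tuning this SetFit model on a custom dataset,
# assuming the setfit==1.0.x API listed in the model card.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy data; the actual run used 100 OBJ + 100 SUBJ sentences.
train_ds = Dataset.from_dict({
    "text": [
        "The spender of public money will never want followers.",
        "asked American Federation of Teachers President Randi Weingarten.",
    ],
    "label": ["SUBJ", "OBJ"],
})

model = SetFitModel.from_pretrained("SOUMYADEEPSAR/SetFit_SubjectivityDetection")

args = TrainingArguments(
    batch_size=32,           # mirrors the card's batch_size: (32, 32)
    num_epochs=3,            # mirrors num_epochs: (3, 3)
    body_learning_rate=2e-5, # mirrors body_learning_rate: (2e-05, 1e-05)
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```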
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "BAAI/bge-small-en-v1.5",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.37.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
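Editor's note: this is the standard Transformers config for the BAAI/bge-small-en-v1.5 BERT backbone used as the SetFit body. A hedged sketch of cross-checking its dimensions against the other configs in this commit:

```python
# Sketch: confirm the backbone named in config.json matches the pooling setup.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("BAAI/bge-small-en-v1.5")
assert cfg.hidden_size == 384               # == word_embedding_dimension in 1_Pooling/config.json
assert cfg.max_position_embeddings == 512   # == max_seq_length in sentence_bert_config.json
print(cfg.model_type, cfg.num_hidden_layers)  # "bert", 12
```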
config_sentence_transformers.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.2.2",
+     "transformers": "4.28.1",
+     "pytorch": "1.13.0+cu117"
+   },
+   "prompts": {},
+   "default_prompt_name": null
+ }
config_setfit.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "labels": [
+     "OBJ",
+     "SUBJ"
+   ],
+   "normalize_embeddings": false
+ }
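Editor's note: because the label names OBJ and SUBJ are stored here, predictions come back as these strings rather than integer ids. A minimal, hedged usage sketch (repository ID taken from the model card above; the exact outputs shown are illustrative):

```python
# Sketch: string labels from config_setfit.json are returned directly at inference.
from setfit import SetFitModel

model = SetFitModel.from_pretrained("SOUMYADEEPSAR/SetFit_SubjectivityDetection")
print(model.labels)  # ['OBJ', 'SUBJ']

preds = model.predict([
    "But it needs to go much further.",
    "asked American Federation of Teachers President Randi Weingarten.",
])
print(preds)  # e.g. ['SUBJ', 'OBJ'] -- label strings, not 0/1
```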
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5db57d93df4c231a01f5fbea4cd7cdfe55f7b2500396ca6473afacc0345a8d07
+ size 133462128
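Editor's note: what is committed here is a Git LFS pointer, not the weights themselves; `oid` is the SHA-256 of the real file and `size` its byte count. A hedged sketch of verifying a downloaded copy against this pointer (the local path is hypothetical):

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer above.
import hashlib
import os

path = "model.safetensors"  # hypothetical local path after `git lfs pull` or a hub download

assert os.path.getsize(path) == 133462128  # "size" field from the pointer

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
assert sha.hexdigest() == "5db57d93df4c231a01f5fbea4cd7cdfe55f7b2500396ca6473afacc0345a8d07"
```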
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e317176e43ebfd70ba3fef050063c1c550e0041daa6c951f535c5d03aad5a9dc
+ size 3803
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
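Editor's note: modules.json declares the embedding body as a three-stage sentence-transformers pipeline: Transformer encoder, then CLS pooling, then L2 normalization. A hedged sketch of assembling the equivalent pipeline by hand (the base checkpoint name is taken from config.json in this commit):

```python
# Sketch: the Transformer -> Pooling -> Normalize pipeline described in modules.json.
from sentence_transformers import SentenceTransformer, models

word = models.Transformer("BAAI/bge-small-en-v1.5", max_seq_length=512)
pooling = models.Pooling(
    word.get_word_embedding_dimension(),  # 384
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
normalize = models.Normalize()

body = SentenceTransformer(modules=[word, pooling, normalize])
emb = body.encode(["What could possibly go wrong?"])
print(emb.shape)  # expected (1, 384), with unit-normalized rows
```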
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff