tjmooney98 committed
Commit
ff3870b
1 Parent(s): 3996b2e

Push model using huggingface_hub.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 384,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
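This pooling configuration selects CLS-token pooling over the 384-dimensional token embeddings produced by the encoder. For orientation, a minimal sketch of the equivalent sentence-transformers module (the saved model builds it from this folder automatically):

```python
from sentence_transformers.models import Pooling

# Sketch: CLS-token pooling over 384-dim token embeddings, matching the config above.
pooling = Pooling(
    word_embedding_dimension=384,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
```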
README.md ADDED
@@ -0,0 +1,230 @@
+ ---
+ library_name: setfit
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ widget:
+ - text: ' I''ll pay $1,000 if anyone can find a published study that ChatGPT confirms
+ merely attempts to refute the OPV AIDS theory without desperately resorting to
+ a pathetic strawman.
+
+
+ '
+ - text: my disappointment is immeasurable and my day is ruined. any idea if they will
+ ever fix it or is it just permanent? i feel like just wow man just freaking wow
+ - text: The stuff chatgpt gives is entirely too scripted *and* impractical, which
+ is what I'm trying to avoid :/
+ - text: 'my experience with product product and brand: it''s amazing and not a bit
+ scary. despite the articles about product''s personality, my experience shows
+ the opposite: it''s useful, friendly, and truly amazing technology.'
+ - text: product is a massive energy hog. have a bunch of tabs open and your computer
+ will come to a crawl. also, ad blocking is terrible on product company ads) because
+ product apparently has a "whitelist" of ads that it refuses to be blocked. company
+ is way better
+ pipeline_tag: text-classification
+ inference: true
+ base_model: BAAI/bge-small-en-v1.5
+ model-index:
+ - name: SetFit with BAAI/bge-small-en-v1.5
+ results:
+ - task:
+ type: text-classification
+ name: Text Classification
+ dataset:
+ name: Unknown
+ type: unknown
+ split: test
+ metrics:
+ - type: accuracy
+ value: 0.5192307692307693
+ name: Accuracy
+ - type: f1
+ value:
+ - 0.2641509433962264
+ - 0.1553398058252427
+ - 0.6593406593406593
+ name: F1
+ - type: precision
+ value:
+ - 0.1590909090909091
+ - 0.09090909090909091
+ - 0.9375
+ name: Precision
+ - type: recall
+ value:
+ - 0.7777777777777778
+ - 0.5333333333333333
+ - 0.5084745762711864
+ name: Recall
+ ---
+
+ # SetFit with BAAI/bge-small-en-v1.5
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
+
+ The model has been trained using an efficient few-shot learning technique that involves:
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer.
+
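+ As a rough illustration of that two-part structure, the sketch below inspects the two components after loading; the `model_body` / `model_head` attribute names follow recent setfit releases and are an assumption, not something stated in this card.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Sketch: the fine-tuned Sentence Transformer body and the LogisticRegression head.
+ model = SetFitModel.from_pretrained("tjmooney98/725_test_model")
+ print(type(model.model_body).__name__)  # SentenceTransformer
+ print(type(model.model_head).__name__)  # LogisticRegression
+ ```
+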
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 512 tokens
+ - **Number of Classes:** 3 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ### Model Labels
+ | Label | Examples |
+ |:--------|:---------|
+ | peak | <ul><li>" I used Word on Microsoft 10 on my laptop to type up my manuscript, and when I uploaded it onto KDP, it was automatically formatted perfectly for Kindle E-book. I didn't need to make any adjustments (thankfully)."</li><li>'feeling myself getting obsessed with/addicted to ChatGPT and the entire generative AI universe and its evolution. \n\ndelightful to have another really big, seemingly biggest yet tech to go deep on and obsess over and think about implications of for the foreseeable future'</li><li>'1/2 obsidian translate amazing plugin currently in beta. it can translate text in to multiple languages using multiple services. i just hooked it up to a free product translation account, and i am stunned by its accuracy. tft'</li></ul> |
+ | pit | <ul><li>"Looks like I got a new Microsoft 365 update last night. Now when I go to Options or Print, I crash. It's happening on multiple files. Probably other issues too, but haven't experimented much beyond that. Windows 11 and, obviously, the most up-to-date PPT. Fortunately I don't need PowerPoint right now - except to answer questions here - so I guess I'll just stick it out to see what happens before I do a repair/reinstall. Update: Quick repair didn't work. Full repair that I believe is a full reinstall didn't work."</li><li>'my disappointment is immeasurable and my day is ruined. any idea if they will ever fix it or is it just permanent? i feel like just wow man just freaking wow'</li><li>'between 100 pages of the packet devoted to some crumbly looking old house and the powerpoint about the importance of the military industrial complex, this meeting has me feeling hostile.'</li></ul> |
+ | neither | <ul><li>" Elevate your game with these mind-blowing ChatGPT prompts! \n\nWhether you're diving into knowledge, refining your skills, or making decisions, let be your guide to excellence. \n\nReady to unlock the power of AI? \n\n "</li><li>"As an alternative you can always use Ask Sage ( Basically the gov version of ChatGPT and allowed to be used for CUI. It's what I use on NMCI and I've never had any problems!"</li><li>" I'll pay $1,000 if anyone can find a published study that ChatGPT confirms merely attempts to refute the OPV AIDS theory without desperately resorting to a pathetic strawman.\n\n"</li></ul> |
+
+ ## Evaluation
+
+ ### Metrics
+ | Label | Accuracy | F1 | Precision | Recall |
+ |:--------|:---------|:--------------------------------------------------------------|:---------------------------------------------------|:--------------------------------------------------------------|
+ | **all** | 0.5192 | [0.2641509433962264, 0.1553398058252427, 0.6593406593406593] | [0.1590909090909091, 0.09090909090909091, 0.9375] | [0.7777777777777778, 0.5333333333333333, 0.5084745762711864] |
+
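+ The bracketed values above are per-label scores (one entry per class). Below is a minimal sketch of how such per-class metrics could be recomputed with scikit-learn; the evaluation texts, gold labels, and label ordering are placeholders, since the card does not publish the test split.
+
+ ```python
+ from setfit import SetFitModel
+ from sklearn.metrics import accuracy_score, precision_recall_fscore_support
+
+ model = SetFitModel.from_pretrained("tjmooney98/725_test_model")
+
+ # Placeholder held-out examples and gold labels (not the real test set).
+ texts = [
+     "my disappointment is immeasurable and my day is ruined.",
+     "i just hooked it up and i am stunned by its accuracy.",
+     "Ready to unlock the power of AI?",
+ ]
+ y_true = ["pit", "peak", "neither"]
+
+ y_pred = model.predict(texts)
+ precision, recall, f1, _ = precision_recall_fscore_support(
+     y_true, y_pred, labels=["pit", "peak", "neither"], average=None
+ )
+ print(accuracy_score(y_true, y_pred), f1)
+ ```
+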
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("tjmooney98/725_test_model")
+ # Run inference
+ preds = model("The stuff chatgpt gives is entirely too scripted *and* impractical, which is what I'm trying to avoid :/")
+ ```
+
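+ The call returns a prediction from the label set in `config_setfit.json` (`pit`, `peak`, `neither`). If class probabilities from the LogisticRegression head are needed, something like the sketch below should work, reusing `model` from the snippet above; `predict_proba` behaviour is assumed from setfit 1.x, and the example texts are arbitrary.
+
+ ```python
+ # Sketch: batch prediction and (assumed) per-class probabilities.
+ preds = model.predict([
+     "my disappointment is immeasurable and my day is ruined.",
+     "it's useful, friendly, and truly amazing technology.",
+ ])
+ probs = model.predict_proba(["my disappointment is immeasurable and my day is ruined."])
+ print(preds, probs.shape)  # e.g. two labels, and a (1, 3) probability matrix
+ ```
+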
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median | Max |
+ |:-------------|:----|:--------|:----|
+ | Word count | 18 | 38.0667 | 91 |
+
+ | Label | Training Sample Count |
+ |:--------|:----------------------|
+ | pit | 5 |
+ | peak | 5 |
+ | neither | 5 |
+
+ ### Training Hyperparameters
+ - batch_size: (5, 5)
+ - num_epochs: (1, 1)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - body_learning_rate: (2e-05, 1e-05)
+ - head_learning_rate: 0.01
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+
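+ For reference, here is a hedged sketch of how a run with these hyperparameters could be launched through the setfit 1.x `Trainer` API; the 15-example dataset below is a placeholder rather than the data actually used for this model.
+
+ ```python
+ from datasets import Dataset
+ from setfit import SetFitModel, Trainer, TrainingArguments
+
+ # Placeholder 5-shot dataset with the three labels used by this model.
+ train_ds = Dataset.from_dict({
+     "text": ["placeholder example"] * 15,
+     "label": ["pit", "peak", "neither"] * 5,
+ })
+
+ model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5", labels=["pit", "peak", "neither"])
+ args = TrainingArguments(
+     batch_size=(5, 5),
+     num_epochs=(1, 1),
+     body_learning_rate=(2e-05, 1e-05),
+     head_learning_rate=0.01,
+     sampling_strategy="oversampling",
+     warmup_proportion=0.1,
+     seed=42,
+ )
+ trainer = Trainer(model=model, args=args, train_dataset=train_ds)
+ trainer.train()
+ ```
+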
+ ### Training Results
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0333 | 1 | 0.1809 | - |
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - SetFit: 1.0.3
+ - Sentence Transformers: 2.5.1
+ - Transformers: 4.38.1
+ - PyTorch: 2.1.0+cu121
+ - Datasets: 2.18.0
+ - Tokenizers: 0.15.2
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+ doi = {10.48550/ARXIV.2209.11055},
+ url = {https://arxiv.org/abs/2209.11055},
+ author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+ keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+ title = {Efficient Few-Shot Learning Without Prompts},
+ publisher = {arXiv},
+ year = {2022},
+ copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "_name_or_path": "BAAI/bge-small-en-v1.5",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 384,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 1536,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.38.1",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
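The backbone described here is the standard BAAI/bge-small-en-v1.5 BERT encoder (hidden size 384, 12 layers, 12 attention heads). For orientation only, a small sketch of reading those values back with transformers:

```python
from transformers import AutoConfig

# Sketch: inspect the encoder dimensions listed in config.json above.
cfg = AutoConfig.from_pretrained("BAAI/bge-small-en-v1.5")
print(cfg.hidden_size, cfg.num_hidden_layers, cfg.num_attention_heads)  # 384 12 12
```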
config_sentence_transformers.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "__version__": {
+ "sentence_transformers": "2.2.2",
+ "transformers": "4.28.1",
+ "pytorch": "1.13.0+cu117"
+ },
+ "prompts": {},
+ "default_prompt_name": null
+ }
config_setfit.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "normalize_embeddings": false,
+ "labels": [
+ "pit",
+ "peak",
+ "neither"
+ ]
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a38d597ca68aa11e5b88671d552e958a1adf8c2f5cb14741b26dab7428ecaa0
+ size 133462128
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:344449fe0e7ac1d98b47a32bf0d9c3e21c0a3a4fc6f528cfa6ca72016a7c812e
+ size 10111
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
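modules.json lists the embedding pipeline as a Transformer encoder followed by CLS pooling (1_Pooling) and L2 normalization (2_Normalize). As a sketch, the body can also be loaded on its own with sentence-transformers, assuming the repository resolves as a plain Sentence Transformer model (all three module folders are present):

```python
from sentence_transformers import SentenceTransformer

# Sketch: load only the embedding body; the three modules above are assembled from modules.json.
body = SentenceTransformer("tjmooney98/725_test_model")
emb = body.encode(["an example sentence"])
print(emb.shape)  # (1, 384); vectors come out L2-normalized from the final Normalize module
```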
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff