PLB committed on
Commit 7f1b9eb
1 Parent(s): f8d0f1a

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +15 -180
README.md CHANGED
@@ -1,195 +1,30 @@
  ---
- library_name: setfit
- tags:
- - setfit
- - sentence-transformers
- - text-classification
- - generated_from_setfit_trainer
- base_model: sentence-transformers/paraphrase-mpnet-base-v2
- metrics:
- - accuracy
- widget:
- - text: What happens if I drive with low tire pressure?
- - text: How often should I rotate my tires?
- - text: How can I tell if my tire is properly balanced?
- - text: How does tire pressure affect handling and braking?
- pipeline_tag: text-classification
- inference: true
- model-index:
- - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
- results:
- - task:
- type: text-classification
- name: Text Classification
- dataset:
- name: Unknown
- type: unknown
- split: test
- metrics:
- - type: accuracy
- value: 1.0
- name: Accuracy
  ---

- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2

- This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

- The model has been trained using an efficient few-shot learning technique that involves:

- 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
- 2. Training a classification head with features from the fine-tuned Sentence Transformer.
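
Both steps correspond to a single `Trainer.train()` call in the SetFit library. A minimal, illustrative training sketch follows; the tiny inline dataset reuses example texts from this card and is an assumption, not the card's actual training data:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot training set, built from example texts shown in this card
train_dataset = Dataset.from_dict({
    "text": [
        "Can I use nitrogen instead of air to inflate my tires?",
        "Is it okay to slightly overinflate my tires?",
        "Can I mix different brands of tires on my vehicle?",
        "How can I extend the life of my tires?",
    ],
    "label": [True, True, False, False],
})

# trainer.train() performs both steps: contrastive fine-tuning of the embedding
# body, then fitting the LogisticRegression head on the resulting embeddings.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(num_epochs=1, num_iterations=20)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

preds = model.predict(["How does tire pressure affect handling and braking?"])
```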

- ## Model Details

- ### Model Description
- - **Model Type:** SetFit
- - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- - **Maximum Sequence Length:** 512 tokens
- - **Number of Classes:** 2 classes
- <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->

- ### Model Sources
-
- - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
-
- ### Model Labels
- | Label | Examples |
- |:------|:---------|
- | True  | <ul><li>'Can I use nitrogen instead of air to inflate my tires?'</li><li>'What should I do if my tire pressure warning light comes on?'</li><li>'Is it okay to slightly overinflate my tires?'</li></ul> |
- | False | <ul><li>'Can I mix different brands of tires on my vehicle?'</li><li>'How do I know if my tire has a slow leak?'</li><li>'How can I extend the life of my tires?'</li></ul> |
-
- ## Evaluation
-
- ### Metrics
- | Label | Accuracy |
- |:--------|:---------|
- | **all** | 1.0 |
-
- ## Uses
-
- ### Direct Use for Inference
-
- First install the SetFit library:
-
- ```bash
- pip install setfit
- ```
-
- Then you can load this model and run inference.
-
- ```python
- from setfit import SetFitModel
-
- # Download from the 🤗 Hub
- model = SetFitModel.from_pretrained("phospho-app/phospho-small-06b6145")
- # Run inference
- preds = model("How often should I rotate my tires?")
- ```
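
If probability scores are needed rather than hard labels, the loaded model also exposes `predict_proba`; a brief, illustrative follow-up to the snippet above:

```python
# Illustrative addition: class probabilities for each input, one row per
# sentence and one column per class (scikit-learn convention)
probs = model.predict_proba(["How often should I rotate my tires?"])
print(probs)
```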
-
- <!--
- ### Downstream Use
-
- *List how someone could finetune this model on their own dataset.*
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
-
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
-
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->
-
- ## Training Details
-
- ### Training Set Metrics
- | Training set | Min | Median | Max |
- |:-------------|:----|:-------|:----|
- | Word count | 7 | 10.25 | 13 |
-
- | Label | Training Sample Count |
- |:------|:----------------------|
- | False | 7 |
- | True | 9 |
-
- ### Training Hyperparameters
- - batch_size: (16, 16)
- - num_epochs: (1, 1)
- - max_steps: -1
- - sampling_strategy: oversampling
- - num_iterations: 20
- - body_learning_rate: (2e-05, 2e-05)
- - head_learning_rate: 2e-05
- - loss: CosineSimilarityLoss
- - distance_metric: cosine_distance
- - margin: 0.25
- - end_to_end: False
- - use_amp: False
- - warmup_proportion: 0.1
- - seed: 42
- - eval_max_steps: -1
- - load_best_model_at_end: False
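
These names mirror the fields of `setfit.TrainingArguments`. A sketch of how the listed values would be expressed in code (only values shown above are set; `distance_metric` and `margin` are omitted and left at library defaults, which match the card):

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Sketch: the card's hyperparameters as setfit TrainingArguments.
# Tuples give (embedding phase, classifier-head phase) values.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```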
-
- ### Training Results
- | Epoch | Step | Training Loss | Validation Loss |
- |:-----:|:----:|:-------------:|:---------------:|
- | 0.025 | 1 | 0.1766 | - |
-
- ### Framework Versions
- - Python: 3.11.9
- - SetFit: 1.0.3
- - Sentence Transformers: 2.7.0
- - Transformers: 4.40.1
- - PyTorch: 2.3.0+cu121
- - Datasets: 2.19.0
- - Tokenizers: 0.19.1
-
- ## Citation
-
- ### BibTeX
- ```bibtex
- @article{https://doi.org/10.48550/arxiv.2209.11055,
- doi = {10.48550/ARXIV.2209.11055},
- url = {https://arxiv.org/abs/2209.11055},
- author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
- keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
- title = {Efficient Few-Shot Learning Without Prompts},
- publisher = {arXiv},
- year = {2022},
- copyright = {Creative Commons Attribution 4.0 International}
- }
  ```

- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->

- <!--
- ## Model Card Authors

- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->

- <!--
- ## Model Card Contact

- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
 
  ---
+ language: en
+ license: apache-2.0
  ---

+ # phospho-small

+ This is a SetFit model that can be used for Text Classification on CPU.

+ The model has been trained using an efficient few-shot learning technique.

+ ## Usage

+ ```python
+ from setfit import SetFitModel

+ model = SetFitModel.from_pretrained("phospho-small-06b6145")

+ outputs = model.predict(["This is a sentence to classify", "Another sentence"])
+ # tensor([1, 0])
  ```
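
A note on the snippet above: the bare name passed to `from_pretrained` will normally resolve only to a local directory. When downloading from the Hugging Face Hub, the namespaced repository id shown in the previous version of this card is likely the intended target (an assumption, not stated in the new README):

```python
# Assumed Hub repository id, taken from the earlier version of this card
model = SetFitModel.from_pretrained("phospho-app/phospho-small-06b6145")
```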

+ ## References

+ This work was possible thanks to the SetFit library and the work of:

+ Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts.

+ ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)