krish2505 committed
Commit 3926dcd · verified · 1 Parent(s): 53533ca

Add SetFit model
1_Pooling/config.json ADDED
```json
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false
}
```
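
This pooling config selects mean-token pooling (`pooling_mode_mean_tokens: true`): token embeddings are averaged over real (non-padding) positions to produce one sentence vector. A minimal sketch of that operation, illustrative only and not the sentence-transformers implementation:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (seq_len, hidden) array of per-token vectors.
    attention_mask:   (seq_len,) array of 1s (real tokens) and 0s (padding).
    """
    mask = attention_mask[:, None].astype(token_embeddings.dtype)  # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)                 # sum over real tokens only
    count = np.clip(mask.sum(), 1e-9, None)                        # guard against empty mask
    return summed / count

# Toy example: 2 real tokens, 1 padding token, hidden size 4.
emb = np.array([[1.0, 2.0, 3.0, 4.0],
                [3.0, 4.0, 5.0, 6.0],
                [9.0, 9.0, 9.0, 9.0]])  # padding row must not affect the result
mask = np.array([1, 1, 0])
print(mean_pool(emb, mask))  # [2. 3. 4. 5.]
```

In the real model the hidden size is 768 (per `word_embedding_dimension` above) and this runs batched on GPU tensors.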
README.md ADDED
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'Please Find Enclosed The Press Release Titled ''Energy Transition Among The
    Top 3 Priorities For 73 Percent Of Companies: Infosys-HFS Research Study'''
- text: Financial Results For The Quarter Ended June 30, 2023, And Declaration Of
    Interim Dividend
- text: successfully started
- text: Board Meeting Intimation for Notice Of The Board Meeting Dt. August 03, 2023
- text: 'Board Meeting Intimation for Intimation Regarding Holding Of Meeting Of The
    Board Of Directors: - Un-Audited Financial Results For The Quarter Ended June
    30, 2023'
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/all-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.8807339449541285
      name: Accuracy
---

# SetFit with sentence-transformers/all-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for text classification. It uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model, with a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
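
The contrastive step works on *pairs* of training sentences: two sentences with the same label form a positive pair (target similarity 1.0), two with different labels a negative pair (target 0.0). A pure-Python sketch of that pair construction, illustrative only and not SetFit's actual sampler:

```python
import random
from itertools import combinations

def build_contrastive_pairs(texts, labels, negatives_per_positive=1, seed=42):
    """Build (text_a, text_b, target_similarity) pairs from labeled examples:
    same label -> 1.0, different label -> 0.0 (sampled)."""
    rng = random.Random(seed)
    pairs = [(texts[i], texts[j], 1.0)
             for i, j in combinations(range(len(texts)), 2)
             if labels[i] == labels[j]]
    negatives = [(i, j) for i, j in combinations(range(len(texts)), 2)
                 if labels[i] != labels[j]]
    # Keep the pair set roughly balanced instead of using every negative.
    for i, j in rng.sample(negatives,
                           min(len(negatives), len(pairs) * negatives_per_positive)):
        pairs.append((texts[i], texts[j], 0.0))
    return pairs

texts = ["Board Meeting Outcome", "Financial Results Q1",
         "Earnings Call FY23", "Board Meeting Notice"]
labels = [2, 6, 3, 2]
pairs = build_contrastive_pairs(texts, labels)
```

The Sentence Transformer body is then fine-tuned so that its embeddings reproduce these target similarities, which is what makes the approach work with few labeled examples per class.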

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 9 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 2 | <ul><li>'Board Meeting Outcome for Board Meeting - Unaudited Financial Results For The Quarter And Nine Months Ended December 31, 2022'</li><li>'Board Meeting Outcome for Outcome Of Board Meeting Held On 20Th July, 2023'</li><li>'Board Meeting Outcome for Financial Results For The Fourth Quarter (Q4) And Year Ended March 31, 2023 And Recommendation Of Dividend'</li></ul> |
| 6 | <ul><li>'Results - Financial Results For Quarter And Nine Months Ended December 31, 2022'</li><li>"Updated Independent Auditor'S Report On The Consolidated Financial Statements As At And For The Year Ended March 31, 2023, Prepared Under Indian Accounting Standards, Issued On April 13, 2023"</li><li>'Financial Results For The Quarter And Nine Month Period Ended December 31, 2022 And Declaration Of Third Interim Dividend'</li></ul> |
| 5 | <ul><li>'Regulation 30 Of The SEBI (Listing Obligations And Disclosure Requirements) Regulations 2015: Disclosure Of Change in Accounting Policies'</li><li>'Regulation 30 Of The SEBI (Listing Obligations And Disclosure Requirements) Regulations 2015: Disclosure Of Appointment of Key Managerial Personnel'</li><li>'Regulation 30 Of The SEBI (Listing Obligations And Disclosure Requirements) Regulations 2015: Disclosure Of Change in Listing Status'</li></ul> |
| 3 | <ul><li>'Earnings Call For Q1 And Half-Yearly Financial Results - FY 2023'</li><li>'Earnings Call Of ABC Holdings - Emerging Markets Perspective'</li><li>'Audio / Video Recording - Earnings Call - Technology and Innovation Highlights'</li></ul> |
| 0 | <ul><li>'Transcripts of Town Hall Meeting with Stakeholders'</li><li>'Clarification on Market Rumors Regarding Product Recall'</li><li>'Media Release By Reliance Jio Infocomm Limited'</li></ul> |
| 1 | <ul><li>"Order Passed By The Hon'Ble National Company Law Tribunal, Mumbai Bench, Sanctioning The Scheme Of Arrangement Between Reliance Projects & Property Management Services Limited And Its Shareholders And Creditors & Reliance Industries Limited And Its Shareholders And Creditors ('Scheme') - Further Update"</li><li>'Update To The Disclosure Dated August 23, 2023 On Investment By Qatar Holding LLC In Reliance Retail Ventures Limited, A Subsidiary Of The Company'</li><li>'Announcement under Regulation 30 (LODR)-Updates on Acquisition'</li></ul> |
| 7 | <ul><li>'Cloud For Organizational Growth And Transformation Is Three Times More Important Than Cloud For Cost Optimization: Infosys Research'</li><li>'Infosys Rated A Leader In Multicloud Managed Services Providers And Cloud Migration And Managed Service Partners By Independent Research Firm'</li><li>'Infosys Collaborates with Leading Universities for Research and Development'</li></ul> |
| 4 | <ul><li>'In accordance with SEBI (LODR) regulations an intimation has been officially conveyed regarding the record date for Shareholders and ESOP Holders of NNL following the approval of the Merger Scheme by the National Company Law Tribunal Chennai Bench.'</li><li>'An official announcement under SEBI (LODR) has been made declaring the notification of the record date for ESOP Holders and Shareholders post the successful completion of the Amalgamation between XYZ Systems Ltd and our Company.'</li><li>'Grant Of Stock Options Under The Employee Stock Option Scheme Of The Bank (ESOP Scheme).'</li></ul> |
| 8 | <ul><li>'Announcement under Regulation 30 (LODR)-Resignation of Head of Marketing'</li><li>'Resignation Of Shri Rajesh B. Ambani From The Board Of The Company - Disclosure Dated September 5'</li><li>'Announcement under Regulation 30 (LODR)-Resignation of Chief Operating Officer (COO)'</li></ul> |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.8807   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("krish2505/setfitmkrt")
# Run inference
preds = model("successfully started")
```

<!--
### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 1   | 15.0265 | 70  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 142                   |
| 1     | 130                   |
| 2     | 310                   |
| 3     | 61                    |
| 4     | 42                    |
| 5     | 61                    |
| 6     | 191                   |
| 7     | 6                     |
| 8     | 38                    |
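
Once the body is fine-tuned, step 2 fits the LogisticRegression head on the sentence embeddings of these labeled examples. A sketch with random stand-in vectors in place of real embeddings (the actual model uses 768-dimensional embeddings from the fine-tuned body):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in "embeddings": two well-separated clusters, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, scale=0.1, size=(20, 8)),   # class 0
               rng.normal(loc=1.0, scale=0.1, size=(20, 8))])  # class 1
y = np.array([0] * 20 + [1] * 20)

# The classification head named in the model description above.
head = LogisticRegression(max_iter=1000)
head.fit(X, y)
print(head.predict(X[:1]))  # [0]
```

Because the contrastive fine-tuning pulls same-class embeddings together, even a simple linear head like this separates the classes well.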

### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
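
The `loss: CosineSimilarityLoss` entry refers to sentence-transformers' pairwise objective: the mean squared error between the cosine similarity of two sentence embeddings and the pair's target similarity (1.0 for same-label pairs, 0.0 for different-label pairs). A numpy sketch of that objective, for illustration only:

```python
import numpy as np

def cosine_similarity_loss(emb_a: np.ndarray, emb_b: np.ndarray,
                           targets: np.ndarray) -> float:
    """MSE between cos(u, v) per pair and the pair's target similarity."""
    cos = (emb_a * emb_b).sum(axis=1) / (
        np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1)
    )
    return float(np.mean((cos - targets) ** 2))

# First pair: identical vectors, target 1.0. Second pair: orthogonal, target 0.0.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[1.0, 0.0], [1.0, 0.0]])
t = np.array([1.0, 0.0])
print(cosine_similarity_loss(a, b, t))  # 0.0 — both pairs already match their targets
```

During fine-tuning this loss is minimized by gradient descent over the transformer body, using the body_learning_rate listed above.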
168
+ ### Training Results
169
+ | Epoch | Step | Training Loss | Validation Loss |
170
+ |:------:|:----:|:-------------:|:---------------:|
171
+ | 0.0016 | 1 | 0.1833 | - |
172
+ | 0.0814 | 50 | 0.125 | - |
173
+ | 0.1629 | 100 | 0.0628 | - |
174
+ | 0.2443 | 150 | 0.0361 | - |
175
+ | 0.3257 | 200 | 0.0333 | - |
176
+ | 0.4072 | 250 | 0.0116 | - |
177
+ | 0.4886 | 300 | 0.0253 | - |
178
+ | 0.5700 | 350 | 0.0231 | - |
179
+ | 0.6515 | 400 | 0.0037 | - |
180
+ | 0.7329 | 450 | 0.0144 | - |
181
+ | 0.8143 | 500 | 0.0095 | - |
182
+ | 0.8958 | 550 | 0.0161 | - |
183
+ | 0.9772 | 600 | 0.0104 | - |
184
+ | 1.0586 | 650 | 0.0064 | - |
185
+ | 1.1401 | 700 | 0.0018 | - |
186
+ | 1.2215 | 750 | 0.0107 | - |
187
+ | 1.3029 | 800 | 0.0035 | - |
188
+ | 1.3844 | 850 | 0.0056 | - |
189
+ | 1.4658 | 900 | 0.0142 | - |
190
+ | 1.5472 | 950 | 0.014 | - |
191
+ | 1.6287 | 1000 | 0.0109 | - |
192
+ | 1.7101 | 1050 | 0.0252 | - |
193
+ | 1.7915 | 1100 | 0.0093 | - |
194
+ | 1.8730 | 1150 | 0.0048 | - |
195
+ | 1.9544 | 1200 | 0.0063 | - |
196
+

### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.2
- PyTorch: 2.0.0
- Datasets: 2.16.1
- Tokenizers: 0.15.0

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
```json
{
  "_name_or_path": "/root/.cache/torch/sentence_transformers/sentence-transformers_all-mpnet-base-v2/",
  "architectures": [
    "MPNetModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "mpnet",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "relative_attention_num_buckets": 32,
  "torch_dtype": "float32",
  "transformers_version": "4.36.2",
  "vocab_size": 30527
}
```
config_sentence_transformers.json ADDED
```json
{
  "__version__": {
    "sentence_transformers": "2.0.0",
    "transformers": "4.6.1",
    "pytorch": "1.8.1"
  }
}
```
config_setfit.json ADDED
```json
{
  "normalize_embeddings": false,
  "labels": null
}
```
model.safetensors ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:b4b7ef5ef69f21bb9f1d1d2024f198eb33d8b1ec150153f005740e07295d4263
size 437967672
```
model_head.pkl ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:53ac8e70397f75d5b7740f5abb1275b1b12664ad798d7817b8c0138855ae358c
size 56271
```
modules.json ADDED
```json
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
```
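
This modules.json chains three stages: Transformer (token embeddings) → Pooling (mean pooling, per the 1_Pooling config) → Normalize. The final Normalize stage is plain L2 normalization, which makes cosine similarity between embeddings reduce to a dot product. A minimal sketch:

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale each embedding to unit L2 norm (the `Normalize` module's job)."""
    norms = np.clip(np.linalg.norm(v, axis=-1, keepdims=True), 1e-12, None)
    return v / norms

emb = np.array([[3.0, 4.0]])  # toy 2-d "embedding" with norm 5
out = l2_normalize(emb)
print(out)                 # [[0.6 0.8]]
print(np.linalg.norm(out)) # 1.0
```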
sentence_bert_config.json ADDED
```json
{
  "max_seq_length": 384,
  "do_lower_case": false
}
```
special_tokens_map.json ADDED
```json
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "104": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "30526": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<s>",
  "do_lower_case": true,
  "eos_token": "</s>",
  "mask_token": "<mask>",
  "max_length": 128,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "<pad>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "</s>",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "MPNetTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
```
vocab.txt ADDED
The diff for this file is too large to render. See raw diff