BartekSadlej committed
Commit 29652a4
1 Parent(s): 302af7f

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +229 -197
README.md CHANGED
@@ -1,199 +1,231 @@
  ---
- library_name: transformers
- tags: []
  ---
-
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
 
  ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - mteb
+ language:
+ - en
+ - pl
+ model-index:
+ - name: 2024-06-19_10-03-38
+   results:
+   - dataset:
+       config: pl
+       name: MTEB MassiveIntentClassification
+       revision: 4672e20407010da34463acc759c162ca9734bca6
+       split: test
+       type: mteb/amazon_massive_intent
+     metrics:
+     - type: accuracy
+       value: 0.11735036987222598
+     task:
+       type: Classification
+   - dataset:
+       config: pl
+       name: MTEB MassiveIntentClassification
+       revision: 4672e20407010da34463acc759c162ca9734bca6
+       split: validation
+       type: mteb/amazon_massive_intent
+     metrics:
+     - type: accuracy
+       value: 0.1175110673880964
+     task:
+       type: Classification
+   - dataset:
+       config: pl
+       name: MTEB MassiveScenarioClassification
+       revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
+       split: test
+       type: mteb/amazon_massive_scenario
+     metrics:
+     - type: accuracy
+       value: 0.20023537323470073
+     task:
+       type: Classification
+   - dataset:
+       config: pl
+       name: MTEB MassiveScenarioClassification
+       revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
+       split: validation
+       type: mteb/amazon_massive_scenario
+     metrics:
+     - type: accuracy
+       value: 0.1970486965076242
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB CBD
+       revision: 36ddb419bcffe6a5374c3891957912892916f28d
+       split: test
+       type: PL-MTEB/cbd
+     metrics:
+     - type: accuracy
+       value: 0.5475
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB PolEmo2.0-IN
+       revision: d90724373c70959f17d2331ad51fb60c71176b03
+       split: test
+       type: PL-MTEB/polemo2_in
+     metrics:
+     - type: accuracy
+       value: 0.3630193905817175
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB PolEmo2.0-OUT
+       revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
+       split: test
+       type: PL-MTEB/polemo2_out
+     metrics:
+     - type: accuracy
+       value: 0.2866396761133604
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB AllegroReviews
+       revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
+       split: test
+       type: PL-MTEB/allegro-reviews
+     metrics:
+     - type: accuracy
+       value: 0.2630218687872764
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB PAC
+       revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
+       split: test
+       type: laugustyniak/abusive-clauses-pl
+     metrics:
+     - type: accuracy
+       value: 0.5829423689545322
+     task:
+       type: Classification
+   - dataset:
+       config: default
+       name: MTEB EightTagsClustering
+       revision: 78b962b130c6690659c65abf67bf1c2f030606b6
+       split: test
+       type: PL-MTEB/8tags-clustering
+     metrics:
+     - type: v_measure
+       value: 0.12741289557382568
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB SICK-E-PL
+       revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
+       split: test
+       type: PL-MTEB/sicke-pl-pairclassification
+     metrics:
+     - type: ap
+       value: 0.5180729044483321
+     task:
+       type: PairClassification
+   - dataset:
+       config: default
+       name: MTEB CDSC-E
+       revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
+       split: test
+       type: PL-MTEB/cdsce-pairclassification
+     metrics:
+     - type: ap
+       value: 0.4375493105330498
+     task:
+       type: PairClassification
+   - dataset:
+       config: default
+       name: MTEB PSC
+       revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
+       split: test
+       type: PL-MTEB/psc-pairclassification
+     metrics:
+     - type: ap
+       value: 0.754642295153583
+     task:
+       type: PairClassification
+   - dataset:
+       config: pl
+       name: MTEB STS22
+       revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
+       split: test
+       type: mteb/sts22-crosslingual-sts
+     metrics:
+     - type: cosine_spearman
+       value: 0.12825302710239375
+     task:
+       type: STS
+   - dataset:
+       config: pl
+       name: MTEB STSBenchmarkMultilingualSTS
+       revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
+       split: dev
+       type: mteb/stsb_multi_mt
+     metrics:
+     - type: cosine_spearman
+       value: 0.4482502250845704
+     task:
+       type: STS
+   - dataset:
+       config: pl
+       name: MTEB STSBenchmarkMultilingualSTS
+       revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
+       split: test
+       type: mteb/stsb_multi_mt
+     metrics:
+     - type: cosine_spearman
+       value: 0.43991068385096105
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB SICK-R-PL
+       revision: fd5c2441b7eeff8676768036142af4cfa42c1339
+       split: test
+       type: PL-MTEB/sickr-pl-sts
+     metrics:
+     - type: cosine_spearman
+       value: 0.4167200279854063
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB CDSC-R
+       revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
+       split: test
+       type: PL-MTEB/cdscr-sts
+     metrics:
+     - type: cosine_spearman
+       value: 0.6206040112941377
+     task:
+       type: STS
+   - dataset:
+       config: default
+       name: MTEB PlscClusteringS2S
+       revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
+       split: test
+       type: PL-MTEB/plsc-clustering-s2s
+     metrics:
+     - type: v_measure
+       value: 0.27197701340559366
+     task:
+       type: Clustering
+   - dataset:
+       config: default
+       name: MTEB PlscClusteringP2P
+       revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
+       split: test
+       type: PL-MTEB/plsc-clustering-p2p
+     metrics:
+     - type: v_measure
+       value: 0.33147393108715384
+     task:
+       type: Clustering
  ---
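
The new card is metadata-only: it tags the model for sentence similarity, declares English and Polish language tags, and records MTEB scores (mostly Polish classification, clustering, pair-classification, and STS tasks) under `model-index`. As a minimal sketch of how a card like this is typically consumed, the snippet below loads the checkpoint with Sentence Transformers and re-scores one of the listed tasks with the `mteb` library. The repository id is a placeholder, the assumption that the checkpoint loads as a Sentence Transformers model is not confirmed by this diff, and the exact `mteb` API can vary between versions.

```python
# Hedged sketch, not part of this commit. Assumes the checkpoint loads via
# sentence-transformers; replace the placeholder repo id with the real one.
from mteb import MTEB
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-namespace/your-model-id")  # placeholder id

# Sentence-similarity usage implied by `pipeline_tag: sentence-similarity`.
emb = model.encode(["Ala ma kota.", "Kot należy do Ali."])  # Polish example pair
print(util.cos_sim(emb[0], emb[1]))

# Re-score one Polish STS task from the model-index (e.g. SICK-R-PL); mteb
# writes per-task JSON results corresponding to the `value` fields above.
evaluation = MTEB(tasks=["SICK-R-PL"])
evaluation.run(model, output_folder="results")
```

The `accuracy`, `ap`, `cosine_spearman`, and `v_measure` entries in the model-index are the raw 0-1 scores reported per task and split.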