Modalities: Text · Formats: parquet · Languages: Russian · Libraries: Datasets, pandas
Commit 41c2b25 (verified) · 1 parent: fe1bad5
Samoed committed: Add dataset card

Files changed (1):
  1. README.md (+19 −28)

README.md CHANGED
@@ -8,7 +8,10 @@ multilinguality: monolingual
 task_categories:
 - text-classification
 task_ids:
-- Sentiment/Hate speech
+- sentiment-analysis
+- sentiment-scoring
+- sentiment-classification
+- hate-speech-detection
 dataset_info:
   features:
   - name: text
@@ -40,7 +43,7 @@ tags:
 <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
 
 <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
-  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">RuToxicOKMLCUPMultilabelClassification</h1>
+  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">RuToxicOKMLCUPClassification</h1>
   <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
   <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
 </div>
@@ -61,7 +64,7 @@ You can evaluate an embedding model on this dataset using the following code:
 ```python
 import mteb
 
-task = mteb.get_tasks(["RuToxicOKMLCUPMultilabelClassification"])
+task = mteb.get_tasks(["RuToxicOKMLCUPClassification"])
 evaluator = mteb.MTEB(task)
 
 model = mteb.get_model(YOUR_MODEL)
@@ -108,7 +111,7 @@ The following code contains the descriptive statistics from the task. These can
 ```python
 import mteb
 
-task = mteb.get_task("RuToxicOKMLCUPMultilabelClassification")
+task = mteb.get_task("RuToxicOKMLCUPClassification")
 
 desc_stats = task.metadata.descriptive_stats
 ```
@@ -124,21 +127,15 @@ desc_stats = task.metadata.descriptive_stats
       "max_text_length": 790,
       "unique_texts": 2000,
       "min_labels_per_text": 1,
-      "average_label_per_text": 1.0885,
-      "max_labels_per_text": 3,
-      "unique_labels": 4,
+      "average_label_per_text": 1.0,
+      "max_labels_per_text": 1,
+      "unique_labels": 2,
       "labels": {
-        "1": {
-          "count": 1000
-        },
         "0": {
-          "count": 810
-        },
-        "3": {
-          "count": 275
+          "count": 1000
         },
-        "2": {
-          "count": 92
+        "1": {
+          "count": 1000
         }
       }
     },
@@ -151,21 +148,15 @@ desc_stats = task.metadata.descriptive_stats
       "max_text_length": 965,
      "unique_texts": 2000,
       "min_labels_per_text": 1,
-      "average_label_per_text": 1.093,
-      "max_labels_per_text": 3,
-      "unique_labels": 4,
+      "average_label_per_text": 1.0,
+      "max_labels_per_text": 1,
+      "unique_labels": 2,
       "labels": {
-        "1": {
-          "count": 1000
-        },
         "0": {
-          "count": 824
-        },
-        "3": {
-          "count": 260
+          "count": 1000
         },
-        "2": {
-          "count": 102
+        "1": {
+          "count": 1000
         }
       }
     }
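
The statistics edited above are internally consistent, which can be checked with a few lines of stdlib-only Python. The label counts are copied from the diff (old "-" side and new "+" side); the helper name `label_stats` and all variable names are illustrative, not part of the dataset card or the mteb API:

```python
def label_stats(label_counts: dict[str, int], num_texts: int) -> dict:
    """Derive the summary fields shown in descriptive_stats from raw label counts."""
    total_assignments = sum(label_counts.values())
    return {
        "unique_labels": len(label_counts),
        "average_label_per_text": total_assignments / num_texts,
    }

# New (+) side, test split: a balanced binary task over 2000 unique texts,
# so every text carries exactly one label.
new = label_stats({"0": 1000, "1": 1000}, num_texts=2000)
print(new)  # {'unique_labels': 2, 'average_label_per_text': 1.0}

# Old (-) side, test split: four labels, 1000 + 810 + 275 + 92 = 2177
# assignments over 2000 texts -> 1.0885, matching the removed multilabel stats.
old = label_stats({"1": 1000, "0": 810, "3": 275, "2": 92}, num_texts=2000)
print(old)  # {'unique_labels': 4, 'average_label_per_text': 1.0885}
```

The same arithmetic reproduces the old train-split value: (1000 + 824 + 260 + 102) / 2000 = 1.093, so the card's before-and-after numbers both follow directly from the per-label counts.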