This dataset was created by scraping data from the HAL platform.
Over 80,000 articles were scraped to keep their id, title and category.

It was originally used for the French version of [MTEB](https://github.com/embeddings-benchmark/mteb), but it can also be used for various clustering or classification tasks, or even to evaluate the general knowledge of a model.

⚠️ This dataset contains 2 subsets:
- ***"raw"*** subset: contains the data as originally scraped, without any cleaning. The titles are mostly in French, but some are in other languages (English, Italian, ...).
- ***"mteb_eval"*** subset: the subset used for the MTEB evaluation, a cleaned-up version of the raw dataset. Notably, samples were removed if:
  - their "domain" belonged to a minor class (fewer than 500 samples available)
  - their "title" was 2 words or fewer
  - the language was not French
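The cleaning rules above can be sketched as a plain-Python filter. This is an illustrative sketch only: the field names (`title`, `domain`, `lang`) and the language tag `"fr"` are assumptions for the example, not the dataset's documented schema.

```python
from collections import Counter

def filter_samples(records, min_class_size=500):
    """Illustrative sketch of the "mteb_eval" cleaning rules.

    Field names ("title", "domain", "lang") are assumed for this example.
    """
    # Count how many samples each domain has, to identify minor classes.
    domain_counts = Counter(r["domain"] for r in records)
    return [
        r for r in records
        if domain_counts[r["domain"]] >= min_class_size  # drop minor classes
        and len(r["title"].split()) > 2                  # drop titles of 2 words or fewer
        and r["lang"] == "fr"                            # keep French titles only
    ]
```

On a toy list of records with a lowered `min_class_size`, the filter keeps only French titles of more than 2 words whose domain is sufficiently represented.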
### Usage

To use this dataset, you can run the following code:

```py
from datasets import load_dataset

dataset = load_dataset("lyon-nlp/clustering-hal-s2s", name="mteb_eval", split="test")  # for the MTEB eval subset
```

### Citation