---
license: apache-2.0
task_categories:
- text-classification
language:
- fr
size_categories:
- 10K<n<100K
---

## Clustering HAL

This dataset was built by scraping the HAL open archive platform.
Over 80,000 articles were scraped, keeping their id, title and category.

It was originally used for the French version of [MTEB](https://github.com/embeddings-benchmark/mteb), but it can also be used for various clustering or classification tasks.

### Usage

To load this dataset, run the following code:

```py
from datasets import load_dataset

# Download the dataset from the Hugging Face Hub
dataset = load_dataset("lyon-nlp/clustering-hal-s2s", "test")
```
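
Since each record keeps an article's title together with its category, a quick way to try the dataset is a small clustering run. Below is a minimal sketch using scikit-learn (TF-IDF vectors clustered with k-means); the column names `title` and `domain` are assumptions for illustration only, so check the dataset features for the actual names.

```py
# Minimal clustering sketch. The column names "title" and "domain" are
# assumptions; inspect dataset.features to find the actual names.
from datasets import load_dataset
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import v_measure_score

# Index the "test" split (adjust the key if the dataset layout differs)
dataset = load_dataset("lyon-nlp/clustering-hal-s2s", "test")["test"]

titles = dataset["title"]    # short texts: the article titles
labels = dataset["domain"]   # reference categories (scientific domains)

# Represent each title as a TF-IDF vector
vectorizer = TfidfVectorizer(max_features=20_000)
embeddings = vectorizer.fit_transform(titles)

# Cluster with k-means, using as many clusters as there are categories
kmeans = KMeans(n_clusters=len(set(labels)), random_state=0)
predictions = kmeans.fit_predict(embeddings)

# V-measure compares the predicted clusters against the reference categories
print(f"V-measure: {v_measure_score(labels, predictions):.3f}")
```

The same loop works with sentence embeddings instead of TF-IDF, which is closer to how MTEB evaluates clustering: swap the vectorizer for an embedding model and keep the k-means and V-measure steps unchanged.
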
### Citation

If you use this dataset in your work, please consider citing:

```
@misc{ciancone2024extending,
      title={Extending the Massive Text Embedding Benchmark to French},
      author={Mathieu Ciancone and Imene Kerboua and Marion Schaeffer and Wissam Siblini},
      year={2024},
      eprint={2405.20468},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```