---
task_categories:
- text-classification
- question-answering
language:
- fr
pretty_name: alloprof
size_categories:
- 1K<n<10K
configs:
- config_name: documents
  data_files: documents.json
- config_name: queries
  data_files: queries.json

---

This is a re-edition of the Alloprof dataset (which can be found here: https://huggingface.co/datasets/antoinelb7/alloprof).

For more information about the data source and its features, please refer to the original dataset card written by the authors, along with their paper, available here: https://arxiv.org/abs/2302.07738

This re-edition is a preprocessed version of the original, **in a more ready-to-use format**: the texts have been cleaned, and data that is not usable for retrieval has been discarded.

### Why a re-edition?

It was made for easier use in the MTEB benchmarking pipeline, in order to contribute to the MTEB leaderboard: https://huggingface.co/spaces/mteb/leaderboard.

For more information about the MTEB project, please refer to the associated paper: https://arxiv.org/pdf/2210.07316.pdf

### Usage

To use the dataset, specify the subset you want (`documents` or `queries`) when calling the `load_dataset()` method.
For example, to get the queries, use:
```py
from datasets import load_dataset
dataset = load_dataset("lyon-nlp/alloprof", "queries")
```
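
The document corpus can be loaded the same way by selecting the `documents` configuration. Below is a minimal sketch that prints the returned `DatasetDict`, so you can check the split and column names yourself rather than assuming them:
```py
from datasets import load_dataset

# Load the document corpus (the other available configuration)
documents = load_dataset("lyon-nlp/alloprof", "documents")

# Printing the DatasetDict shows the available splits, their column names,
# and the number of rows in each split
print(documents)
```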