---
dataset_info:
- config_name: corpus-en
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_bytes: 85781507
    num_examples: 90000
  download_size: 48916377
  dataset_size: 85781507
- config_name: corpus-ru
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_bytes: 150466041
    num_examples: 90000
  download_size: 71713875
  dataset_size: 150466041
- config_name: en
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: test
    num_bytes: 479912
    num_examples: 15000
  download_size: 190544
  dataset_size: 479912
- config_name: queries-en
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_bytes: 3124999
    num_examples: 3000
  download_size: 1758575
  dataset_size: 3124999
- config_name: queries-ru
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_bytes: 5550462
    num_examples: 3000
  download_size: 2606302
  dataset_size: 5550462
- config_name: ru
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: test
    num_bytes: 479912
    num_examples: 15000
  download_size: 190544
  dataset_size: 479912
configs:
- config_name: corpus-en
  data_files:
  - split: corpus
    path: corpus-en/corpus-*
- config_name: corpus-ru
  data_files:
  - split: corpus
    path: corpus-ru/corpus-*
- config_name: en
  data_files:
  - split: test
    path: en/test-*
- config_name: queries-en
  data_files:
  - split: queries
    path: queries-en/queries-*
- config_name: queries-ru
  data_files:
  - split: queries
    path: queries-ru/queries-*
- config_name: ru
  data_files:
  - split: test
    path: ru/test-*
language:
- ru
- en
tags:
- benchmark
- mteb
- retrieval
---
# RuSciBench Dataset Collection

This repository contains the datasets for the **RuSciBench** benchmark, designed for evaluating semantic vector representations of scientific texts in Russian and English.

## Dataset Description

**RuSciBench** is the first benchmark specifically targeting scientific documents in the Russian language, alongside their English counterparts (abstracts and titles). The data is sourced from [eLibrary.ru](https://www.elibrary.ru), the largest Russian electronic library of scientific publications, integrated with the Russian Science Citation Index (RSCI).

The dataset comprises approximately 182,000 scientific paper abstracts and titles. All papers included in the benchmark have open licenses.

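The `corpus-*`, `queries-*`, and relevance-judgement (`en`, `ru`) configs in this repository follow the usual BEIR-style retrieval layout: documents, queries, and `query-id`/`corpus-id`/`score` triples linking the two. As a minimal sketch of how the three parts join, using made-up records that only mirror the declared features:

```python
# Illustrative records mirroring the declared features (values are made up).
corpus = [
    {"_id": "d1", "title": "Paper A", "text": "Abstract of paper A."},
    {"_id": "d2", "title": "Paper B", "text": "Abstract of paper B."},
]
queries = [{"_id": "q1", "text": "Abstract used as a query."}]
# qrels: one row per (query, relevant document) pair; score marks relevance.
qrels = [{"query-id": "q1", "corpus-id": "d2", "score": 1}]

# Join qrels to documents to get the relevant text(s) for each query.
docs_by_id = {d["_id"]: d for d in corpus}
relevant = {
    r["query-id"]: docs_by_id[r["corpus-id"]]["text"]
    for r in qrels
    if r["score"] > 0
}
print(relevant)  # {'q1': 'Abstract of paper B.'}
```

Note that the real `en`/`ru` splits pair 3,000 queries with 15,000 relevance judgements, so a single query can have several relevant documents.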
## Tasks

The benchmark includes a variety of tasks grouped into Classification, Regression, and Retrieval categories, designed for both Russian and English texts based on paper abstracts.

### Classification Tasks

([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_mteb))

1. **Topic Classification (OECD):** Classify papers by the first two levels of the Organisation for Economic Co-operation and Development (OECD) rubricator (29 classes).
   * `RuSciBenchOecdRuClassification` (subset `oecd_ru`)
   * `RuSciBenchOecdEnClassification` (subset `oecd_en`)
2. **Topic Classification (GRNTI/SRSTI):** Classify papers by the first level of the State Rubricator of Scientific and Technical Information (GRNTI/SRSTI) (29 classes).
   * `RuSciBenchGrntiRuClassification` (subset `grnti_ru`)
   * `RuSciBenchGrntiEnClassification` (subset `grnti_en`)
3. **Core RSCI Affiliation:** Binary classification: determine whether a paper belongs to the core of the Russian Science Citation Index (RSCI).
   * `RuSciBenchCoreRiscRuClassification` (subset `corerisc_ru`)
   * `RuSciBenchCoreRiscEnClassification` (subset `corerisc_en`)
4. **Publication Type Classification:** Classify documents into types such as 'article', 'conference proceedings', and 'survey' (7 classes; a balanced subset is used).
   * `RuSciBenchPubTypesRuClassification` (subset `pub_type_ru`)
   * `RuSciBenchPubTypesEnClassification` (subset `pub_type_en`)

### Regression Tasks

([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_mteb))

1. **Year of Publication Prediction:** Predict the publication year of the paper.
   * `RuSciBenchYearPublRuRegression` (subset `yearpubl_ru`)
   * `RuSciBenchYearPublEnRegression` (subset `yearpubl_en`)
2. **Citation Count Prediction:** Predict the number of times a paper has been cited.
   * `RuSciBenchCitedCountRuRegression` (subset `cited_count_ru`)
   * `RuSciBenchCitedCountEnRegression` (subset `cited_count_en`)

### Retrieval Tasks

1. **Direct Citation Prediction:** Given a query paper abstract, retrieve from the corpus the abstracts of the papers it directly cites. Uses a standard retrieval setup (all non-positive documents count as negatives). ([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_cite_retrieval))
   * `RuSciBenchCiteRuRetrieval`
   * `RuSciBenchCiteEnRetrieval`
2. **Co-Citation Prediction:** Given a query paper abstract, retrieve the abstracts of papers co-cited with it (i.e., cited together with it by at least 5 common papers). Uses the same retrieval setup.
   * `RuSciBenchCociteRuRetrieval`
   * `RuSciBenchCociteEnRetrieval`
3. **Translation Search:** Given an abstract in one language (e.g., Russian), retrieve its translation (the same paper's abstract in the other language) from the corpus of abstracts in the target language. ([Dataset Link](https://huggingface.co/datasets/mlsa-iai-msu-lab/ru_sci_bench_translation_search))
   * `RuSciBenchTranslationSearchEnRetrieval` (Query: En, Corpus: Ru)
   * `RuSciBenchTranslationSearchRuRetrieval` (Query: Ru, Corpus: En)

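Each retrieval task reduces to nearest-neighbour search over embedding vectors: encode the queries and the corpus abstracts, then rank corpus entries by cosine similarity. A minimal sketch with random stand-in vectors (a real run would use a sentence-transformer's output instead):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for encoder output: 3 queries and 5 corpus documents, 8-dim each.
query_emb = rng.standard_normal((3, 8))
corpus_emb = rng.standard_normal((5, 8))

# L2-normalise so the dot product equals cosine similarity.
q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
c = corpus_emb / np.linalg.norm(corpus_emb, axis=1, keepdims=True)

scores = q @ c.T                       # (3, 5) query-document similarity matrix
ranking = np.argsort(-scores, axis=1)  # corpus indices per query, best first
print(ranking[:, 0])                   # top-1 corpus index for each query
```

In practice the MTEB harness below performs this encoding, ranking, and metric computation (e.g., nDCG@10) automatically.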
## Usage

These datasets are designed to be used with the MTEB library. **First, install the MTEB fork containing the RuSciBench tasks:**

```bash
pip install git+https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb
```

Then you can evaluate sentence-transformer models easily:

```python
from sentence_transformers import SentenceTransformer
from mteb import MTEB

# Example: evaluate on Russian GRNTI classification
model_name = "mlsa-iai-msu-lab/sci-rus-tiny3.1"  # or any other sentence transformer
model = SentenceTransformer(model_name)

evaluation = MTEB(tasks=["RuSciBenchGrntiRuClassification"])  # select tasks
results = evaluation.run(model, output_folder=f"results/{model_name.split('/')[-1]}")

print(results)
```

For more details on the benchmark, tasks, and baseline model evaluations, please refer to the associated paper and code repository.

* **Code Repository:** [https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb](https://github.com/mlsa-iai-msu-lab/ru_sci_bench_mteb)
* **Paper:** [https://doi.org/10.1134/S1064562424602191](https://doi.org/10.1134/S1064562424602191)

## Citation

If you use RuSciBench in your research, please cite the following paper:

```bibtex
@article{Vatolin2024,
  author  = {Vatolin, A. and Gerasimenko, N. and Ianina, A. and Vorontsov, K.},
  title   = {RuSciBench: Open Benchmark for Russian and English Scientific Document Representations},
  journal = {Doklady Mathematics},
  year    = {2024},
  volume  = {110},
  number  = {1},
  pages   = {S251--S260},
  month   = dec,
  doi     = {10.1134/S1064562424602191},
  url     = {https://doi.org/10.1134/S1064562424602191},
  issn    = {1531-8362}
}
```