---
pretty_name: BioBERT-ITA
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 27319024484
    num_examples: 17203146
  download_size: 14945984639
  dataset_size: 27319024484
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- it
tags:
- medical
- biology
size_categories:
- 1B<n<10B
---

From this repository you can download the **BioBERT_Italian** dataset.

**BioBERT_Italian** is the Italian translation of the original BioBERT dataset, composed of millions of PubMed paper abstracts.

Since no Italian equivalent exists for the millions of abstracts and full-text scientific papers used by English BERT-based biomedical models, we leveraged machine translation to obtain an Italian biomedical corpus based on PubMed abstracts and used it to train [**BioBIT**](https://www.sciencedirect.com/science/article/pii/S1532046423001521).
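
Since the download is roughly 15 GB, the corpus is usually best accessed in streaming mode with the 🤗 `datasets` library. Below is a minimal sketch; the repo id `IVN-RIN/BioBERT_Italian` is an assumption inferred from this card, and the network call is left commented out so the snippet stays self-contained:

```python
from itertools import islice

def peek(stream, n=3, field="text"):
    """Return the given field from the first n examples of a (possibly streaming) dataset."""
    return [ex[field] for ex in islice(stream, n)]

# Real usage (requires network; the repo id is an assumption, adjust as needed):
# from datasets import load_dataset
# ds = load_dataset("IVN-RIN/BioBERT_Italian", split="train", streaming=True)
# print(peek(ds))

# Offline demo with the same {"text": ...} schema as this dataset:
sample = [{"text": "Abstract uno."}, {"text": "Abstract due."}, {"text": "Abstract tre."}]
print(peek(sample, 2))
```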

Corpus statistics:
- Total tokens^: 6.2 billion
- Average tokens per example: 359
- Max tokens per example: 2132
- Min tokens per example: 5
- Standard deviation: 137

^Tokenized with the [**BioBIT**](https://huggingface.co/IVN-RIN/bioBIT) tokenizer
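
The averages above can be sanity-checked against the dataset metadata (a quick sketch; 6.2 billion is a rounded figure, so the result differs slightly from the reported 359):

```python
# Cross-check of the corpus statistics reported above, using the figures
# from this card and from the dataset_info metadata.
total_tokens = 6.2e9            # "Total tokens: 6.2 billion" (rounded figure)
total_bytes = 27_319_024_484    # dataset_size from the metadata
num_examples = 17_203_146       # num_examples from the metadata

avg_tokens = total_tokens / num_examples
avg_bytes = total_bytes / num_examples
print(round(avg_tokens))  # ~360, consistent with the reported average of 359
print(round(avg_bytes))   # ~1588 bytes of raw text per example
```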


**BioBIT Model**

[**BioBIT**](https://www.sciencedirect.com/science/article/pii/S1532046423001521) has been evaluated on 3 downstream tasks: **NER** (Named Entity Recognition), extractive **QA** (Question Answering), and **RE** (Relation Extraction).
Here are the results, summarized:
- NER:
  - [BC2GM](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb32) = 82.14%
  - [BC4CHEMD](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb35) = 80.70%
  - [BC5CDR(CDR)](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb31) = 82.15%
  - [BC5CDR(DNER)](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb31) = 76.27%
  - [NCBI_DISEASE](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb33) = 65.06%
  - [SPECIES-800](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb34) = 61.86%
- QA:
  - [BioASQ 4b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 68.49%
  - [BioASQ 5b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 78.33%
  - [BioASQ 6b](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb30) = 75.73%
- RE:
  - [CHEMPROT](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb36) = 38.16%
  - [BioRED](http://refhub.elsevier.com/S1532-0464(23)00152-1/sb37) = 67.15%

**MedPsyNIT Model**

We also [**fine-tuned BioBIT**](https://www.sciencedirect.com/science/article/pii/S1532046423002782) on [**PsyNIT**](https://huggingface.co/datasets/IVN-RIN/PsyNIT) (Psychiatric NER for ITalian), a native Italian **NER** (Named Entity Recognition) dataset created by the [Italian Research Hospital Centro San Giovanni Di Dio Fatebenefratelli](https://www.fatebenefratelli.it/strutture/irccs-brescia).

**Correspondence to**

Claudio Crema (ccrema@fatebenefratelli.eu), Tommaso Mario Buonocore (tommaso.buonocore@unipv.it)

**Citation**

    @article{BUONOCORE2023104431,
      title = {Localizing in-domain adaptation of transformer-based biomedical language models},
      journal = {Journal of Biomedical Informatics},
      volume = {144},
      pages = {104431},
      year = {2023},
      issn = {1532-0464},
      doi = {10.1016/j.jbi.2023.104431},
      url = {https://www.sciencedirect.com/science/article/pii/S1532046423001521},
      author = {Tommaso Mario Buonocore and Claudio Crema and Alberto Redolfi and Riccardo Bellazzi and Enea Parimbelli},
      keywords = {Natural language processing, Deep learning, Language model, Biomedical text mining, Transformer}
    }

    @article{CREMA2023104557,
      title = {Advancing Italian biomedical information extraction with transformers-based models: Methodological insights and multicenter practical application},
      journal = {Journal of Biomedical Informatics},
      volume = {148},
      pages = {104557},
      year = {2023},
      issn = {1532-0464},
      doi = {10.1016/j.jbi.2023.104557},
      url = {https://www.sciencedirect.com/science/article/pii/S1532046423002782},
      author = {Claudio Crema and Tommaso Mario Buonocore and Silvia Fostinelli and Enea Parimbelli and Federico Verde and Cira Fundarò and Marina Manera and Matteo Cotta Ramusino and Marco Capelli and Alfredo Costa and Giuliano Binetti and Riccardo Bellazzi and Alberto Redolfi},
      keywords = {Natural language processing, Deep learning, Biomedical text mining, Language model, Transformer}
    }