---
license: cc-by-nc-4.0
language:
  - fr
tags:
  - medical
configs:
  - config_name: default
    data_files:
      - split: train
        path: finetuning/train-*
  - config_name: finetuning
    data_files:
      - split: train
        path: finetuning/*.parquet
  - config_name: instruction-tuning
    data_files:
      - split: train
        path: instruction-tuning/*.parquet
dataset_info:
  - config_name: finetuning
    features:
      - name: input
        dtype: string
      - name: source
        dtype: string
      - name: document_type
        dtype: string
    splits:
      - name: train
        num_examples: 905342
  - config_name: instruction-tuning
    features:
      - name: input
        dtype: string
      - name: instruction
        dtype: string
      - name: output
        dtype: string
      - name: source
        dtype: string
      - name: document_type
        dtype: string
    splits:
      - name: train
        num_examples: 22390
---

PARCOMED - PARTAGES Corpus of Open MEdical Documents

This document describes the first version of the research-only corpus.

Overview

The limited availability of French biomedical data remains a major challenge for improving the multilingual capabilities of large language models (LLMs) in the medical domain. We introduce and release the PARCOMED_research_only corpus, a collection of French biomedical texts compiled from a wide range of sources for research-only use.

While similar datasets have been released in the past couple of years (NACHOS from DrBERT, JARGON), ours is the result of closer scrutiny of the licensing terms of each source. The PARTAGES corpus is therefore fully compatible with research usage, and a version compatible with commercial usage is also distributed. Here, we present the released research-only version of the corpus.

Document types and data sources

The selected datasets for our corpus come from a variety of sources which can be categorized as follows:

Clinical

E3C: E3C corpus of clinical cases in French, used for training and evaluating medical models. Free-for-research license.

CAS: Corpus built from clinical cases reported in the French-language scientific literature, a subset of which is annotated. NACHOS versioning. Visible at https://huggingface.co/datasets/bigbio/cas/tree/main and available upon request to the author. Research-only license.

FRASIMED: Annotated corpus of synthetic clinical cases written in French. Available at https://zenodo.org/records/8355629. License CC-BY-4.0.

ESSAI: The ESSAI dataset, containing annotations of medical texts in French. Not available online but obtainable upon request. Research-only license.

Dialogue

PXCORPUS: French corpus of medical dialogues on prescriptions, transcribed and annotated. Available at https://doi.org/10.5281/zenodo.6482586. License CC-BY-4.0.

MQC: Annotated corpus of medical dialogues in French, simulating consultations between a doctor and a patient. Available at https://github.com/kleag/labforsims2-corpus. License CC-BY-NC-SA-4.0.

Education

CERIMES: Index of digital pedagogical resources offered by higher education institutions and research organizations in France. NACHOS versioning. Available at https://data.enseignementsup-recherche.gouv.fr/explore/dataset/fr_esr_ressources-pedagogiques/export/?flg=en-gb&refine.lom_lifecycle_contribute_entity_fn=CERIMES. License Etalab.

Encyclopedic

WIKIPEDIA: Corpus extracted from French Wikipedia, collected via the Python wikipediaapi package from medical, pharmaceutical and biological categories. License CC-BY-SA 3.0, GNU Free Documentation License.

Medical

ECDC_TM: Corpus of medical texts from the European Centre for Disease Prevention and Control (ECDC) for machine translation tasks. NACHOS versioning. Available at https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction. Free License.

Medicinal

EMEA_V3: Corpus of multilingual medical documents from the European Medicines Agency (EMEA), 3rd version. NACHOS versioning. Available at https://huggingface.co/datasets/qanastek/EMEA-V3. License CC-BY-4.0.

BDPM: Public database of medicines. NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/base-de-donnees-publique-des-medicaments-base-officielle/. License Etalab.

Question Answering

DEFT2021: Corpus from the DEFT challenge for three tasks: extraction of clinical profiles, evaluation of student responses and existing ratings. Available at https://huggingface.co/datasets/DrBenchmark/DEFT2021. License CC-BY-4.0.

FRENCHMEDMCQA (INSTRUCT): Francophone corpus of questions in the medical domain with 5 answer options (single or multiple choice) and their manual answer keys. Available at https://huggingface.co/datasets/qanastek/frenchmedmcqa. License Apache 2.0.

MEDIQAL (INSTRUCT): MediQAl is a French medical question answering dataset designed to evaluate the capabilities of language models in factual medical recall and clinical reasoning. Available at https://huggingface.co/datasets/ANR-MALADES/MediQAl. License CC-BY-4.0.

Regulation

QUALISCOPE: Data on the quality of healthcare establishments in France, extracted from Scope Santé. NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/base-sur-la-qualite-et-la-securite-des-soins-anciennement-scope-sante/. License Etalab.

CNEDIMTS: Dataset from a specialized commission of the HAS that evaluates individual medical devices and other diagnostic, therapeutic or assistive products (excluding medications), as well as associated services. NACHOS versioning. Available at https://www.data.gouv.fr/datasets/evaluation-des-dispositifs-medicaux/. License Etalab.

Scientific

WMT16: Biomedical variant of the WMT16 corpus built from PubMed scientific publications, containing multilingual data used for machine translation. Available at https://huggingface.co/datasets/qanastek/WMT-16-PubMed. License CC-BY-4.0.

HAL: Corpus extracted from the HAL platform, grouping French scientific publications in the biomedical domain. NACHOS versioning. Available via harvesting through the OAI API protocol at https://api.documentation-administrative.gouv.fr/oai. License Etalab.

HAS: Data from the French National Authority for Health (Haute Autorité de Santé). NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/textes-des-publications-de-la-has-7/. License Etalab.

QUAERO: Corpus of multilingual medical documents from MEDLINE titles and documents from the European Medicines Agency (EMEA-V3), used for training and evaluating models of automatic medical language processing. NACHOS versioning. Available at https://huggingface.co/datasets/DrBenchmark/QUAERO. License GNU Free Documentation License.

WMT18_MEDLINE: Corpus of biomedical texts from Medline, used in the context of the WMT18 challenge for automatic translation. NACHOS versioning. Available at https://www.statmt.org/wmt18/biomedical-translation-task.html. License CC BY-NC-SA 3.0, CC BY-NC-ND 4.0.

ISTEX: Corpus of scientific publications from the ISTEX platform, gathering French scientific literature. NACHOS versioning. Available at https://data.istex.fr/. License Etalab.

CLEAR: Corpus containing texts from 3 sources: encyclopedia, pharmaceutical notices and medical article abstracts. NACHOS versioning. Available at https://shs.hal.science/halshs-01968355. Research-only license.

MANTRA_GSC: Dataset extracted from biomedical corpora (Medline abstract titles, pharmaceutical notices, biomedical patents), with independent concept annotation according to a subset of the UMLS. NACHOS versioning. Available at https://huggingface.co/datasets/bigbio/mantra_gsc. License CC-BY-4.0.

Preprocessing steps

Text cleaning

All the documents were preprocessed using a pipeline inspired by FlauBERT (Le et al., 2020), including Unicode conversion and normalization, removal of characters outside standard French encoding, removal of multiple spaces, and removal of URLs.

In addition to this initial cleaning script, further filtering steps were applied because some documents included in the corpus lacked relevant content. Documents were retained only if they met criteria such as a minimum word count (5; a higher threshold would have been too restrictive for dialogues).
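For illustration, a minimal sketch of such a cleaning and filtering step is shown below. The exact script is not distributed with this card; the regular expressions, character whitelist and function names are our own assumptions.

import re
import unicodedata

URL_RE = re.compile(r"https?://\S+|www\.\S+")
# Illustrative whitelist: word characters (including accented French letters), whitespace and common punctuation.
ALLOWED_RE = re.compile(r"[^\w\s.,;:!?'\"()%°€«»/-]")

def clean_text(text, min_words=5):
    """Normalize a document and return None if it has fewer than min_words words."""
    text = unicodedata.normalize("NFKC", text)     # Unicode conversion and normalization
    text = URL_RE.sub(" ", text)                   # remove URLs
    text = ALLOWED_RE.sub(" ", text)               # drop characters outside the whitelist
    text = re.sub(r"\s+", " ", text).strip()       # collapse multiple spaces
    return text if len(text.split()) >= min_words else None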

De-duplication

To avoid overfitting on redundant samples in our dataset, we added a deduplication step during preprocessing. We used a standard method based on MinHash similarity, with a similarity threshold of 0.85 and the number of permutations set to 128.

This deduplication was applied when transferring the source datasets into the ready-to-use, unsourced corpus. Since some corpora intersect, the granularity of the source becomes less relevant: documents are compared across corpora rather than within each source.
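As an illustration, a deduplication pass of this kind could be written with the datasketch library using the stated parameters (threshold 0.85, 128 permutations). The actual implementation used for PARCOMED is not released, so the sketch below is only an approximation of the procedure.

from datasketch import MinHash, MinHashLSH

def minhash_signature(text, num_perm=128):
    """Build a MinHash signature from the set of lowercased words in a document."""
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m

def deduplicate(documents, threshold=0.85, num_perm=128):
    """Greedy inter-corpus deduplication: keep a document only if no kept document is a near-duplicate."""
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for i, doc in enumerate(documents):
        signature = minhash_signature(doc, num_perm)
        if not lsh.query(signature):       # no near-duplicate among already kept documents
            lsh.insert(f"doc-{i}", signature)
            kept.append(doc)
    return kept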

Features Scheme

| Column Name | Data Type | Description |
|---|---|---|
| instruction | string | Instruction-tuning only feature, corresponding to the system prompt for instruction-tuning samples. |
| input | string | Input text, regardless of the adaptation method (e.g., finetuning or instruction-tuning). For instruction-tuning, this is the "user prompt" or "question". |
| output | string | Instruction-tuning only feature; gold-standard output for supervised instruction-tuning. |
| source | string | Dataset name of the data sample. |
| document_type | string | Typology of document (e.g., Scientific, Encyclopedic, Clinical, Medication, Question-Answering, Dialogue, Regulation). |
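As an illustration of how these columns relate, the instruction-tuning features map naturally onto a chat-style prompt. The helper below is a hypothetical sketch, not a format prescribed by the dataset.

def to_chat_messages(sample):
    """Map one instruction-tuning row onto a generic chat message list."""
    return [
        {"role": "system", "content": sample["instruction"]},   # system prompt
        {"role": "user", "content": sample["input"]},            # user question
        {"role": "assistant", "content": sample["output"]},      # gold-standard answer
    ]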

Statistics

Document-type granularity

FINETUNING data

| Document type | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars |
|---|---|---|---|---|---|---|---|
| Total | 905342 | 9.00141e+08 | 994.255 | 6719.46 | 5.61243e+09 | 6199.24 | 41099.6 |
| Scientific | 640313 | 8.49585e+08 | 1326.83 | 7932.88 | 5.27754e+09 | 8242.13 | 48478.3 |
| Medicinal | 233960 | 2.44849e+07 | 104.654 | 647.2 | 1.63167e+08 | 697.415 | 4332.35 |
| Clinical | 16100 | 1.75665e+07 | 1091.08 | 1290.35 | 1.15255e+08 | 7158.72 | 8430.4 |
| Encyclopedic | 9957 | 6.53102e+06 | 655.923 | 1252.04 | 4.32721e+07 | 4345.89 | 8209.94 |
| Education | 22 | 1.71519e+06 | 77963.1 | 47413.5 | 1.16235e+07 | 528341 | 321525 |
| Question Answering | 275 | 111792 | 406.516 | 264.436 | 626549 | 2278.36 | 1402.57 |
| Regulation | 1111 | 70081 | 63.0792 | 54.7356 | 478447 | 430.645 | 365.089 |
| Medical | 2152 | 42460 | 19.7305 | 13.3516 | 280626 | 130.402 | 92.0109 |
| Dialogue | 1452 | 34044 | 23.4463 | 73.5192 | 188202 | 129.616 | 394.801 |

INSTRUCTION-TUNING data

| Document type | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars |
|---|---|---|---|---|---|---|---|
| Question Answering | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 |
| Total | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 |

Source-wise granularity

FINETUNING data

| Source | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars |
|---|---|---|---|---|---|---|---|
| Total | 905342 | 9.00141e+08 | 994.255 | 6719.46 | 5.61243e+09 | 6199.24 | 41099.6 |
| HAL | 26987 | 7.03474e+08 | 26067.1 | 26603.8 | 4.32567e+09 | 160287 | 160053 |
| HAS | 11334 | 9.61734e+07 | 8485.39 | 16098.9 | 6.20009e+08 | 54703.4 | 102858 |
| ISTEX | 12179 | 4.31384e+07 | 3542.03 | 2156.57 | 2.82624e+08 | 23205.9 | 14238.5 |
| BDPM | 11023 | 2.00358e+07 | 1817.63 | 2409.58 | 1.35081e+08 | 12254.5 | 16062.4 |
| E3C | 7499 | 1.58646e+07 | 2115.57 | 1222.36 | 1.0414e+08 | 13887.2 | 7923.95 |
| WIKIPEDIA | 9957 | 6.53102e+06 | 655.923 | 1252.04 | 4.32721e+07 | 4345.89 | 8209.94 |
| WMT16 | 587563 | 6.49552e+06 | 11.055 | 5.40785 | 4.73973e+07 | 80.6676 | 37.5056 |
| EMEA_V3 | 222937 | 4.44909e+06 | 19.9567 | 15.5252 | 2.80864e+07 | 125.984 | 99.953 |
| CERIMES | 22 | 1.71519e+06 | 77963.1 | 47413.5 | 1.16235e+07 | 528341 | 321525 |
| FRASIMED | 2048 | 1.3229e+06 | 645.945 | 333.9 | 8.73338e+06 | 4264.34 | 2207.72 |
| CAS | 712 | 232389 | 326.389 | 242.842 | 1.52772e+06 | 2145.68 | 1501.74 |
| CLEAR | 6 | 226123 | 37687.2 | 46388.3 | 1.36912e+06 | 228188 | 280743 |
| ESSAI | 5841 | 146530 | 25.0865 | 14.2491 | 854518 | 146.297 | 83.1409 |
| DEFT2021 | 275 | 111792 | 406.516 | 264.436 | 626549 | 2278.36 | 1402.57 |
| QUAERO | 2083 | 66877 | 32.1061 | 161.208 | 394933 | 189.598 | 905.512 |
| CNEDIMTS | 813 | 58345 | 71.7651 | 60.599 | 398478 | 490.133 | 403.23 |
| ECDC_TM | 2152 | 42460 | 19.7305 | 13.3516 | 280626 | 130.402 | 92.0109 |
| PXCORPUS | 1414 | 18372 | 12.9929 | 6.0802 | 103531 | 73.2185 | 33.7791 |
| MQC | 38 | 15672 | 412.421 | 223.131 | 84671 | 2228.18 | 1179.41 |
| QUALISCOPE | 298 | 11736 | 39.3826 | 19.5879 | 79969 | 268.352 | 131.707 |
| WMT18_MEDLINE | 49 | 7719 | 157.531 | 65.3727 | 51627 | 1053.61 | 416.966 |
| MANTRA_GSC | 112 | 3085 | 27.5446 | 39.6518 | 22356 | 199.607 | 306.097 |

INSTRUCTION-TUNING data

| Source | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars |
|---|---|---|---|---|---|---|---|
| Total | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 |
| MEDIQAL | 19907 | 1.6593e+06 | 83.3526 | 61.6255 | 1.09334e+07 | 549.225 | 386.325 |
| FRENCHMEDMCQA | 2483 | 124547 | 50.1599 | 19.6412 | 865475 | 348.56 | 126.799 |
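These statistics can be recomputed from the released parquet files. The pandas sketch below illustrates one way to do so for the instruction-tuning data; the aggregation choices (e.g., counting words only in the input column) are ours and may differ slightly from those used to produce the tables above.

import glob
import pandas as pd

# Concatenate the instruction-tuning parquet shards.
df = pd.concat(pd.read_parquet(path) for path in glob.glob("instruction-tuning/*.parquet"))
df["nb_words"] = df["input"].str.split().str.len()
df["nb_chars"] = df["input"].str.len()

# Per-source aggregates analogous to the table above.
stats = df.groupby("source").agg(
    nb_docs=("input", "size"),
    nb_words=("nb_words", "sum"),
    mean_words=("nb_words", "mean"),
    std_words=("nb_words", "std"),
    nb_chars=("nb_chars", "sum"),
    mean_chars=("nb_chars", "mean"),
    std_chars=("nb_chars", "std"),
)
print(stats)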

File Organization

PARCOMED_research_only/
├── finetuning/
│   ├── dataset1_part1.parquet
│   ├── dataset1_part2.parquet
│   └── ...
├── instruction-tuning/
│   ├── dataset2_part1.parquet
│   ├── dataset2_part2.parquet
│   └── ...
└── README.md

Usage

from datasets import load_dataset

data = load_dataset(
    "HealthDataHub/PARCOMED_research_only",
    split="train",
    data_dir="finetuning",  # or "instruction-tuning"
    download_mode="force_redownload",
    verification_mode="no_checks",
)
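Alternatively, the configurations declared in the card metadata above can be selected by name:

from datasets import load_dataset

# Load the instruction-tuning configuration declared in the metadata.
instruct = load_dataset("HealthDataHub/PARCOMED_research_only", "instruction-tuning", split="train")
print(instruct[0]["instruction"])
print(instruct[0]["input"])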

Contribution

This dataset was created thanks to the collaborative effort of the PARTAGES development teams, covering data identification, collection and licensing analysis. We acknowledge and thank all individuals and teams involved in its creation, in particular:

  • Armand VIOLLE, Stéphane OHAYON, Chaïma ABDELLAOUI and Xavier TANNIER from LIMICS (Sorbonne Université)
  • Aidan MANNION, Cécile MACAIRE, Didier SCHWAB, Lorraine GOEURIOT and François PORTET from LIG (Université Grenoble Alpes, CNRS, Grenoble INP)