Mistral 7B fine-tuned for Medical QA on the MedExpQA benchmark

We provide a Mistral 7B model fine-tuned on MedExpQA, the first multilingual benchmark for Medical QA that includes reference gold explanations.

The model was fine-tuned on the clinical case and question together with retrieval-augmented generation (RAG) context obtained automatically from the MedCorp corpus using the MedRAG method with 32 snippets. Given a multiple-choice exam question, the model generates a prediction of the correct answer. It has been evaluated in four languages: English, French, Italian, and Spanish.

For details about fine-tuning and evaluation, please check the paper; usage instructions are available in the repository.
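The snippet below is a minimal inference sketch, assuming the checkpoint is distributed as a PEFT adapter on top of mistralai/Mistral-7B-v0.1. The clinical case, question, and answer options in the prompt are placeholders; the exact prompt template (including how the 32 RAG snippets are inserted) is documented in the repository.

```python
# Minimal inference sketch (assumption: HiTZ/Mistral-7B-MedExpQA-EN is a PEFT
# adapter for mistralai/Mistral-7B-v0.1; the prompt format is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "HiTZ/Mistral-7B-MedExpQA-EN"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Load the fine-tuned adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Placeholder prompt: clinical case + question + options (+ RAG snippets).
prompt = (
    "Clinical case: A 65-year-old patient presents with ...\n"
    "Question: Which of the following is the most likely diagnosis?\n"
    "A) ...\nB) ...\nC) ...\nD) ...\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=16, do_sample=False)

# Decode only the newly generated tokens (the predicted answer).
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```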

Model Description

  • Developed by: Iñigo Alonso, Maite Oronoz, Rodrigo Agerri
  • Contact: Iñigo Alonso and Rodrigo Agerri
  • Website: https://univ-cotedazur.eu/antidote
  • Funding: CHIST-ERA XAI 2019 call. Antidote (PCI2020-120717-2) funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR
  • Model type: text-generation
  • Language(s) (NLP): English, Spanish, French, Italian
  • License: apache-2.0
  • Finetuned from model: mistralai/Mistral-7B-v0.1

Citation

If you use the MedExpQA data, please cite the following paper:

@misc{alonso2024medexpqa,
      title={MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering}, 
      author={Iñigo Alonso and Maite Oronoz and Rodrigo Agerri},
      year={2024},
      eprint={2404.05590},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}