---
license: apache-2.0
---

# Multilingual Medicine: Model, Dataset, Benchmark, Code

Covering English, Chinese, French, Hindi, Spanish, and Arabic so far.

👨🏻‍💻 Github • 📃 Paper • 🌐 Demo • 🤗 ApolloCorpus • 🤗 XMedBench
中文 | English

![Apollo](assets/apollo_medium_final.png)

## 🌈 Update

* **[2024.02.12]** ApolloCorpus and XMedBench are published! 🎉
* **[2024.01.23]** Apollo repo is published! 🎉

## Results

🤗 Apollo-0.5B • 🤗 Apollo-1.8B • 🤗 Apollo-2B • 🤗 Apollo-6B • 🤗 Apollo-7B

![Apollo](assets/result.png)

## Usage

- [Zip File](https://huggingface.co/datasets/FreedomIntelligence/Medbase_data/blob/main/Medbase_data-datasets.zip)
- [Data category](https://huggingface.co/datasets/FreedomIntelligence/Medbase_data/tree/main/train)
- Pretrain:
  - json_name: {data_source}_{language}_{data_type}.json
  - data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb (from online forums), medicalWiki, qa (QA pairs generated from text)
  - language: en (English), zh (Chinese), es (Spanish), fr (French), hi (Hindi)
  - data item:
    - data_type==text: list of strings
      ```
      [
        "string1",
        "string2",
        ...
      ]
      ```
    - data_type==qa: list of QA pairs (each a list of strings)
      ```
      [
        ["q1", "a1", "q2", "a2", ...],
        ...
      ]
      ```
- SFT:
  - json_name: {data_source}_{language}.json
  - data_type: code, general, math, medicalExam, medicalPatient
  - data item: list of QA pairs (each a list of strings)
    ```
    [
      ["q1", "a1", "q2", "a2", ...],
      ...
    ]
    ```

## **Data:** Huge, Diverse, Clean, Multilingual

| Data Type | Description | Source (ZH) | Source (EN) | Source (FR) | Source (ES) | Source (AR) | Source (HI) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Continue Pretrain** | | | | | | | |
| Medical Books | Medical-related books | MedQA-books | Pile-Books | - | - | - | - |
| Medical Guidelines | Clinical medicine guidelines | Chinese Medical Association | [Medtron guideline](https://huggingface.co/datasets/epfl-llm/guidelines) | - | - | - | - |
| Medical Wiki | Medical-related wiki articles | Wikipedia & Wikidoc | Wikipedia & Wikidoc | [CLEAR - Simple Corpus for Medical French](http://natalia.grabar.free.fr/resources.php#clear) | - | - | [Hindi_health](https://www.kaggle.com/datasets/aijain/hindi-health-dataset/data?select=Symptom+Gazetteer.txt) |
| Medical Paper | Medical-related papers | Paper abstracts | PubMed abstracts | [MORFITT](https://huggingface.co/datasets/qanastek/MORFITT?row=98) (PubMed-French), Cochrane: [CLEAR](http://natalia.grabar.free.fr/resources.php#clear) abstracts | [Mesinesp](https://zenodo.org/records/3826492) | - | - |
| Medical Web | Medical-related web data | Wudao | C4 | [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA) (train) | [CoWeSe](https://zenodo.org/records/5513237) | - | - |
| **SFT** | | | | | | | |
| Medical Exam | Medical-related exams | MedQA, CExam, CMB (train set) | MedQA, MedMCQA, PubMedQA (train set) | - | [Head_qa](https://huggingface.co/datasets/head_qa) (train) | - | - |
| Medical Patient | Doctor-patient dialogue data | [HuatuoGPT-I](https://huggingface.co/datasets/FreedomIntelligence/HuatuoGPT-sft-data-v1) | [PMC_patients](https://huggingface.co/datasets/zhengyun21/PMC-Patients?row=34) | - | - | [MAQA](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/Y2JBEZ) | - |
| General_Replay | General SFT data | Wizard & ShareGPT & Alpaca | Wizard & ShareGPT & Alpaca & [Dataset List](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2#sft-data-sources) | ShareGPT & Alpaca | ShareGPT & Alpaca | ShareGPT & Alpaca | ShareGPT & Alpaca |
| Code | Code data | [leetcode-11k](https://huggingface.co/datasets/krisfu/awesome-llm-datasets-only-Chinese) | [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca) | - | - | - | - |
| Math | Math data | - | [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) | - | - | - | - |

## Citation

```
@misc{wang2024apollo,
  title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
  author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
  year={2024},
  eprint={2403.03640},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

@misc{Apollo,
  title={Apollo, Multilingual Medicine: Model, Dataset, Benchmark, Code},
  author={Xidong Wang and Junyin Chen and Nuo Chen and Yidong Wang and Zhiyi Zhang and Benyou Wang},
  year={2024},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/FreedomIntelligence/Apollo}}
}
```
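As a quick illustration of the JSON layouts described in the Usage section above, here is a minimal Python sketch for reading the two data-item shapes (a flat list of strings for `text` files, and a list of alternating question/answer lists for `qa` and SFT files). The function names and file paths are illustrative, not part of the official Apollo codebase:

```python
import json

def load_pretrain_text(path):
    # data_type == text: the JSON file is a flat list of strings.
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def iter_qa_pairs(dialogues):
    # qa-style files (pretrain qa and all SFT files) hold a list of
    # dialogues; each dialogue is a flat list that alternates
    # question, answer, question, answer, ...
    for dialogue in dialogues:
        for q, a in zip(dialogue[::2], dialogue[1::2]):
            yield q, a

# In-memory sample mirroring the documented layout:
sample = [["q1", "a1", "q2", "a2"], ["q3", "a3"]]
pairs = list(iter_qa_pairs(sample))
# pairs == [("q1", "a1"), ("q2", "a2"), ("q3", "a3")]
```

For a real file such as `medicalExam_en.json`, replace the in-memory sample with `json.load(open(path, encoding="utf-8"))` and iterate the same way.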