
Democratizing Medical LLMs for Many More Languages

Covering 12 major languages (English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, and Portuguese) and 38 minor languages so far.

📃 Paper • 🌐 Demo • 🤗 ApolloMoEDataset • 🤗 ApolloMoEBench • 🤗 Models • 🌐 Apollo • 🌐 ApolloMoE


🌈 Update

  • [2024.10.15] ApolloMoE repo is published! 🎉

Languages Coverage

12 Major Languages and 38 Minor Languages

[Figure: languages coverage]

Architecture

[Figure: MoE routing]

Results

Dense

🤗 Apollo2-0.5B • 🤗 Apollo2-1.5B • 🤗 Apollo2-2B

🤗 Apollo2-3.8B • 🤗 Apollo2-7B • 🤗 Apollo2-9B

[Figure: Dense models results]

Post-MoE

🤗 Apollo-MoE-0.5B • 🤗 Apollo-MoE-1.5B • 🤗 Apollo-MoE-7B

[Figure: Post-MoE models results]

Usage Format

Apollo2
  • 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
  • 2B, 9B: User:{query}\nAssistant:{response}<eos>
  • 3.8B: <|user|>\n{query}<|end|><|assistant|>\n{response}<|end|>
Apollo-MoE
  • 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
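
A minimal sketch of serializing a query in the User:/Assistant: format shared by the Apollo2 0.5B/1.5B/7B and Apollo-MoE models (the query text and helper name are illustrative):

    # Minimal sketch: build a prompt in the User:/Assistant: chat format.
    # The model generates the response after "Assistant:" and stops at its
    # end-of-text token (<|endoftext|> for these variants).
    def build_prompt(query: str) -> str:
        return f"User:{query}\nAssistant:"

    print(build_prompt("What are common symptoms of anemia?"))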

Dataset & Evaluation

  • Dataset: 🤗 ApolloMoEDataset


  • Evaluation: 🤗 ApolloMoEBench (a loading sketch follows this list)

    • EN:

      • MedQA-USMLE
      • MedMCQA
      • PubMedQA: not used in the paper because its results fluctuated too much.
      • MMLU-Medical
        • Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
    • ZH:

      • MedQA-MCMLE
      • CMB-single: Not used in the paper
        • Randomly sampled 2,000 single-answer multiple-choice questions.
      • CMMLU-Medical
        • Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
      • CExam: Not used in the paper
        • Randomly sampled 2,000 multiple-choice questions.
    • ES: Head_qa

    • FR:

      • Frenchmedmcqa
      • MMLU_FR
        • Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
    • HI: MMLU_HI

      • Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
    • AR: MMLU_AR

      • Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
    • JA: IgakuQA

    • KO: KorMedMCQA

    • IT:

      • MedExpQA
      • MMLU_IT
        • Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
    • DE: BioInstructQA (German part)

    • PT: BioInstructQA (Portuguese part)

    • RU: RuMedBench
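
To run these benchmarks, here is a hedged sketch of loading ApolloMoEBench from the Hub (the configuration name "en" is an assumption; check the dataset card for the actual subset names and fields):

    from datasets import load_dataset

    # Hedged sketch: load one language subset of ApolloMoEBench.
    # The config name "en" is an assumption, not a confirmed subset name.
    bench = load_dataset("FreedomIntelligence/ApolloMoEBench", "en")
    print(bench)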

Model Download and Inference

We take Apollo-MoE-0.5B as an example.

  1. Log in to Hugging Face

    huggingface-cli login --token $HUGGINGFACE_TOKEN
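
    • Alternatively, a sketch of logging in from Python (assuming the token is stored in the HUGGINGFACE_TOKEN environment variable, as in the command above):

    import os
    from huggingface_hub import login

    # Programmatic equivalent of `huggingface-cli login`.
    login(token=os.environ["HUGGINGFACE_TOKEN"])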
    
  2. Download the model to a local directory

    from huggingface_hub import snapshot_download
    import os
    
    # Download the model snapshot into a local directory.
    local_model_dir = os.path.join('/path/to/models/dir', 'Apollo-MoE-0.5B')
    snapshot_download(repo_id="FreedomIntelligence/Apollo-MoE-0.5B", local_dir=local_model_dir)
    
  3. Inference Example

    from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
    import os
    
    local_model_dir = os.path.join('/path/to/models/dir', 'Apollo-MoE-0.5B')
    
    # Load the model and tokenizer from the local snapshot.
    model = AutoModelForCausalLM.from_pretrained(local_model_dir, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(local_model_dir, trust_remote_code=True)
    
    # Greedy decoding with a short generation budget for the few-shot prompt below.
    generation_config = GenerationConfig.from_pretrained(
        local_model_dir,
        pad_token_id=tokenizer.pad_token_id,
        num_return_sequences=1,
        max_new_tokens=7,
        min_new_tokens=2,
        do_sample=False,
        temperature=1.0,
        top_k=50,
        top_p=1.0,
    )
    
    inputs = tokenizer('Answer directly.\nThe capital of Mongolia is Ulaanbaatar.\nThe capital of Iceland is Reykjavik.\nThe capital of Australia is', return_tensors='pt')
    inputs = inputs.to(model.device)
    pred = model.generate(**inputs, generation_config=generation_config)
    print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
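
    • The prompt above is a raw few-shot completion; for instruction-style queries, wrap the text in the User:/Assistant: format from the Usage Format section. A minimal sketch reusing the model and tokenizer loaded above (the query text is illustrative):

    # Sketch: instruction-style query in the documented chat format.
    query = "What are the first-line treatments for iron-deficiency anemia?"
    prompt = f"User:{query}\nAssistant:"
    
    inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
    pred = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(pred[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True))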
    

Results reproduction


We take Apollo2-7B or Apollo-MoE-0.5B as examples.

  1. Download the dataset for the project:

    bash 0.download_data.sh  
    
  2. Prepare test and dev data for the specific model:

    • Create test data with the model-specific special tokens
    bash "1.data_process_test&dev.sh"
    
  3. Prepare training data for the specific model (create tokenized data in advance):

    • You can adjust the training data order and the number of training epochs in this step
    bash 2.data_process_train.sh
    
  4. Train the model

    • For multi-node training, refer to ./src/sft/training_config/zero_multi.yaml
    bash 3.single_node_train.sh
    
  5. Evaluate your model: generate scores on the benchmarks

    bash 4.eval.sh
    

Citation

Please use the following citation if you intend to use our dataset for training or evaluation:

@misc{zheng2024efficientlydemocratizingmedicalllms,
      title={Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts}, 
      author={Guorui Zheng and Xidong Wang and Juhao Liang and Nuo Chen and Yuping Zheng and Benyou Wang},
      year={2024},
      eprint={2410.10626},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.10626}, 
}
Model Information

  • Base model: google/gemma-2-9b (fine-tuned)
  • Model size: 10.2B params
  • Tensor type: BF16