MonteXiaofeng committed on
Commit
535954a
1 Parent(s): d0bff7c

Upload README.md

Files changed (1)
  1. README.md +7 -5
README.md CHANGED
@@ -4,21 +4,23 @@ license: apache-2.0
 
 ## Introduction
 
-Aquila is a large language model trained by BAAI, and AquilaMed-RL is an industry model from Aquila language model. Based on the Aquila general pre-trained model, we continued pre-training , SFT and RL in the medical domain and obtained our AquilaMed-RL model.
+Aquila is a large language model independently developed by BAAI. Building upon the Aquila model, we continued pre-training, SFT (Supervised Fine-Tuning), and RL (Reinforcement Learning) through a multi-stage training process, ultimately resulting in the AquilaMed-RL model. This model possesses professional capabilities in the medical field and demonstrates a significant win rate when evaluated against annotated data using the GPT-4 model. The AquilaMed-RL model can perform medical triage, medication inquiries, and general Q&A. We will open-source the SFT data and RL data required for training the model. Additionally, we will release a technical report detailing our methods in developing the model for the medical field, thereby promoting the development of the open-source community.
 
 ## Model Details
 
-The pipeline of the training procedure is bellow, for more details you can read our technical report: https://github.com/FlagAI-Open/industry-application/blob/main/Aquila_med_tech-report.pdf
+The training process of the model is described as follows. For more information, please refer to our technical report: https://github.com/FlagAI-Open/industry-application/blob/main/Aquila_med_tech-report.pdf
 
 ![pipeline](./img/pipeline.png)
 
 ## Evaluation
 
+Using GPT-4 for evaluation, the win rates of our model compared to the reference answers in the annotated validation dataset are as follows.
+
 ![pipeline](./img/eval-result.jpeg)
 
 ## usage
 
-when you have downloaded the model, you can use the bellow code to run the model
+Once you have downloaded the model locally, you can use the following code for inference.
 
 ```python
@@ -82,8 +84,8 @@ predict: 肚子疼可能是多种原因引起的,例如消化不良、胃炎
 If you find our work helpful, feel free to give us a cite.
 
 ```
-@article{AquilaMed,
-title={AquilaMed Technical Report},
+@article{Aqulia-Med-LLM,
+title={Aqulia-Med LLM: Pioneering Full-Process Open-Source Medical Language Models},
 year={2024}
 }
 ```
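
The usage section's Python block is truncated by the diff hunk above, so the exact inference code is not visible here. As a minimal sketch of what such a snippet typically looks like with Hugging Face `transformers`: note that the repo id `BAAI/AquilaMed-RL` and the Human/Assistant prompt template are assumptions, not confirmed by this diff; consult the full README for the actual code.

```python
# Minimal inference sketch for a downloaded AquilaMed-RL checkpoint.
# ASSUMPTIONS: the model id "BAAI/AquilaMed-RL" and the plain
# Human/Assistant prompt template below are illustrative only;
# the original README's truncated code block is authoritative.


def build_prompt(question: str) -> str:
    """Wrap a user question in a simple chat template (assumed format)."""
    return f"Human: {question}\nAssistant: "


def generate_answer(question: str, model_id: str = "BAAI/AquilaMed-RL") -> str:
    """Load the model with transformers and generate a reply."""
    # Imported here so the prompt helper stays usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# Prompt construction alone requires no model download:
print(build_prompt("What can I do about a stomachache?"))
```

Calling `generate_answer(...)` downloads the checkpoint on first use, so it is best run on a machine with a GPU and sufficient disk space.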