instruction-pretrain committed on
Commit 12ae0fe
1 Parent(s): 9c84e5e

Update README.md

Files changed (1)
  1. README.md +29 -1
README.md CHANGED
@@ -32,12 +32,14 @@ We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***
 - Domain-Specific Models Pre-Trained from Llama3-8B:
   - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
   - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
+ - General Instruction-Augmented Corpora: [general-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/general-instruction-augmented-corpora)
+ - Domain-Specific Instruction-Augmented Corpora (no finance data, to avoid ethical issues): [medicine-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/medicine-instruction-augmented-corpora)
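A minimal sketch of loading one of the newly linked corpora with the `datasets` library; the split name, streaming flag, and record schema below are assumptions, so check the dataset card before relying on them:

```python
from datasets import load_dataset

# Stream the general instruction-augmented corpora from the Hub.
# Assumption: a "train" split is exposed directly; depending on how the corpus
# is sharded you may instead need to pass data_files=... (see the dataset card).
ds = load_dataset(
    "instruction-pretrain/general-instruction-augmented-corpora",
    split="train",
    streaming=True,
)

# Peek at a single augmented record to inspect its fields.
print(next(iter(ds)))
```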

 ## Domain-Adaptive Continued Pre-Training
 Following [AdaptLLM](https://huggingface.co/AdaptLLM/medicine-chat), we augment the domain-specific raw corpora with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer).
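As a rough illustration of the synthesizer itself, here is a minimal `transformers` sketch. The model id is the one linked above, but the input handling is a placeholder: the synthesizer expects a specific context/few-shot template that is documented on its model card and not reproduced here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Context-based instruction synthesizer released with this project.
synth_id = "instruction-pretrain/instruction-synthesizer"
model = AutoModelForCausalLM.from_pretrained(synth_id)
tokenizer = AutoTokenizer.from_pretrained(synth_id)

# Placeholder raw-corpus passage; the real template that wraps the context
# before generation is defined on the synthesizer's model card.
context = "Aspirin irreversibly inhibits cyclooxygenase, reducing prostaglandin synthesis."
input_ids = tokenizer(context, return_tensors="pt").input_ids

# Generate candidate instruction-response text conditioned on the context.
output = model.generate(input_ids, max_new_tokens=200)[0]
print(tokenizer.decode(output[input_ids.shape[-1]:], skip_special_tokens=True))
```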

- For example, to chat with the biomedicine-Llama3-8B model:
+ ### 1. To chat with the biomedicine-Llama3-8B model:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer

@@ -63,6 +65,32 @@ pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
 print(pred)
 ```
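The hunk above only shows the beginning and end of the chat snippet; the loading and generation steps fall outside the diff context. A minimal sketch of what the full flow might look like, assuming standard `transformers` usage and a made-up prompt (not the one in the README):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "instruction-pretrain/medicine-Llama3-8B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical prompt; the actual example prompt is not visible in this hunk.
prompt = "Question: Which organ is primarily affected in hepatitis?\nAnswer:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)[0]

# Decode only the newly generated tokens, matching the visible context lines.
answer_start = int(input_ids.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(pred)
```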

+ ### 2. To evaluate our models on the domain-specific tasks
+ 1. Setup dependencies
+ ```bash
+ git clone https://github.com/microsoft/LMOps
+ cd LMOps/adaptllm
+ pip install -r requirements.txt
+ ```
+
+ 2. Evaluate
+ ```bash
+ DOMAIN='biomedicine'
+
+ # If the model fits on a single GPU, set MODEL_PARALLEL=False;
+ # if it is too large for a single GPU, set MODEL_PARALLEL=True.
+ MODEL_PARALLEL=False
+
+ # Number of GPUs, chosen from [1, 2, 4, 8]
+ N_GPU=1
+
+ # Keep add_bos_token set to True for these models
+ add_bos_token=True
+
+ bash scripts/inference.sh ${DOMAIN} 'instruction-pretrain/medicine-Llama3-8B' ${add_bos_token} ${MODEL_PARALLEL} ${N_GPU}
+ ```
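For the finance model, an analogous invocation presumably works; the command below is an assumption (in particular, 'finance' as the DOMAIN value follows AdaptLLM's task naming and is not shown in this commit):

```bash
# Assumed finance-domain invocation; not part of this commit.
DOMAIN='finance'
MODEL_PARALLEL=False
N_GPU=1
add_bos_token=True

bash scripts/inference.sh ${DOMAIN} 'instruction-pretrain/finance-Llama3-8B' ${add_bos_token} ${MODEL_PARALLEL} ${N_GPU}
```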
+
+
 ## Citation
 If you find our work helpful, please cite us: