fadliaulawi committed on
Commit
c645964
1 Parent(s): 2314080

Update README.md

Files changed (1)
  1. README.md +62 -177
README.md CHANGED
@@ -3,200 +3,66 @@ library_name: peft
  base_model: meta-llama/Llama-2-7b-hf
  ---

- # Model Card for Model ID
- <!-- ...the remaining deleted lines are the default, unfilled Hugging Face model card template (Model Details, Uses, Bias/Risks/Limitations, How to Get Started, Training Details, Evaluation, Environmental Impact, Technical Specifications, Citation, Glossary, Model Card Authors and Contact), with every field left as "[More Information Needed]"... -->

+ # [Reproducing] Stanford Alpaca: An Instruction-following LLaMA Model
+ This is the repo for reproducing [Stanford Alpaca: An Instruction-following LLaMA Model](https://github.com/tatsu-lab/stanford_alpaca/blob/main/README.md). We fine-tune several large language models, including LLaMA 2, on a medical QA dataset. The repo contains:

+ - The [5K dataset](#dataset) of conversations between patients and physicians used for fine-tuning the models.
+ - The code for [data preparation](#data-preparation).
+ - The code for [fine-tuning the models](#fine-tuning).
+ - The links for [testing the models](#testing-the-model).

+ ## Dataset
+ We use the 5K dataset generated by [Chat Doctor](https://github.com/Kent0n-Li/ChatDoctor): conversations between patients and physicians, produced with ChatGPT (GenMedGPT-5k) and a disease database. The dataset was additionally curated and translated into Indonesian.

+ [`GenMedGPT-5k-id.json`](https://github.com/gilangcy/stanford-alpaca/blob/main/GenMedGPT-5k-id.json) contains the 5K instruction-following examples we used for fine-tuning the LLaMA model. This JSON file is a list of dictionaries; each dictionary contains the following fields:

+ - `instruction`: `str`, describes the task the model should perform. Each of the 5K instructions is unique.
+ - `input`: `str`, optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
+ - `output`: `str`, the answer to the instruction, as generated by ChatGPT.
 
+ If you're interested in fine-tuning with your own data, it's essential to adhere to the default prompt format the model used during training. The prompt for LLaMA 2 is structured like this:

+ ```
+ <s>[INST] <<SYS>>
+ {{ instruction }}
+ <</SYS>>

+ {{ input }} [/INST] {{ output }} </s>
+ ```
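
+ For illustration, a record from the JSON file can be rendered into this format with plain Python. This is a minimal sketch, not code from the repo, and the record values are invented:

+ ```
+ # Build the LLaMA 2 chat prompt from one instruction/input/output record.
+ # The values below are illustrative, not taken from GenMedGPT-5k-id.json.
+ record = {
+     "instruction": "Jawab pertanyaan medis berikut.",
+     "input": "Apa saja gejala umum demam berdarah?",
+     "output": "Gejala umum meliputi demam tinggi, sakit kepala, dan nyeri otot.",
+ }

+ prompt = (
+     f"<s>[INST] <<SYS>>\n{record['instruction']}\n<</SYS>>\n\n"
+     f"{record['input']} [/INST] {record['output']} </s>"
+ )
+ print(prompt)
+ ```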

+ Meanwhile, the prompt for PolyLM and InternLM (adapted to Indonesian) is structured like this. The Indonesian preamble translates roughly as: "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request."

+ ```
+ Di bawah ini adalah instruksi yang menjelaskan tugas, dipasangkan dengan masukan yang memberikan konteks lebih lanjut. Tulis tanggapan yang melengkapi permintaan dengan tepat.

+ Instruksi:
+ {instruction}

+ Masukan:
+ {input}

+ Tanggapan:
+ {output}
+ ```
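
+ Since these placeholders use single braces, the template can be filled with plain `str.format`; a small sketch, reusing the `record` dict from the earlier snippet:

+ ```
+ # The Indonesian template above, held as a Python string with {placeholders}.
+ template = (
+     "Di bawah ini adalah instruksi yang menjelaskan tugas, dipasangkan dengan "
+     "masukan yang memberikan konteks lebih lanjut. Tulis tanggapan yang "
+     "melengkapi permintaan dengan tepat.\n\n"
+     "Instruksi:\n{instruction}\n\nMasukan:\n{input}\n\nTanggapan:\n{output}"
+ )
+ prompt = template.format(**record)
+ ```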

+ ## Finetuning the Model
+ We fine-tune our models following the steps from Stanford Alpaca. The models we fine-tune are PolyLM-1.7B, LLaMA-2-7B, and InternLM-7B, with the following hyperparameters:

+ | Hyperparameter | PolyLM-1.7B | LLaMA-2-7B | InternLM-7B |
+ |----------------|-------------|------------|-------------|
+ | Batch size     | 128         | 128        | 128         |
+ | Learning rate  | 3e-4        | 3e-4       | 3e-4        |
+ | Epochs         | 3           | 3          | 3           |
+ | Max length     | 256         | 256        | 256         |
+ | Weight decay   | 0           | 0          | 0           |
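
+ For reference, these hyperparameters map onto Hugging Face `TrainingArguments` roughly as in the sketch below; the per-device/accumulation split (4 x 32 = 128) and the output path are our assumptions, not settings recorded in this repo:

+ ```
+ from transformers import TrainingArguments

+ training_args = TrainingArguments(
+     output_dir="out",                  # hypothetical output path
+     per_device_train_batch_size=4,     # assumption: 4 * 32 accumulation = 128 effective
+     gradient_accumulation_steps=32,
+     learning_rate=3e-4,
+     num_train_epochs=3,
+     weight_decay=0.0,
+ )
+ # The max length of 256 is applied at tokenization time, e.g.
+ # tokenizer(text, max_length=256, truncation=True)
+ ```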

+ To reproduce our fine-tuning runs for LLaMA, first install the requirements:

+ ```
+ pip install -r requirements.txt
+ ```
+ The code for fine-tuning is available at [`fine-tuning.ipynb`](https://github.com/gilangcy/stanford-alpaca/blob/main/fine-tuning.ipynb), with four sections: data pre-processing, fine-tuning with LLaMA 2, fine-tuning with PolyLM, and fine-tuning with InternLM.

  ## Training procedure

@@ -217,3 +83,22 @@ The following `bitsandbytes` quantization config was used during training:

  - PEFT 0.6.0.dev0

+ ## Testing the Model

+ These are the links to test the fine-tuned models:

+ 1. [PolyLM-1.7B](https://huggingface.co/spaces/dennyaw/polylm1.7b)
+ 2. [LLaMA-2-7B](https://huggingface.co/spaces/dennyaw/Llama-2-7b-finetuned)
+ 3. [InternLM-7B](https://huggingface.co/spaces/dennyaw/internlm-7b-finetuned)
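
+ For local testing instead of the hosted demos, the quantized base model plus a PEFT adapter can be loaded roughly as in this sketch; the 4-bit settings and the adapter path are illustrative assumptions, not the config recorded in this repo:

+ ```
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ from peft import PeftModel

+ # Assumed 4-bit setup; the actual bitsandbytes config is elided from this diff.
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_compute_dtype=torch.float16,
+ )
+ base = AutoModelForCausalLM.from_pretrained(
+     "meta-llama/Llama-2-7b-hf",
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
+ model = PeftModel.from_pretrained(base, "path/to/adapter")  # hypothetical adapter path

+ text = "[INST] <<SYS>>\nJawab pertanyaan medis berikut.\n<</SYS>>\n\nApa itu demam berdarah? [/INST]"
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
+ ```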

+ ### Authors

+ All interns below contributed equally; the order was determined by random draw.

+ - [Denny Andriana Wahyu](https://www.linkedin.com/in/denny-aw/)
+ - [Fadli Aulawi Al Ghiffari](https://www.linkedin.com/in/fadli-aulawi-al-ghiffari-9b4990148/)
+ - [Gilang Catur Yudishtira](https://www.linkedin.com/in/gilangcy/)

+ All were advised by [Firqa Aqilla Noor Arasyi](https://www.linkedin.com/in/firqaana/).