Update README.md

Also, the adapter was trained on top of the foundation model [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [Jangmin Oh](https://huggingface.co/jangmin)
- **Model type:** llama2
- **Language(s) (NLP):** ko
- **License:** apache-2.0
- **Finetuned from model [optional]:** [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
## Uses

Step 1. Load the model and the tokenizer.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

cache_dir = None  # optionally set to a local directory used for caching downloads

merged_model_hub_id = 'jangmin/merged-llama2-7b-chat-hf-food-order-understanding-30K'
tokenizer = AutoTokenizer.from_pretrained(merged_model_hub_id)
model = AutoModelForCausalLM.from_pretrained(merged_model_hub_id, device_map="auto", torch_dtype=torch.float16, cache_dir=cache_dir)
```

Step 2. Prepare the auxiliary tools.

```python
from transformers import pipeline
import pandas as pd

# Instruction prompt (Korean): "Analyze the following order sentence and
# extract the food names, option names, and quantities."
instruction_prompt_template = """### 다음 주문 문장을 분석하여 음식명, 옵션명, 수량을 추출해줘.

### 명령: {0} ### 응답:
"""

def generate_helper(pipe, query):
    prompt = instruction_prompt_template.format(query)
    out = pipe(prompt, max_new_tokens=256, do_sample=False, eos_token_id=tokenizer.eos_token_id)
    # Strip the prompt so that only the model's response remains.
    generated_text = out[0]["generated_text"][len(prompt):]
    return generated_text

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# evaluation_queries is assumed to be a list of order sentences to evaluate.
stat_dic = pd.DataFrame.from_records([generate_helper(pipe, query) for query in evaluation_queries])
```

Step 3. Let's rock & roll.

```python
# "One tall iced americano, one strawberry smoothie, and also one cold brew latte, please."
print(generate_helper(pipe, "아이스아메리카노 톨사이즈 한잔 하고요. 딸기스무디 한잔 주세요. 또, 콜드브루라떼 하나요."))
```
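
The response comes back as plain text. If you need structured records, you can post-process it; the sketch below is purely hypothetical, since the regular expression assumes an output format that should be verified against the model's actual generations.

```python
import re

# Hypothetical post-processing sketch. The pattern assumes response lines such as
# "음식명: 아이스아메리카노, 옵션: 톨사이즈, 수량: 한잔"
# ("food name: iced americano, option: tall size, quantity: one cup").
# Verify against real generations before relying on it.
line_pattern = re.compile(
    r"음식명:\s*(?P<food>[^,]+),\s*옵션:\s*(?P<option>[^,]*),\s*수량:\s*(?P<quantity>.+)"
)

def parse_response(text):
    """Extract food/option/quantity records from a generated response."""
    return [m.groupdict() for m in (line_pattern.search(line) for line in text.splitlines()) if m]
```
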
## Bias, Risks, and Limitations

Please refer to [jangmin/qlora-llama2-7b-chat-hf-food-order-understanding-30K](https://huggingface.co/jangmin/qlora-llama2-7b-chat-hf-food-order-understanding-30K) for information about bias, risks, and limitations.

## Training Details
### Training Procedure

Please refer to [jangmin/qlora-llama2-7b-chat-hf-food-order-understanding-30K](https://huggingface.co/jangmin/qlora-llama2-7b-chat-hf-food-order-understanding-30K), where the fine-tuning strategy is described. For orientation, a generic sketch of the approach follows.

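The sketch below shows what a QLoRA-style setup generally looks like; every hyperparameter in it is an illustrative assumption, not a value taken from the actual training run, so consult the linked card for the real configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the base model to 4-bit (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters; rank, alpha, and target modules are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base_model, lora_config)
```
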
### Merging Procedure

To merge the adapter into the pretrained model, I wrote the following code.

Step 1. Initialize.

```python
import torch
import transformers
from typing import Dict
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, AutoConfig, pipeline
from peft import PeftModel, PeftConfig, AutoPeftModelForCausalLM

cache_dir = None  # optionally set to a local directory used for caching downloads

peft_model_id = "jangmin/qlora-llama2-7b-chat-hf-food-order-understanding-30K"
config = PeftConfig.from_pretrained(peft_model_id)

IGNORE_INDEX = -100
DEFAULT_PAD_TOKEN = "[PAD]"
```

Step 2. Load the fine-tuned model and the tokenizer.

```python
device_map = "cpu"

# Load the QLoRA adapter together with its base model weights.
trained_model = AutoPeftModelForCausalLM.from_pretrained(
    peft_model_id,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map=device_map,
    cache_dir=cache_dir,
)

# Load the tokenizer of the base model.
tokenizer = AutoTokenizer.from_pretrained(
    config.base_model_name_or_path,
    padding_side='right',
    tokenizer_type="llama",
    trust_remote_code=True,
    cache_dir=cache_dir,
)
```

Step 3. Modify the model and the tokenizer to handle the `PAD` token (the Llama tokenizer needs the pad token added to its vocabulary).

```python
def smart_tokenizer_and_embedding_resize(
    special_tokens_dict: Dict,
    tokenizer: transformers.PreTrainedTokenizer,
    model: transformers.PreTrainedModel,
):
    """Resize tokenizer and embedding.

    Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
    """
    num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)
    model.resize_token_embeddings(len(tokenizer))

    if num_new_tokens > 0:
        input_embeddings_data = model.get_input_embeddings().weight.data

        # Initialize the new embedding rows with the mean of the existing ones.
        input_embeddings_avg = input_embeddings_data[:-num_new_tokens].mean(
            dim=0, keepdim=True
        )

        input_embeddings_data[-num_new_tokens:] = input_embeddings_avg

# Add a pad token only when the tokenizer does not define one already.
if tokenizer.pad_token is None:
    smart_tokenizer_and_embedding_resize(
        special_tokens_dict=dict(pad_token=DEFAULT_PAD_TOKEN),
        tokenizer=tokenizer,
        model=trained_model,
    )
    trained_model.config.pad_token_id = tokenizer.pad_token_id
```

Step 4. Merge and push to the Hub.

```python
# Merge the LoRA weights into the base model and drop the adapter modules.
merged_model = trained_model.merge_and_unload()

hub_id = "jangmin/merged-llama2-7b-chat-hf-food-order-understanding-30K"

merged_model.push_to_hub(hub_id, max_shard_size="4GB", safe_serialization=True, commit_message='recommit after pad_token was treated.')
```
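
Because the embedding matrix was resized in Step 3, it may be worth pushing the matching tokenizer and sanity-checking the merged checkpoint as well; the sketch below is a suggested addition, not part of the original procedure.

```python
from transformers import AutoModelForCausalLM

# Push the tokenizer that now contains the added [PAD] token, so the vocabulary
# on the Hub stays consistent with the resized embedding matrix.
tokenizer.push_to_hub(hub_id)

# Optional sanity check: reload the merged model and confirm the embedding
# size matches the tokenizer length.
reloaded = AutoModelForCausalLM.from_pretrained(hub_id, torch_dtype=torch.float16)
assert reloaded.get_input_embeddings().weight.shape[0] == len(tokenizer)
```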