# Food Order Understanding in Korean
This is a LoRA adapter obtained by fine-tuning the pre-trained model `meta-llama/Llama-2-13b-chat-hf`. It is intended to understand Korean food-ordering sentences and to extract menu names, option names, and quantities from them.
## Usage
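As a rough sketch (not code from this model card), a LoRA adapter for `meta-llama/Llama-2-13b-chat-hf` is typically loaded with the standard `peft`/`transformers` pattern shown below. The helper names are illustrative, and the adapter repository id is a placeholder you must replace with this model's actual id:

```python
def load_food_order_model(adapter_id: str,
                          base_id: str = "meta-llama/Llama-2-13b-chat-hf"):
    """Load the base Llama-2 chat model and attach the LoRA adapter via PEFT.

    adapter_id is a PLACEHOLDER argument -- pass this model's actual
    Hugging Face repository id.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.float16, device_map="auto"
    )
    # Wrap the base model with the LoRA weights from the adapter repo.
    model = PeftModel.from_pretrained(model, adapter_id)
    return tokenizer, model


def analyze_order(tokenizer, model, sentence: str,
                  max_new_tokens: int = 128) -> str:
    """Generate the model's analysis (menus, options, quantities) for one
    Korean food-ordering sentence."""
    inputs = tokenizer(sentence, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Note that the base model is gated, so you need access to `meta-llama/Llama-2-13b-chat-hf` on the Hub before loading.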
## Note
I have another fine-tuned language model, `jangmin/qlora-polyglot-ko-12.8b-food-order-understanding-32K`, which is based on `EleutherAI/polyglot-ko-12.8b`. Its training dataset was generated with `gpt-3.5-turbo-16k`. I believe a dataset generated by `GPT-4` would be of higher quality than one generated by `GPT-3.5`.