Update README.md
README.md
A bilingual instruction-tuned LoRA model of https://huggingface.co/meta-llama/Llama-2-13b-hf

- Instruction-following datasets used: alpaca, alpaca-zh, open assistant
- Training framework: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)

Usage:
```python
# ... (earlier setup lines not shown in this excerpt)
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
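The excerpt above shows only the final two lines of the usage snippet. A fuller end-to-end sketch of how they might be wired up with `transformers` follows. Everything here other than the `generate(...)` call and the model ID `hiyouga/Llama-2-Chinese-13b-chat` is an assumption: in particular, `build_prompt` uses the standard Llama-2 chat format, which is not necessarily the template this LoRA was actually tuned with.

```python
# Hypothetical usage sketch -- the prompt format and loading options below are
# assumptions, not taken from this README.
DEFAULT_SYSTEM = "You are a helpful assistant. 你是一个乐于助人的助手。"


def build_prompt(query: str, system: str = DEFAULT_SYSTEM) -> str:
    """Assemble a single-turn prompt in the standard Llama-2 chat format.

    Note: this may differ from what LLaMA-Factory's ``--template default``
    actually produces for this model.
    """
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{query} [/INST]"


def generate_reply(query: str, max_new_tokens: int = 256) -> str:
    """Load the model and stream a reply (needs a CUDA GPU; 13B fp16 weights are ~26 GB)."""
    # Imports kept local so build_prompt() works without transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

    model_id = "hiyouga/Llama-2-Chinese-13b-chat"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    streamer = TextStreamer(tokenizer, skip_prompt=True)

    inputs = tokenizer(build_prompt(query), return_tensors="pt").to(model.device)
    generate_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, streamer=streamer)
    return tokenizer.decode(generate_ids[0], skip_special_tokens=True)
```

`generate_reply` is only defined, not called, so the prompt helper can be inspected without downloading the weights.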

You could also launch a CLI demo by using the script in [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory):

```bash
python src/cli_demo.py --template default --model_name_or_path hiyouga/Llama-2-Chinese-13b-chat
```

---

The model was trained using the web UI of [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).

![ui](ui.jpg)