---
license: llama3
language:
- ja
- en
base_model: rinna/llama-3-youko-70b-instruct
pipeline_tag: text-generation
---

**[2.2bpw](https://huggingface.co/rioshiina/llama-3-youko-70b-instruct-exl2/tree/2.2bpw)** (significant quality loss; intended only for testing on 24 GB VRAM, see the estimate below)
**[4.0bpw](https://huggingface.co/rioshiina/llama-3-youko-70b-instruct-exl2/tree/4.0bpw)**
**[6.0bpw](https://huggingface.co/rioshiina/llama-3-youko-70b-instruct-exl2/tree/6.0bpw)**
**[8.0bpw](https://huggingface.co/rioshiina/llama-3-youko-70b-instruct-exl2/tree/8.0bpw)**

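As a rough estimate, the 2.2 bpw variant is the only one that fits on a single 24 GB GPU: at 2.2 bits per weight, a ~70B-parameter model needs about 70 × 10⁹ × 2.2 / 8 ≈ 19 GB for the weights alone, leaving little headroom for the KV cache, while the 4.0 bpw weights already take roughly 35 GB.
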
# llama-3-youko-70b-instruct-exl2
- Model creator: [rinna](https://huggingface.co/rinna)
- Original model: [llama-3-youko-70b-instruct](https://huggingface.co/rinna/llama-3-youko-70b-instruct)

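Each quantization lives on its own branch of this repository. Below is a minimal download sketch using `huggingface_hub` (the `local_dir` path is only an example):

```python
from huggingface_hub import snapshot_download

# Fetch one bpw variant; each variant is stored on a separate branch of this repo.
snapshot_download(
    repo_id="rioshiina/llama-3-youko-70b-instruct-exl2",
    revision="4.0bpw",  # or "2.2bpw", "6.0bpw", "8.0bpw"
    local_dir="llama-3-youko-70b-instruct-exl2-4.0bpw",  # example destination path
)
```
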
## Prompt template

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

あなたは誠実で優秀なアシスタントです。どうか、簡潔かつ正直に答えてください。<|eot_id|><|start_header_id|>user<|end_header_id|>

西田幾多郎とはどんな人物ですか?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```

In English, the example system prompt reads "You are a sincere and excellent assistant. Please answer concisely and honestly," and the example user turn asks "What kind of person is Nishida Kitarō?"

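Below is a minimal generation sketch that feeds this template to the quantized model, assuming the ExLlamaV2 Python API (roughly v0.1 and later); the model directory and generation settings are illustrative, not part of this repository:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Point at a downloaded bpw branch (see the snapshot_download sketch above).
model_dir = "llama-3-youko-70b-instruct-exl2-4.0bpw"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache while auto-splitting across GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# The prompt template above, written out as a literal string.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "あなたは誠実で優秀なアシスタントです。どうか、簡潔かつ正直に答えてください。<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "西田幾多郎とはどんな人物ですか?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

output = generator.generate(
    prompt=prompt,
    max_new_tokens=256,
    add_bos=False,               # <|begin_of_text|> is already part of the prompt
    encode_special_tokens=True,  # tokenize the <|...|> markers as special tokens
)
print(output)
```
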
# Cite
```bibtex
@misc{rinna-llama-3-youko-70b-instruct,
    title = {rinna/llama-3-youko-70b-instruct},
    author = {Mitsuda, Koh and Chen, Xinqi and Wakatsuki, Toshiaki and Sawada, Kei},
    url = {https://huggingface.co/rinna/llama-3-youko-70b-instruct}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
---

# References
```bibtex
@article{llama3modelcard,
    title = {Llama 3 Model Card},
    author = {AI@Meta},
    year = {2024},
    url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

@article{huang2023chat,
    title = {Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages},
    author = {Huang, Shih-Cheng and Li, Pin-Zu and Hsu, Yu-Chi and Chen, Kuang-Ming and Lin, Yu Tung and Hsiao, Shih-Kai and Tzong-Han Tsai, Richard and Lee, Hung-yi},
    year = {2023},
    url = {https://arxiv.org/abs/2310.04799}
}
```
---

# License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)