keisawada committed
Commit ab6e032
Parent: ace4ccb

Update README.md

Files changed (1):
  1. README.md +13 -4
README.md CHANGED
@@ -32,7 +32,7 @@ inference: false
 
 Refer to the [original model](https://huggingface.co/rinna/youri-7b) for pre-training details.
 
-* **Authors**
+* **Contributors**
 
 - [Toshiaki Wakatsuki](https://huggingface.co/t-w)
 - [Tianyu Zhao](https://huggingface.co/tianyuz)
@@ -85,10 +85,19 @@ The model uses the original llama-2 tokenizer.
 
 # How to cite
 ~~~
-@misc{RinnaYouri7bGPTQ,
-    url={https://huggingface.co/rinna/youri-7b-gptq},
-    title={rinna/youri-7b-gptq},
+@misc{rinna-youri-7b-gptq,
+    title = {rinna/youri-7b-gptq},
     author={Wakatsuki, Toshiaki and Zhao, Tianyu and Sawada, Kei}
+    url = {https://huggingface.co/rinna/youri-7b-gptq},
+}
+
+@inproceedings{sawada2024release,
+    title = {Release of Pre-Trained Models for the {J}apanese Language},
+    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
+    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
+    month = {5},
+    year = {2024},
+    url = {https://arxiv.org/abs/2404.01657},
+}
 }
 ~~~
 ---
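For context, the card this commit edits describes a GPTQ-quantized checkpoint, and the second hunk's context line notes that the model uses the original llama-2 tokenizer. Below is a minimal loading sketch, not part of this commit, assuming `transformers` with the `optimum` and `auto-gptq` backends installed; the prompt string is an arbitrary example.

~~~python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch: load the GPTQ checkpoint through transformers'
# GPTQ integration (requires the optimum and auto-gptq packages).
tokenizer = AutoTokenizer.from_pretrained("rinna/youri-7b-gptq")  # llama-2 tokenizer, per the card
model = AutoModelForCausalLM.from_pretrained(
    "rinna/youri-7b-gptq",
    device_map="auto",  # dispatch the quantized weights to available GPU(s)
)

prompt = "西田幾多郎は、"  # arbitrary Japanese prompt for illustration
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
~~~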