Add tokenizer.json

#3
by Narsil (HF staff) - opened
Files changed (1)
  1. README.md +2 -20
README.md CHANGED
````diff
@@ -2,6 +2,8 @@
 language: ja
 thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
 tags:
+- ja
+- japanese
 - gpt-neox
 - text-generation
 - lm
@@ -60,25 +62,5 @@ Here are a few samples generated with and without the toy prefix weights, respectively.
 # Inference with FasterTransformer
 After version 5.1, [NVIDIA FasterTransformer](https://github.com/NVIDIA/FasterTransformer) now supports both inference for GPT-NeoX and a variety of soft prompts (including prefix-tuning). The released pretrained model and prefix weights in this repo have been verified to work with FasterTransformer 5.1.
 
-# How to cite
-```bibtex
-@misc{rinna-japanese-gpt-neox-small,
-    title = {rinna/japanese-gpt-neox-small},
-    author = {Zhao, Tianyu and Sawada, Kei},
-    url = {https://huggingface.co/rinna/japanese-gpt-neox-small}
-}
-
-@inproceedings{sawada2024release,
-    title = {Release of Pre-Trained Models for the {J}apanese Language},
-    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
-    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
-    month = {5},
-    year = {2024},
-    pages = {13898--13905},
-    url = {https://aclanthology.org/2024.lrec-main.1213},
-    note = {\url{https://arxiv.org/abs/2404.01657}}
-}
-```
-
 # Licenese
 [The MIT license](https://opensource.org/licenses/MIT)
````
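
With tokenizer.json present in the repo, the tokenizer can be resolved through the Rust-backed "fast" path rather than the slow SentencePiece implementation. A minimal sketch of checking this, assuming the standard transformers AutoTokenizer API and this repo id (not part of the diff above):

```python
from transformers import AutoTokenizer

# With tokenizer.json in the repository, AutoTokenizer can load the
# fast (Rust-backed) tokenizer when use_fast=True is requested.
tokenizer = AutoTokenizer.from_pretrained(
    "rinna/japanese-gpt-neox-small", use_fast=True
)

print(tokenizer.is_fast)  # expected to be True once tokenizer.json is available
print(tokenizer("こんにちは、世界")["input_ids"])  # token ids from the loaded tokenizer
```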