This model was trained on the llm-japanese-dataset. Only a subset of the dataset was used: 50,000 chat samples and 280,000 non-chat samples.
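For illustration only, here is a minimal sketch of drawing such a subset with the Hugging Face `datasets` library. The repo id `izumi-lab/llm-japanese-dataset` and the `is_chat` column are assumptions; the card does not document how the chat and non-chat samples were selected.

```python
from datasets import load_dataset, concatenate_datasets

# Sketch only: the repo id and the `is_chat` column are hypothetical.
ds = load_dataset("izumi-lab/llm-japanese-dataset", split="train")

chat = ds.filter(lambda ex: ex["is_chat"]).shuffle(seed=42).select(range(50_000))
non_chat = ds.filter(lambda ex: not ex["is_chat"]).shuffle(seed=42).select(range(280_000))

# Combine and reshuffle into one training set of 330,000 samples
train_data = concatenate_datasets([chat, non_chat]).shuffle(seed=42)
print(len(train_data))
```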
The fine-tuned model shows improved performance in Chinese and Japanese.
QLoRA was used to fine-tune the vanilla Llama-2-13b-chat-hf model, as sketched below.
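The training script is not included in this card; the following is a minimal sketch of a QLoRA setup using `transformers`, `peft`, and `bitsandbytes`. The LoRA hyperparameters (rank, alpha, dropout, target modules) are illustrative assumptions, not the values used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Llama-2-13b-chat-hf"

# 4-bit NF4 quantization of the frozen base model, the core of QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Trainable LoRA adapters on the attention projections; values are illustrative
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```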
You can use test.py to test the model.
Recommended generation parameters (applied in the sketch after this list):
- temperature: 0.5~0.7
- top p: 0.65~1.0
- top k: 30~50
- repeat penalty: 1.03~1.17
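As an illustration, the sketch below passes mid-range values from the list above to `model.generate()` in `transformers`. The model path and prompt are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/this-model"  # placeholder: use the actual repo id or local path
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "日本の首都はどこですか？"  # "What is the capital of Japan?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,         # recommended range: 0.5~0.7
    top_p=0.8,               # recommended range: 0.65~1.0
    top_k=40,                # recommended range: 30~50
    repetition_penalty=1.1,  # recommended range: 1.03~1.17
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```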
Contributed by the Yokohama National University Mori Lab.