# This model was fine-tuned through the joint efforts of Sparticle Inc. and A. I. Hakusan Inc.
## This model is a Llama2-7b model fine-tuned on a Japanese dataset with LoRA.
The training set consists of roughly 5% of the llm-japanese-dataset by izumi-lab, chosen at random.
For inference, please follow the instructions at https://github.com/tloen/alpaca-lora/.
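
Below is a minimal inference sketch using `peft` and `transformers`, as an alternative to the alpaca-lora instructions. The base-model and adapter identifiers are placeholders (this card does not state them), and the prompt format is illustrative only, not the exact template used in training:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"   # assumed base model
adapter_path = "path/to/this-lora-adapter"   # placeholder: this repository's adapter

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    load_in_8bit=True,        # matches the 8-bit setting used in training
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, adapter_path)
model.eval()

prompt = "以下の質問に日本語で答えてください。\n質問: 富士山の高さは？\n回答:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```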
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after this list):
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
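
For reference, the same settings can be expressed as a `transformers.BitsAndBytesConfig`. This is a minimal sketch assuming a `transformers` release with `bitsandbytes` support installed, not the exact training script used here:

```python
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
)
```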
### Framework versions
- PEFT 0.5.0.dev0