---
library_name: peft
license: cc-by-nc-4.0
language:
- ja
tags:
- llama2
---

⚠️⚠️⚠️ For research purposes only. Do not use for medical purposes. ⚠️⚠️⚠️

This model is an instruction-tuned version of Llama2-70B, trained on our own medical Q&A dataset.

## Method

QLoRA

## Parameters

- batch_size = 512
- max_steps = 30000 (around 6.89 epochs)
- source_max_len = 512
- target_max_len = 512

## Training time

1,617,017 seconds (about 18.7 days) on NVIDIA A100 x 4 (not fully utilized)

## Training procedure

The following `bitsandbytes` quantization config was used during training (see the code sketch after the citation section for an equivalent `BitsAndBytesConfig`):

- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.4.0

### How to cite

If you use this data, please consider citing the following paper.

```
@article{sukeda2023jmedlora,
  title={{JMedLoRA: Medical Domain Adaptation on Japanese Large Language Models using Instruction-tuning}},
  author={Sukeda, Issey and Suzuki, Masahiro and Sakaji, Hiroki and Kodera, Satoshi},
  journal={arXiv preprint arXiv:2310.10083},
  year={2023}
}
```
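### Quantization config as code

The settings listed under "Training procedure" map onto a `transformers` `BitsAndBytesConfig` roughly as follows. This is a minimal sketch for reproducing the quantization setup, not the actual training script:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the bitsandbytes settings listed under "Training procedure".
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```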
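Since this repository contains a PEFT adapter rather than full model weights, inference requires loading the quantized base model and attaching the adapter on top. Below is a minimal sketch, assuming the base model is `meta-llama/Llama-2-70b-hf` and using an illustrative prompt; the base-model ID, adapter ID placeholder, and prompt format are assumptions not confirmed by this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model_id = "meta-llama/Llama-2-70b-hf"  # assumed base model
adapter_id = "<this-repository-id>"          # hypothetical: replace with this adapter's repo ID

# Same 4-bit NF4 config as sketched above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard the 70B model across available GPUs
)
# Attach the QLoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt; the card does not specify the prompt format.
prompt = "質問: 高血圧の治療について教えてください。\n回答:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With `device_map="auto"`, the 4-bit base model is dispatched across whatever GPUs are available; at NF4 precision the 70B weights occupy roughly 35 GB, so multiple GPUs or a large single accelerator are still required.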