---
license: apache-2.0
datasets:
- philschmid/sharegpt-raw
language:
- zh
- en
---

This is a Chinese instruction-tuning LoRA checkpoint based on llama-7B, from [this repo's work](https://github.com/Facico/Chinese-Vicuna).

We use 50k Chinese samples, combining the [alpaca_chinese_instruction_dataset](https://github.com/hikariming/alpaca_chinese_dataset.git) with the Chinese conversation data from the sharegpt-90k dataset. We finetune the model for 3 epochs on a single 4090 with ctxlen=2048.

You can use it like this:

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Load the llama-7B base model in 8-bit
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Apply the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(
    model,
    "Chinese-Vicuna/Chinese-Vicuna-lora-7b-chatv1",
    torch_dtype=torch.float16,
    device_map={"": 0},
)
```

We provide the training args and training log [here](https://huggingface.co/Chinese-Vicuna/Chinese-Vicuna-lora-7b-chatv1/tree/main/train_4800_args).
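
For reference, a minimal generation sketch using the model loaded above. The tokenizer is assumed to come from the same base model, and the prompt template below is illustrative only; check the Chinese-Vicuna repo for the exact template used in training:

```python
import torch
from transformers import LlamaTokenizer, GenerationConfig

# Tokenizer from the same base model used for the LoRA checkpoint
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Illustrative prompt ("Hello, please introduce yourself."); not necessarily
# the exact template used during training.
prompt = "User: 你好，请介绍一下你自己。\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

model.eval()
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        generation_config=GenerationConfig(
            temperature=0.7,
            top_p=0.9,
            max_new_tokens=256,
        ),
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```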