
Model Card for chinese-llama-alpaca-plus-lora-7b

This repo contains the tokenizer, the merged Chinese-Alpaca weights, and the configs for Chinese-LLaMA-Alpaca. You can load the merged weights for chinese-llama-alpaca-plus-lora-7b directly:

from transformers import LlamaConfig, LlamaTokenizer, LlamaForCausalLM

model_name_or_path = 'we1kkk/chinese-llama-alpaca-plus-lora-7b'

# Load the config, tokenizer, and merged weights directly from the Hub.
config = LlamaConfig.from_pretrained(
    model_name_or_path,
    # trust_remote_code=True
)
tokenizer = LlamaTokenizer.from_pretrained(
    model_name_or_path,
    # trust_remote_code=True
)
# Cast to fp16 and move to GPU for inference.
model = LlamaForCausalLM.from_pretrained(
    model_name_or_path,
    config=config,
).half().cuda()
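
As a quick sanity check, here is a minimal generation sketch using the objects loaded above. The Alpaca-style prompt template, the example Chinese instruction, and the generation parameters are illustrative assumptions, not something prescribed by this repo.

import torch

# Illustrative Alpaca-style prompt; the instruction text is just an example.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n请介绍一下北京。\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Strip the prompt tokens and decode only the newly generated response.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))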

Citation

Thanks to https://github.com/ymcui/Chinese-LLaMA-Alpaca. Instructions for using the weights can be found in that repository.