# Model Card for chinese-llama-alpaca-plus-lora-7b
|
|
|
This repo contains the tokenizer, the merged Chinese-Alpaca weights, and the configs for Chinese-LLaMA-Alpaca.
|
You can directly load the merged weights for chinese-llama-alpaca-plus-lora-7b:
|
|
|
```python
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer

model_name_or_path = 'we1kkk/chinese-llama-alpaca-plus-lora-7b'

# Load the config and tokenizer from this repo.
config = LlamaConfig.from_pretrained(
    model_name_or_path,
    # trust_remote_code=True
)
tokenizer = LlamaTokenizer.from_pretrained(
    model_name_or_path,
    # trust_remote_code=True
)

# Load the merged weights, cast to fp16, and move the model to GPU.
model = LlamaForCausalLM.from_pretrained(
    model_name_or_path,
    config=config,
).half().cuda()
```
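
Once the model is loaded, generation works through the standard `transformers` API. The snippet below is a minimal sketch, not part of the original card: the Alpaca-style instruction template, the example instruction, and the sampling parameters are assumptions you may need to adjust; see the upstream Chinese-LLaMA-Alpaca repo for the exact prompt format it recommends.

```python
import torch

# Assumed Alpaca-style instruction template (adjust to the upstream project's
# recommended template if it differs).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat are the main attractions in Beijing?\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Strip the prompt tokens and decode only the newly generated text.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)
```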
|
|
|
|
|
|
|
## Citation
|
|
|
Thanks to https://github.com/ymcui/Chinese-LLaMA-Alpaca.

Instructions for using the weights can be found at https://github.com/ymcui/Chinese-LLaMA-Alpaca.
|
|
|
|