---
license: cc-by-nc-4.0
language:
- zh
---
* The Qwen-14b-chat model was fine-tuned with LoRA so that it can handle a 32k-token context (currently Chinese only).
* This repo only provides the LoRA parameters; you need to merge them into the base model yourself before use.
## How to merge the LoRA parameters into the Qwen model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base Qwen-14B-Chat model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-14B-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()

# Apply the LoRA weights, merge them into the base model, and save the merged checkpoint
model = PeftModel.from_pretrained(model, "path_to_lora_weight")
model = model.merge_and_unload()
model.save_pretrained("save_path")
```
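After merging, the saved checkpoint can be loaded like a regular Qwen model. Below is a minimal usage sketch, assuming the merged model was written to `save_path` as above; the prompt is illustrative, and `model.chat` is the chat interface exposed by Qwen's `trust_remote_code` model class.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The merged model does not include tokenizer files, so load the tokenizer from the base repo
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)

# Load the merged model saved by model.save_pretrained("save_path") above
model = AutoModelForCausalLM.from_pretrained("save_path", device_map="auto", trust_remote_code=True, bf16=True).eval()

# Example: ask a question about a long Chinese document (replace the placeholder with real text)
prompt = "请简要总结以下长文档:<文档内容>"
response, history = model.chat(tokenizer, prompt, history=None)
print(response)
```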