---
license: apache-2.0
---

ZhiLu is a financial large language model obtained by further training Chinese-Alpaca-2-13B. We performed incremental pre-training on a large corpus of Chinese and English text and aligned the model with high-quality instruction data.

The goal of training is to significantly improve capability in the financial domain while preserving the model's general abilities. For details, see the [ZhiLu GitHub repository](https://github.com/SYSU-MUCFC-FinTech-Research-Center/ZhiLu).

# ZhiLu-13B-Instruct

This repository provides the complete ZhiLu model; with it, users no longer need to download the LoRA module separately.
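
For context, here is a minimal sketch of what loading a base model together with a separate LoRA adapter would look like using the `peft` library. The `base_model_path` and `lora_adapter_path` names are placeholders, not paths published by this project; with this merged release you can skip this step and load the model directly as shown in the Quick Start below.

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel  # requires the `peft` package

# Hypothetical paths for illustration only; the merged release makes this unnecessary.
base_model_path = "path/to/chinese-alpaca-2-13b"
lora_adapter_path = "path/to/zhilu-lora"

# Load the base model, attach the LoRA adapter, then merge the adapter
# weights into the base weights for plain inference.
base_model = LlamaForCausalLM.from_pretrained(
    base_model_path, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, lora_adapter_path)
model = model.merge_and_unload()
```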
# Quick Start

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Local path or Hugging Face Hub ID of the ZhiLu-13B-Instruct weights.
model_name_or_path = ""

# Load the tokenizer and the model in bfloat16, sharding across available GPUs.
tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path, use_fast=False, legacy=True)
model = LlamaForCausalLM.from_pretrained(
    model_name_or_path, torch_dtype=torch.bfloat16, device_map="auto"
)

# Encode the prompt and generate a completion.
inputs = tokenizer("什么是A股?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)

# Decode only the newly generated tokens, dropping the prompt.
outputs = tokenizer.decode(outputs.cpu()[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
print(outputs)
```
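
In bfloat16, the 13B model needs roughly 26 GB of GPU memory for the weights alone. If that does not fit, one common workaround, assuming the `bitsandbytes` package is installed, is to load the model with 4-bit quantization. This is a minimal sketch under that assumption, not part of the official usage above:

```python
import torch
from transformers import BitsAndBytesConfig, LlamaForCausalLM, LlamaTokenizer

model_name_or_path = ""  # same ZhiLu-13B-Instruct path as above

# 4-bit NF4 quantization keeps the weights in roughly 7-8 GB of GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path, use_fast=False, legacy=True)
model = LlamaForCausalLM.from_pretrained(
    model_name_or_path, quantization_config=bnb_config, device_map="auto"
)
```

Generation then works exactly as in the Quick Start example; only the loading step changes.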