---
license: apache-2.0
datasets:
- philschmid/sharegpt-raw
language:
- zh
- en
---

This is a Chinese instruction-tuning LoRA checkpoint based on LLaMA-7B, from [this repo's work](https://github.com/Facico/Chinese-Vicuna).

We use 50k Chinese instruction samples, a combination of the [alpaca_chinese_instruction_dataset](https://github.com/hikariming/alpaca_chinese_dataset.git) and the Chinese conversation data from the ShareGPT-90K dataset.
We fine-tune the model for 3 epochs on a single RTX 4090 with ctxlen=2048.
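For orientation, here is a minimal sketch of what a LoRA fine-tuning setup with `peft` looks like; the rank, alpha, and target modules below are illustrative assumptions, not the exact values used for this checkpoint (see the training args linked at the bottom for the real configuration).
```python
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model (full precision here for simplicity)
base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

# Illustrative LoRA hyperparameters; the actual ones are in train_4800_args
lora_config = LoraConfig(
    r=8,                                   # assumed LoRA rank
    lora_alpha=16,                         # assumed scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections commonly adapted
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the LoRA adapters are trainable
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```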

You can use it like this:
```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Load the base LLaMA-7B model in 8-bit to reduce GPU memory usage
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Apply the Chinese-Vicuna LoRA weights on top of the base model
model = PeftModel.from_pretrained(
    model,
    "Chinese-Vicuna/Chinese-Vicuna-lora-7b-chatv1",
    torch_dtype=torch.float16,
    device_map={'': 0},
)
```
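As a quick sanity check after loading, you can run generation along these lines (continuing from the snippet above). The prompt is a placeholder; the exact instruction template the checkpoint was trained with is described in the Chinese-Vicuna repo.
```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Placeholder prompt; see the Chinese-Vicuna repo for the actual template
prompt = "你好，请介绍一下你自己。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```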

We provide the training arguments and training log [here](https://huggingface.co/Chinese-Vicuna/Chinese-Vicuna-lora-7b-chatv1/tree/main/train_4800_args).