---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy me a coffee"></a>

#### Description

GGML format model files for [this project](https://huggingface.co/hiyouga/baichuan-13b-sft/tree/main).
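
To fetch a GGML file from this repository, here is a minimal sketch using `huggingface_hub`; the `repo_id` and `filename` below are placeholders, so substitute the actual values shown in this repository's "Files and versions" tab:

```python
from huggingface_hub import hf_hub_download

# Hypothetical repo id and filename; replace with the actual values
# listed under "Files and versions" for this repository.
ggml_file = hf_hub_download(
    repo_id="s3nh/baichuan-13b-sft-GGML",
    filename="baichuan-13b-sft.ggmlv3.q4_0.bin",
)
print(ggml_file)  # local path of the downloaded model file
```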

### Inference

```python
from ctransformers import AutoModelForCausalLM

# `output_dir` and `ggml_file` are placeholders for the local directory
# and file name of the downloaded GGML weights.
llm = AutoModelForCausalLM.from_pretrained(output_dir,
                                           model_file=ggml_file,
                                           gpu_layers=32,
                                           model_type="llama")

manual_input: str = "Tell me about your last dream, please."

llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
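
ctransformers can also stream tokens as they are generated; a short sketch, reusing the `llm` object from the snippet above:

```python
# Print the response token by token instead of waiting for the full text.
for token in llm(manual_input, max_new_tokens=256, stream=True):
    print(token, end="", flush=True)
```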

# Original model card

A bilingual instruction-tuned LoRA model of https://huggingface.co/baichuan-inc/Baichuan-13B-Base

- Instruction-following datasets used: alpaca-en, alpaca-zh, sharegpt, open assistant, lima, refgpt
- Training framework: https://github.com/hiyouga/LLaMA-Efficient-Tuning

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("hiyouga/baichuan-13b-sft", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("hiyouga/baichuan-13b-sft", trust_remote_code=True).cuda()
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: {}\nAssistant: "
)

inputs = tokenizer([template.format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
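
The `TextStreamer` prints the response as it is generated; if you want the answer as a string instead, a small follow-up sketch that decodes `generate_ids` manually:

```python
# Skip the prompt tokens and decode only the newly generated ones.
prompt_length = inputs["input_ids"].shape[-1]
response = tokenizer.batch_decode(
    generate_ids[:, prompt_length:], skip_special_tokens=True
)[0]
print(response)
```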

Alternatively, you can launch a CLI demo using the script in https://github.com/hiyouga/LLaMA-Efficient-Tuning:

```bash
python src/cli_demo.py --template default --model_name_or_path hiyouga/baichuan-13b-sft
```

---

You can reproduce our results by following the step-by-step (Chinese) guide:

https://zhuanlan.zhihu.com/p/645010851

or by using the following script in [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path baichuan-inc/Baichuan-13B-Base \
    --do_train \
    --dataset alpaca_gpt4_en,alpaca_gpt4_zh,self_cognition,oaast_sft,sharegpt_zh,refgpt_zh_p1,lima,self_cognition \
    --template default \
    --finetuning_type lora \
    --lora_rank 32 \
    --lora_target W_pack,o_proj,down_proj,up_proj,gate_proj \
    --output_dir baichuan_13b_lora \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --preprocessing_num_workers 16 \
    --max_source_length 1024 \
    --max_target_length 512 \
    --optim paged_adamw_32bit \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --warmup_steps 100 \
    --learning_rate 3e-5 \
    --max_grad_norm 0.5 \
    --num_train_epochs 2.0 \
    --quantization_bit 4 \
    --plot_loss \
    --fp16
```
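
The command above writes a LoRA adapter to `baichuan_13b_lora`. Assuming that checkpoint is in the standard PEFT format, one way to merge the adapter into the base model is with the `peft` library directly (this is a sketch, not necessarily the export path the training framework itself provides):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, attach the trained adapter, and merge the weights.
base = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Base", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "baichuan_13b_lora")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("baichuan-13b-sft-merged")
```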

Loss curve:

![loss](loss.svg)

---

Examples:

| Question | Baichuan-13b-sft | ChatGPT |
| -------- | ---------------- | ------- |
| 你是谁 (Who are you?) | 我是 \<NAME\>,一款由 \<AUTHOR\> 开发的人工智能助手。我可以回答各种问题,提供帮助和支持。 (I am \<NAME\>, an AI assistant developed by \<AUTHOR\>. I can answer all kinds of questions and provide help and support.) | 我是ChatGPT,一个由OpenAI开发的人工智能语言模型。我基于GPT-3.5架构训练,旨在回答各种问题和提供帮助。有什么我可以帮助你的吗? (I am ChatGPT, an AI language model developed by OpenAI. I was trained on the GPT-3.5 architecture and am designed to answer all kinds of questions and provide help. Is there anything I can help you with?) |
| 中国第三代战斗机的代表是什么? (What is the representative third-generation Chinese fighter jet?) | 中