GanymedeNil committed on
Commit
51d7b1b
1 Parent(s): 37fc49e

Create README.md

Files changed (1)
  1. README.md +81 -0
README.md ADDED
@@ -0,0 +1,81 @@
---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- zh
pipeline_tag: text-generation
datasets: linux-cn/archive
library_name: transformers
---
# Introduction
The main purpose of this model is to generate titles for technology articles.
All intermediate checkpoints from step 100 to step 2200 are open-sourced for reference.

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load one of the released checkpoints, e.g. checkpoint-2000
peft_model_id = "checkpoint-2000"
model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="cuda")

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# Fixed prompt format; replace {content} with the article text
input_text = """
Generate a title for the article:

{content}

---
Title:
"""
encoding = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**encoding, max_length=8000, temperature=0.2, do_sample=True)
# Keep only the newly generated tokens, then decode the title
generated_ids = outputs[:, encoding.input_ids.shape[1]:]
generated_texts = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])
```
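The original snippet also imported `BitsAndBytesConfig`, which hints at quantized loading. Below is a minimal sketch of loading the checkpoint in 4-bit to save GPU memory, assuming `bitsandbytes` is installed; the quantization settings are illustrative and not part of the model card itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit NF4 configuration (assumption: bitsandbytes is available)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "checkpoint-2000",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
```

Generation then works exactly as in the snippet above.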

# Training data
linux-cn articles
https://huggingface.co/datasets/linux-cn/archive

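The archive can be pulled directly with the `datasets` library; a minimal sketch, assuming the default config and splits (not specified here):

```python
from datasets import load_dataset

# Load the linux-cn archive used for fine-tuning (default config/splits assumed)
ds = load_dataset("linux-cn/archive")
print(ds)
```
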
# Fine-tuning
Fine-tuned with [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory); the fine-tuning arguments are as follows:

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train True \
    --model_name_or_path google/gemma-2b \
    --finetuning_type lora \
    --template default \
    --dataset title \
    --use_unsloth \
    --cutoff_len 8192 \
    --learning_rate 5e-05 \
    --num_train_epochs 10.0 \
    --max_samples 10000 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --warmup_steps 0 \
    --output_dir saves/Gemma-2B/lora/train_2024-03-01-04-36-32 \
    --bf16 True \
    --lora_rank 8 \
    --lora_dropout 0.1 \
    --lora_target q_proj,v_proj \
    --val_size 0.1 \
    --load_best_model_at_end True \
    --plot_loss True \
    --report_to "tensorboard"
```
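
Since fine-tuning uses LoRA (`--finetuning_type lora`), a saved checkpoint can be merged back into the base Gemma weights for standalone inference. A minimal sketch with `peft`; the checkpoint path below is an assumption derived from `--output_dir` and `--save_steps`, not a path stated in this card:

```python
from peft import AutoPeftModelForCausalLM

# Load base model + LoRA adapter from a training checkpoint (path is an assumed example)
model = AutoPeftModelForCausalLM.from_pretrained(
    "saves/Gemma-2B/lora/train_2024-03-01-04-36-32/checkpoint-2000"
)

# Fold the adapter weights into the base model and save a standalone copy
merged = model.merge_and_unload()
merged.save_pretrained("gemma-2b-title-merged")
```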