joey00072 committed on
Commit
1df488d
1 Parent(s): d2ba6a3

Create README.md

---
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
license: apache-2.0
language:
- en
datasets:
- unalignment/toxic-dpo-v0.1
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/631af7694ef8f5858dcf45c8/QgwbkTZgQS-TtLzEJTzN-.png" width="600" >

## ToxicHermes

[OpenHermes-2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model + [toxic-dpo](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1?not-for-all-audiences=true) dataset = ToxicHermes

Fine-tuned with Direct Preference Optimization (DPO).

- Base model: [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
- Dataset: [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
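To make the training objective concrete: the sketch below computes the per-example DPO loss, `-log sigmoid(beta * ((log pi(y_w) - log pi_ref(y_w)) - (log pi(y_l) - log pi_ref(y_l))))`, with the `beta=0.1` used for this fine-tune. The log-probability values are made-up toy numbers for illustration, not outputs of this model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * reward margin)."""
    # Implicit rewards: how much more the policy favors each answer than the reference
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy log-probabilities: the policy has shifted toward the chosen answer
loss = dpo_loss(policy_chosen_logp=-10.0, policy_rejected_logp=-14.0,
                ref_chosen_logp=-11.0, ref_rejected_logp=-12.0)
print(round(loss, 4))
```

When the margin is positive (the policy prefers the chosen answer more strongly than the reference does), the loss drops below `log(2) ≈ 0.693`; training pushes it toward zero.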
## Usage

You can run this model with the following code:

```python
import transformers
from transformers import AutoTokenizer

model_id = "path/to/ToxicHermes"  # replace with this model's Hub repo id or a local path

# Format the prompt with the model's chat template
messages = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"},
]
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Create a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]["generated_text"])
```

## Training hyperparameters

**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']

**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100

**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
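These hyperparameters map onto a `peft` `LoraConfig`, `transformers` `TrainingArguments`, and a `trl` `DPOTrainer`. The following is a sketch of how they fit together, assuming a trl version (circa 0.7) whose `DPOTrainer` still accepts `beta` and the length limits directly; `output_dir`, the `model`/`tokenizer`/`dataset` variables, and their loading are illustrative assumptions, not the author's actual training script.

```python
# Sketch only: assumes transformers, peft, and trl (~0.7) are installed, and
# that model, tokenizer, and a preference dataset with prompt/chosen/rejected
# columns have already been loaded. Not the author's exact script.
from transformers import TrainingArguments
from peft import LoraConfig
from trl import DPOTrainer

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj',
                    'q_proj', 'o_proj', 'down_proj'],
)

training_args = TrainingArguments(
    output_dir="./toxichermes-dpo",  # assumed, not stated on the card
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

trainer = DPOTrainer(
    model,                  # base OpenHermes-2.5 model (loading omitted)
    ref_model=None,         # with a peft_config, trl derives the frozen reference
    args=training_args,
    train_dataset=dataset,  # unalignment/toxic-dpo-v0.1 in prompt/chosen/rejected form
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```

Passing `ref_model=None` together with a `peft_config` lets trl use the frozen base weights as the DPO reference model, which halves the memory footprint versus keeping a second full copy.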