mrfakename committed
Commit 24e1482 (parent: 4da0021)

Create README.md

Files changed (1): README.md added (+42 lines)
---
datasets:
- mrfakename/refusal
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
---

I messed up on the [previous model](https://huggingface.co/mrfakename/refusal-old); this is a fixed version.

A tiny 1.1B model that refuses basically anything you ask it! Trained on the [refusal](https://huggingface.co/datasets/mrfakename/refusal) dataset. The prompt format is ChatML; a minimal usage sketch is shown below.
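
A minimal, illustrative loading-and-generation sketch, assuming this repo's model ID is `mrfakename/refusal` (the ID is not stated in the README text itself) and building the ChatML prompt by hand:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID for this repo; adjust if the actual ID differs.
model_id = "mrfakename/refusal"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ChatML prompt format, as stated above: each turn is wrapped in
# <|im_start|>{role} ... <|im_end|> tags.
prompt = (
    "<|im_start|>user\n"
    "What is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Expect a refusal regardless of the question; that is the point of the model.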

Training results:

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4352        | 0.0580 | 1    | 2.4462          |
| 1.5742        | 0.5217 | 9    | 1.4303          |
| 1.5084        | 1.0435 | 18   | 1.3672          |
| 1.0814        | 1.5217 | 27   | 1.3483          |
| 1.1024        | 2.0435 | 36   | 1.3204          |
| 0.6554        | 2.5217 | 45   | 1.4286          |
| 0.6163        | 3.0435 | 54   | 1.4375          |
| 0.5058        | 3.5072 | 63   | 1.4908          |

Training hyperparameters:

The following hyperparameters were used during training (a rough sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
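
The actual training script is not included here, so the following is only a hedged sketch of how the listed hyperparameters would map onto Hugging Face `TrainingArguments`; the `output_dir` value is a placeholder:

```python
from transformers import TrainingArguments

# Illustrative mapping of the README's hyperparameters onto
# TrainingArguments; not the author's actual training code.
args = TrainingArguments(
    output_dir="refusal-finetune",   # placeholder path, not from the README
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,   # 2 per device x 4 steps = 8 total batch
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=4,
)
```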

Base model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T