---
base_model: mrfakename/refusal
datasets:
- mrfakename/refusal
inference: true
language:
- en
library_name: transformers
model_creator: mrfakename
model_name: refusal
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
---

# refusal-GGUF

Quantized GGUF model files for [refusal](https://huggingface.co/mrfakename/refusal) from [mrfakename](https://huggingface.co/mrfakename).
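
A minimal usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The quantized file name below is a placeholder, not taken from this card; check the repository's file list for the actual GGUF files.

```python
# Minimal sketch using llama-cpp-python; the file name is hypothetical --
# substitute the actual quantized file from the repo's file list.
from llama_cpp import Llama

llm = Llama(
    model_path="refusal.q4_k_m.gguf",  # hypothetical file name
    chat_format="chatml",              # the model was trained with ChatML
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```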

## Original Model Card:

I messed up on the [previous model](https://huggingface.co/mrfakename/refusal-old). This is a fixed version.

A tiny 1B model that refuses basically anything you ask it! Trained on the [refusal](https://huggingface.co/datasets/mrfakename/refusal) dataset. The prompt format is ChatML; see the example below.
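
ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. A single-turn prompt looks like:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Why is the sky blue?<|im_end|>
<|im_start|>assistant
```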

Training results:

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4352        | 0.0580 | 1    | 2.4462          |
| 1.5741        | 0.5217 | 9    | 1.4304          |
| 1.5204        | 1.0435 | 18   | 1.3701          |
| 1.0794        | 1.5217 | 27   | 1.3505          |
| 1.1275        | 2.0435 | 36   | 1.3344          |
| 0.6652        | 2.5217 | 45   | 1.4360          |
| 0.6248        | 3.0435 | 54   | 1.4313          |
| 0.6142        | 3.5072 | 63   | 1.4934          |

Training hyperparameters:

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
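
These settings map directly onto `transformers.TrainingArguments`. The sketch below is a reconstruction from the list above, not the author's actual training script; `output_dir` is a placeholder.

```python
# Reconstruction of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="refusal-out",       # hypothetical
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,  # 2 x 4 = total train batch size of 8
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=4,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the
    # optimizer defaults, so no explicit optimizer args are needed.
)
```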

Base model: [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)