---
library_name: peft
license: wtfpl
language:
- en
pipeline_tag: text-generation
---

## Model description

The togethercomputer/RedPajama-INCITE-Base-3B-v1 model fine-tuned for paraphrasing and for changing the tone of an input sentence (to `casual`, `professional`, or `witty`). The training data was generated using gpt-35-turbo.

See the [llm-toys](https://github.com/kuutsav/llm-toys) repo for usage and other details.

Try in colab:
<a target="_blank" href="https://colab.research.google.com/drive/1MSl8IDLjs3rgEv8cPHbJLR8GHh2ucT3_">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>


## Installation

```bash
pip install llm-toys
```

```python
from llm_toys.tasks import Paraphraser

paraphraser = Paraphraser()
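
# Note: the typos in the inputs below are intentional; the outputs show
# that the model cleans them up while paraphrasing.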
paraphraser.paraphrase("Hey, can yuo hepl me cancel my last order?")
# "Could you kindly assist me in canceling my previous order?"

paraphraser.paraphrase("Hey, can yuo hepl me cancel my last order?", tone="professional")
# "I would appreciate guidance on canceling my previous order."

paraphraser.paraphrase("Hey, can yuo hepl me cancel my last order?", tone="witty")
# "Hey, I need your help with my last order. Can you wave your magic wand and make it disappear?"
```
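
If you'd rather not depend on `llm-toys`, the adapter can also be loaded directly with `peft` and `transformers`. This is a minimal sketch, not the project's documented API: the adapter id below is a placeholder for this repository, and the prompt format the model expects is defined in the llm-toys repo.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "togethercomputer/RedPajama-INCITE-Base-3B-v1"
adapter_id = "<this-adapter-repo-id>"  # placeholder: substitute this repository's id

# Load the base model in bfloat16 and attach the LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, adapter_id)
```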

## Sample training data

```json
{
  "original": "If you have any further questions, feel free to ask.",
  "casual": "Got more questions? Feel free to ask away. I'm here to help!",
  "professional": "Should you have any additional inquiries, please don't hesitate to ask.",
  "witty": "Curiosity is always in style! If you have more mysteries to solve, I'm all ears!",
  "paraphrase": "Don't hesitate to ask if you have any more questions."
}
```
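
Each record carries one `original` sentence plus four targets, so it can be flattened into several (prompt, target) training pairs. A hypothetical sketch of that flattening is below; the actual prompt template lives in the llm-toys repo and may differ.

```python
# Hypothetical flattening of one training record into (prompt, target) pairs.
# The prompt wording here is illustrative, not the template llm-toys uses.
record = {
    "original": "If you have any further questions, feel free to ask.",
    "casual": "Got more questions? Feel free to ask away. I'm here to help!",
    "professional": "Should you have any additional inquiries, please don't hesitate to ask.",
    "witty": "Curiosity is always in style! If you have more mysteries to solve, I'm all ears!",
    "paraphrase": "Don't hesitate to ask if you have any more questions.",
}

pairs = [("Paraphrase: " + record["original"], record["paraphrase"])]
for tone in ("casual", "professional", "witty"):
    pairs.append((f"Paraphrase in a {tone} tone: " + record["original"], record[tone]))
```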

## Training params

```json
{
  "batch_size": 8,
  "eval_ratio": 0.1,
  "eval_steps": 100,
  "gradient_accumulation_steps": 1,
  "learning_rate": 0.0001,
  "logging_steps": 100,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "lora_r": 16,
  "max_length": 128,
  "model_name": "togethercomputer/RedPajama-INCITE-Base-3B-v1",
  "num_train_epochs": 3,
  "seed": 10,
  "task_type": "paraphrase_tone",
  "use_aim": True
}
```
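
The LoRA values above map onto a `peft` `LoraConfig` roughly as follows. This is a sketch rather than the original training code; in particular, `target_modules` is an assumption (it is not listed in the params), chosen because GPT-NeoX-style models such as RedPajama-INCITE typically target `query_key_value`.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                # lora_r
    lora_alpha=32,                       # lora_alpha
    lora_dropout=0.05,                   # lora_dropout
    target_modules=["query_key_value"],  # assumption; not in the params above
    task_type="CAUSAL_LM",
)
```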

## Training curve

![train_eval_loss](RedPajama-INCITE-Base-3B-v1-paraphrase-tone.jpeg)

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
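
In recent `transformers` versions, that configuration corresponds roughly to the `BitsAndBytesConfig` below; a reproduction sketch, not the original training script.

```python
import torch
from transformers import BitsAndBytesConfig

# Approximate equivalent of the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```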

### Framework versions

- PEFT 0.4.0.dev0