---
license: apache-2.0
tags:
- generated_from_trainer
- email generation
- email
datasets:
- aeslc
- postbot/multi_emails

widget:
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
  example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. Let me start by saying that I am a big fan of your work."
  example_title: "fan"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
  example_title: "festival"
- text: "Good Morning <NAME>,\n\nI was just thinking to myself about how much I love creating value"
  example_title: "value"
- text: "URGENT - I need the TPS reports"
  example_title: "URGENT"
- text: "Hi <NAME>,\n\nI hope this email finds you extremely well."
  example_title: "emails that find you"

parameters:
  min_length: 4
  max_length: 96
  length_penalty: 0.7
  no_repeat_ngram_size: 3
  do_sample: false
  num_beams: 4
  early_stopping: true
  repetition_penalty: 4.5
---

# distilgpt2-emailgen

Why write the rest of your email when you can generate it?

```python
from transformers import pipeline

model_tag = "postbot/distilgpt2-emailgen"
generator = pipeline(
    'text-generation',
    model=model_tag,
    do_sample=False,
    early_stopping=True,
)

prompt = """
Hello,

Following up on the bubblegum shipment."""

generator(
    prompt,
    max_length=64,
)  # returns a list of dicts containing the generated text
```

A script for running this model on CPU / from the command line is available [here](https://gist.github.com/pszemraj/c1b0a76445418b6bbddd5f9633d1bb7f) :)

> For this model, formatting matters: results may be (significantly) different between a prompt structured as above and a single-line prompt such as `prompt = "Hey, just wanted to ..."`.
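
The `parameters` block in the metadata above lists the generation settings used by the hosted inference widget. The same settings can be passed straight to the pipeline call; here is a minimal sketch reusing `generator` and `prompt` from the snippet above (`return_full_text=False` is optional and simply omits the prompt from the returned text):

```python
result = generator(
    prompt,
    min_length=4,
    max_length=96,
    length_penalty=0.7,
    no_repeat_ngram_size=3,
    do_sample=False,
    num_beams=4,
    early_stopping=True,
    repetition_penalty=4.5,
    return_full_text=False,  # return only the suggested continuation, not the prompt
)
print(result[0]["generated_text"])
```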

## Model description

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a dataset of 50k emails, including the classic `aeslc` dataset.

It achieves the following results on the evaluation set:
- Loss: 2.6247
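
Assuming the reported loss is the mean token-level cross-entropy in nats, this corresponds to a perplexity of roughly exp(2.6247) ≈ 13.8.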

## Intended uses & limitations

The intended use of this model is to provide suggestions that "autocomplete" the rest of your email; in other words, it should serve as a *tool to write predictable emails faster*. It is not intended to write entire emails on its own, as at least *some* input is required to guide the direction of the model.

Please check any suggestion from the model for (A) false claims and (B) flipped or dropped negations before accepting or sending it.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 5
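
The training script itself is not part of this card. As a rough guide only, the listed hyperparameters might map onto Hugging Face `TrainingArguments` as in the hedged sketch below (note that 8 × 32 gradient-accumulation steps matches the reported total batch size of 256):

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above onto TrainingArguments;
# the actual training script is not included in this model card.
training_args = TrainingArguments(
    output_dir="distilgpt2-emailgen",  # placeholder output path
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    seed=42,
    evaluation_strategy="epoch",  # assumption: the results table below shows one eval per epoch
)
```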

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8299        | 1.0   | 248  | 2.7971          |
| 2.6984        | 2.0   | 496  | 2.6826          |
| 2.7022        | 3.0   | 744  | 2.6361          |
| 2.6436        | 4.0   | 992  | 2.6245          |
| 2.6195        | 5.0   | 1240 | 2.6247          |

### Framework versions

- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1