---
license: other
tags:
- generated_from_trainer
- opt
- custom-license
- no-commercial
- email
- auto-complete
datasets:
- aeslc
widget:
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. Let me start by saying that I am a big fan of your work."
example_title: "fan"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
example_title: "festival"
- text: "Good Morning <NAME>,\n\nI was just thinking to myself about how much I love creating value"
example_title: "value"
- text: "URGENT - I need"
example_title: "URGENT"
inference:
parameters:
min_length: 4
max_length: 64
length_penalty: 0.7
no_repeat_ngram_size: 3
do_sample: False
num_beams: 4
early_stopping: True
repetition_penalty: 3.5
---
# opt for email generation - 350M
Why write the rest of your email when you can generate it?
```python
from transformers import pipeline

model_tag = "pszemraj/opt-350m-email-generation"

# greedy decoding; settings mirror the widget config in the card header
generator = pipeline(
    "text-generation",
    model=model_tag,
    do_sample=False,
    early_stopping=True,
)

prompt = """
Hello,
Following up on the bubblegum shipment."""

# returns a list of dicts, each with a 'generated_text' key
generator(
    prompt,
    max_length=64,
)
```
- See the [demo notebook](https://colab.research.google.com/gist/pszemraj/acadd34e11a8dd9df8e7e25a8ec2537a/email-autocomplete-demo.ipynb) on Colab
> For this model, formatting matters. Results can differ (significantly) between a structured prompt like the one above and a terse one-liner such as `prompt = "Hey, just wanted to ..."`.
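For example (a minimal illustration reusing the `generator` pipeline from the snippet above; outputs will vary with the decoding settings):

```python
# structured prompt with a greeting and line breaks, as in the widget examples
structured = "Hi <NAME>,\n\nI hope this email finds you well. Let me start by saying that I am a big fan of your work."

# terse one-liner prompt
one_liner = "Hey, just wanted to ..."

for p in (structured, one_liner):
    text = generator(p, max_length=64)[0]["generated_text"]
    print(text)
    print("---")
```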
## Model description
- This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the [aeslc](https://huggingface.co/datasets/aeslc) dataset for six epochs.
- Email addresses, phone numbers, etc., were removed (to the extent possible) in a dataset preparation step using [clean-text](https://pypi.org/project/clean-text/) in Python.
- Note that the hosted inference API is restricted to generating 64 tokens; you can generate longer emails by using the model in a local text-generation `pipeline` (see the sketch below).
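A minimal sketch of generating beyond the 64-token cap locally. The decoding parameters mirror the inference config in the card header; `max_length=256` is an arbitrary choice, not a value from the original card:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/opt-350m-email-generation")

prompt = "Hello,\nFollowing up on the bubblegum shipment."
result = generator(
    prompt,
    max_length=256,            # well past the 64-token API limit (arbitrary value)
    num_beams=4,
    do_sample=False,
    no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    length_penalty=0.7,
    early_stopping=True,
)
print(result[0]["generated_text"])
```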
## Intended uses & limitations
- In their everlasting wisdom, Facebook/Meta created a custom, non-commercial license for OPT that specifies several usage restrictions. See [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) for details.
## Training and evaluation data
- The `email_body` field of the train and validation splits of the [aeslc](https://huggingface.co/datasets/aeslc) dataset, combined to get more training data (as sketched below).
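A rough sketch of how those splits could be pooled with the `datasets` library (this does not reproduce the full preprocessing, e.g. the clean-text pass mentioned above):

```python
from datasets import load_dataset, concatenate_datasets

aeslc = load_dataset("aeslc")

# pool train + validation to get more email bodies for fine-tuning
emails = concatenate_datasets([aeslc["train"], aeslc["validation"]])
print(emails.num_rows)
print(emails[0]["email_body"][:200])
```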
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
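A hedged sketch of how these values might map onto `transformers.TrainingArguments` (the output path is a placeholder; the original training script is not part of this card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="opt-350m-email-generation",  # placeholder path
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,   # 8 x 16 matches the reported total batch size of 128
    num_train_epochs=6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```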
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1