---
license: other
tags:
- generated_from_trainer
- opt
- custom-license
- no-commercial
datasets:
- aeslc
widget:
- text: "Hey <NAME>, Thank you for signing up to my weekly newsletter. Before we get started, you’ll have to confirm your email address."
  example_title: "newsletter"
- text: "Hi <NAME>, I hope this email finds you well. Let me start by saying that I am a big fan of your work"
  example_title: "fan"
inference:
  parameters:
    min_length: 16
    max_length: 64
    length_penalty: 0.7
    no_repeat_ngram_size: 3
    do_sample: false
    num_beams: 4
    early_stopping: true
    repetition_penalty: 2.1
---
|
|
|
|
|
# OPT for email generation - 350M
|
|
|
- This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the [aeslc](https://huggingface.co/datasets/aeslc) dataset for six epochs.
- An attempt was made to exclude email addresses, phone numbers, and similar personal data in a dataset preparation step, using the Python package [clean-text](https://pypi.org/project/clean-text/).
- Note that the hosted inference API is restricted to generating 64 tokens; you can generate longer emails by loading the model into a text-generation `pipeline` object, as sketched below.
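A minimal usage sketch follows. The repo id is a placeholder for this model's Hub id, and the generation settings mirror the widget parameters in the card header, with `max_new_tokens` raised past the API's 64-token cap:

```python
from transformers import pipeline

# "<this-repo-id>" is a placeholder - substitute the Hub id of this checkpoint
generator = pipeline("text-generation", model="<this-repo-id>")

prompt = (
    "Hey <NAME>,\n\nThank you for signing up to my weekly newsletter. "
    "Before we get started, you'll have to confirm your email address."
)

result = generator(
    prompt,
    min_length=16,
    max_new_tokens=128,  # go beyond the API's 64-token limit for longer emails
    no_repeat_ngram_size=3,
    length_penalty=0.7,
    repetition_penalty=2.1,
    num_beams=4,
    do_sample=False,
    early_stopping=True,
)
print(result[0]["generated_text"])
```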
|
|
|
## Model description

More information needed
|
|
|
## Intended uses & limitations

- In their everlasting wisdom, Facebook/Meta have released this model under a custom license that, among other restrictions, forbids commercial use. See [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) for the full terms.
|
|
|
## Training and evaluation data

More information needed
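One preparation detail is noted in the summary above: personal information was scrubbed with [clean-text](https://pypi.org/project/clean-text/). A minimal sketch of such a scrubbing step, with assumed flag choices rather than the exact call used, might look like:

```python
from cleantext import clean  # pip install clean-text

def scrub(text: str) -> str:
    # swap emails and phone numbers for placeholder tokens; flags are assumptions
    return clean(
        text,
        fix_unicode=True,
        lower=False,  # keep the original casing of the emails
        no_emails=True,
        no_phone_numbers=True,
        replace_with_email="<EMAIL>",
        replace_with_phone_number="<PHONE>",
    )

print(scrub("Reach me at jane.doe@example.com or 555-123-4567."))
```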
|
|
|
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
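As a rough illustration (not the training script itself), these values map onto `transformers.TrainingArguments` as follows; `output_dir` is hypothetical and unlisted flags are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="opt-350m-aeslc",     # hypothetical output path
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 8 x 16 = 128, the listed total batch size
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=6,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer
    # defaults, so no explicit optimizer arguments are required
)
```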
|
|
|
### Training results

More information needed

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
|