---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- BERT
datasets:
- postbot/multi-emails-hq
metrics:
- accuracy
pipeline_tag: fill-mask
widget:
- text: Can you please send me the [MASK] by the end of the day?
example_title: end of day
- text: I hope this email finds you well. I wanted to follow up on our [MASK] yesterday.
example_title: follow-up
- text: The meeting has been rescheduled to [MASK].
example_title: reschedule
- text: Please let me know if you need any further [MASK] regarding the project.
example_title: further help
- text: I appreciate your prompt response to my previous email. Can you provide an
update on the [MASK] by tomorrow?
example_title: provide update
- text: Paris is the [MASK] of France.
example_title: paris (default)
- text: The goal of life is [MASK].
example_title: goal of life (default)
base_model: google/bert_uncased_L-2_H-256_A-4
model-index:
- name: bert_uncased_L-2_H-256_A-4-mlm-multi-emails-hq
results: []
---
# bert_uncased_L-2_H-256_A-4-mlm-multi-emails-hq
This model is a fine-tuned version of [google/bert_uncased_L-2_H-256_A-4](https://huggingface.co/google/bert_uncased_L-2_H-256_A-4) on the `postbot/multi-emails-hq` dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4596 (a masked-token perplexity of roughly exp(2.4596) ≈ 11.7)
- Accuracy: 0.5642
## Model description
This is a ~40 MB BERT variant (2 layers, hidden size 256, 4 attention heads) fine-tuned with a masked-language-modeling (MLM) objective on email data.
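For quick inference, a minimal sketch using the `transformers` fill-mask pipeline; the repo id below is assumed from the model name above, so adjust the namespace to wherever the checkpoint is actually hosted:

```python
from transformers import pipeline

# Repo id assumed from the model name above; adjust if the checkpoint
# is hosted under a different namespace.
fill_mask = pipeline(
    "fill-mask",
    model="postbot/bert_uncased_L-2_H-256_A-4-mlm-multi-emails-hq",
)

# BERT-style tokenizers use [MASK] as the mask token.
for pred in fill_mask("Can you please send me the [MASK] by the end of the day?"):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```

Each prediction carries the filled-in token (`token_str`) and its probability (`score`), sorted from most to least likely.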
## Intended uses & limitations
- This model is primarily a test/example and is not intended for production use.
## Training and evaluation data
The model was fine-tuned and evaluated on the `postbot/multi-emails-hq` dataset linked above; see the dataset card for details.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 8.0
- mixed_precision_training: Native AMP
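For reference, a hedged sketch of how the settings above map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder rather than a value from the actual run, and the Adam betas/epsilon are omitted because they match the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./bert-tiny-mlm-emails",  # hypothetical path, not from the run
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=16,  # 8 x 16 (x world size) -> total batch size 128
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=8.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```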
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.097 | 0.99 | 141 | 2.8195 | 0.5180 |
| 2.9097 | 1.99 | 282 | 2.6704 | 0.5367 |
| 2.8335 | 2.99 | 423 | 2.5764 | 0.5485 |
| 2.7433 | 3.99 | 564 | 2.5213 | 0.5563 |
| 2.6828 | 4.99 | 705 | 2.4667 | 0.5641 |
| 2.666 | 5.99 | 846 | 2.4688 | 0.5642 |
| 2.6517 | 6.99 | 987 | 2.4452 | 0.5679 |
| 2.6309 | 7.99 | 1128 | 2.4596 | 0.5642 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 2.0.0.dev20230129+cu118
- Datasets 2.8.0
- Tokenizers 0.13.1