---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- go-emotion
- pytorch
license: apache-2.0
datasets:
- go_emotions
metrics:
- Accuracy
---
# Distilbert-Base-Uncased-Go-Emotion
## Model description:
[DistilBERT](https://arxiv.org/abs/1910.01108) is trained with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language-understanding capabilities, making it smaller and faster than BERT.

[distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset using the Hugging Face Trainer with the hyperparameters below.
## Training Parameters:
```
Num examples = 169208
Num Epochs = 3
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 31728
```
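A minimal sketch of how such a fine-tuning run could be set up, assuming the standard `go_emotions` multi-label setup with a `BCEWithLogitsLoss` objective (suggested by the thresholded accuracy metric reported below); the tokenization, dataset config, and output directory are illustrative assumptions, only the hyperparameter values mirror the list above:

```python
# Sketch of a fine-tuning setup with the hyperparameters listed above.
# Dataset config, max length, and output_dir are assumptions, not from this card.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("go_emotions", "simplified")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
num_labels = dataset["train"].features["labels"].feature.num_classes

def preprocess(batch):
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    # One-hot encode the multi-label targets as floats for BCEWithLogitsLoss
    onehot = np.zeros((len(batch["labels"]), num_labels), dtype=np.float32)
    for i, labels in enumerate(batch["labels"]):
        onehot[i, labels] = 1.0
    enc["labels"] = onehot.tolist()
    return enc

encoded = dataset.map(preprocess, batched=True,
                      remove_columns=dataset["train"].column_names)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=num_labels,
    problem_type="multi_label_classification",  # uses BCEWithLogitsLoss
)

args = TrainingArguments(
    output_dir="distilbert-go-emotions",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"])
trainer.train()
```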
## TrainOutput:
```
'train_loss': 0.12085497042373672,
```
## Evaluation Output:
```
'eval_accuracy_thresh': 0.9614765048027039,
'eval_loss': 0.1164659634232521
```
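## Usage:
A hedged inference sketch; `"distilbert-go-emotions"` below is the placeholder output directory from the training sketch above (or the Hub repository ID of this model), not a name stated in this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; replace the path with this model's actual Hub ID.
classifier = pipeline("text-classification",
                      model="distilbert-go-emotions",
                      top_k=None)  # return a score for every go_emotions label

print(classifier("I am so happy about the results!"))
```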