---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- f1_score
model-index:
- name: results
  results: []
license: apache-2.0
language:
- th
base_model:
- distilbert/distilbert-base-uncased
---
# Model: Fine-Tuned Transformer
This model is a fine-tuned Transformer built from a DistilBERT-like configuration and a custom-trained BPE tokenizer. It was fine-tuned on a specific dataset, with a maximum sequence length of 512 tokens, for a classification task with 3 labels.
### Key Evaluation Metrics
- **Loss**: 0.3656
- **F1 Micro**: 0.8763
- **Validation Set Size**: 7,608 samples
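
The snippet below is a minimal inference sketch, assuming the model and tokenizer were saved in a format that `AutoTokenizer` and `AutoModelForSequenceClassification` can load. The repository id and the example sentence are placeholders, and only the predicted label index is shown because the label names are not published.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "username/results"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a single Thai sentence, truncating to the model's 512-token limit
inputs = tokenizer("ตัวอย่างข้อความ", truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(logits.argmax(dim=-1).item())  # predicted label index (0, 1, or 2)
```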
## Model Description
This model is based on the **DistilBERT** architecture with the following configuration (a code sketch follows the list):
- **Sequence Length**: 512 tokens
- **Number of Layers**: 6 transformer layers
- **Number of Attention Heads**: 8
- **Vocabulary Size**: 20,000 (custom Byte Pair Encoding tokenizer)
- **Max Position Embeddings**: 512
- **Pad Token ID**: Defined by the custom tokenizer
- **Number of Labels**: 3 (for multi-class classification)
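
A configuration with these values could be instantiated roughly as follows. This is a sketch rather than the exact training code; in particular, the pad token id shown is a placeholder and in practice comes from the custom tokenizer.

```python
from transformers import DistilBertConfig, DistilBertForSequenceClassification

config = DistilBertConfig(
    vocab_size=20_000,           # custom BPE vocabulary
    max_position_embeddings=512,
    n_layers=6,
    n_heads=8,
    num_labels=3,                # multi-class classification
    pad_token_id=0,              # placeholder; use the id defined by the custom tokenizer
)
model = DistilBertForSequenceClassification(config)
```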
The tokenizer used for this model is a custom Byte Pair Encoding (BPE) tokenizer trained on the combined training and test datasets.
## Tokenizer
A custom tokenizer was built using **Byte Pair Encoding (BPE)** with a vocabulary size of 20,000. The tokenizer was trained on both the training and test sets to capture a wide range of token patterns.
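
A comparable tokenizer can be trained with the `tokenizers` library. The sketch below shows one plausible setup; the pre-tokenizer, special tokens, and file names are assumptions rather than details taken from the original training.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()  # assumption; not confirmed by the card

trainer = trainers.BpeTrainer(
    vocab_size=20_000,
    special_tokens=["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"],  # assumed special tokens
)
tokenizer.train(files=["train.txt", "test.txt"], trainer=trainer)  # placeholder file names
tokenizer.save("bpe-tokenizer.json")
```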
## Training and Evaluation Data
- Training Set Size: 43,112 samples
- Validation Set Size: 7,608 samples
The model was trained and evaluated on a dataset that has not been publicly released. It was trained for a multi-class classification task with 3 possible labels.
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 88
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
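
These values map onto a `TrainingArguments` object roughly as sketched below; the output directory and the evaluation/logging cadence are assumptions rather than details from the original run, and the Adam settings listed above are the Trainer's defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",            # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=88,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                       # native AMP mixed precision
    eval_strategy="steps",           # assumed; results below are reported every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```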
## Training Results
| Training Loss | Step | Validation Loss | F1 Micro |
|:-------------:|:----:|:---------------:|:--------:|
| 0.8035 | 500 | 0.5608 | 0.7821 |
| 0.4855 | 1000 | 0.4392 | 0.8266 |
| 0.3769 | 1500 | 0.3930 | 0.8433 |
| 0.3159 | 2000 | 0.3589 | 0.8675 |
| 0.2790 | 2500 | 0.3552 | 0.8697 |
| 0.2463 | 3000 | 0.3812 | 0.8699 |
| 0.2260 | 3500 | 0.3619 | 0.8690 |
| 0.2072 | 4000 | 0.3548 | 0.8754 |
| 0.1926 | 4500 | 0.3656 | 0.8763 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 3.0.0
- Tokenizers 0.19.1