---
language: fr
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: camembert-sentiment-allocine
results: []
datasets:
- allocine
metrics:
- accuracy
---
# camembert-sentiment-allocine
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the [allocine](https://huggingface.co/datasets/allocine) dataset.
## Intended uses & limitations
This model was fine-tuned for a single epoch for testing purposes; it is not intended as a production-grade sentiment classifier.
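For a quick check, the checkpoint can be loaded through the `pipeline` API. The repository id below is a placeholder, not the actual hub path of this model:

```python
from transformers import pipeline

# "<user>/camembert-sentiment-allocine" is a placeholder repository id;
# replace it with wherever this checkpoint is actually hosted.
classifier = pipeline(
    "text-classification",
    model="<user>/camembert-sentiment-allocine",
)

# allocine reviews are French movie reviews with positive/negative labels.
print(classifier("Ce film est un chef-d'œuvre, je le recommande vivement !"))
```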
## Training procedure
This model was created by fine-tuning the TensorFlow version of [camembert-base](https://huggingface.co/camembert-base) **after freezing the encoder**:
```python
# Freeze the CamemBERT encoder (exposed as `roberta`) so its weights are not updated
model.roberta.trainable = False
```
Therefore, only the classification head's parameters were updated during training; a minimal sketch of this setup follows.
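The sketch below assumes the standard `TFCamembertForSequenceClassification` head; the compile settings are illustrative assumptions, since the original training script is not part of this card:

```python
import tensorflow as tf
from transformers import TFCamembertForSequenceClassification

# Binary sentiment classification head on top of the pretrained encoder.
model = TFCamembertForSequenceClassification.from_pretrained(
    "camembert-base", num_labels=2
)

# Freeze the encoder; only the classification head remains trainable.
model.roberta.trainable = False

# Loss/metrics here are assumptions for illustration; the optimizer actually
# reported by the Keras callback is listed under "Training hyperparameters".
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```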
### Training hyperparameters
The following hyperparameters were used during training:
```
- optimizer: {
'name': 'Adam',
'learning_rate': {
'class_name': 'PolynomialDecay',
'config': {'initial_learning_rate': 5e-05, 'decay_steps': 15000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}
},
'decay': 0.0,
'beta_1': 0.9,
'beta_2': 0.999,
'epsilon': 1e-07,
'amsgrad': False
}
- training_precision: float32
- epochs: 1
```
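For reference, this configuration corresponds to the following standard Keras objects (a reconstruction from the config dump above, not the original training code):

```python
import tensorflow as tf

# Linear decay from 5e-5 to 0 over 15,000 steps (power=1.0 means linear).
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-5,
    decay_steps=15_000,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```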
### Training results
The model achieves the following results on the allocine test set:
| Accuracy |
|---|
| 0.918 |
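As an indication only, a result like this could be reproduced along the following lines; the repository id is a placeholder and the preprocessing choices (such as `max_length`) are assumptions:

```python
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer, TFCamembertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
# Placeholder repo id; replace with the actual checkpoint path.
model = TFCamembertForSequenceClassification.from_pretrained(
    "<user>/camembert-sentiment-allocine"
)

# The allocine dataset exposes "review" (text) and "label" (0/1) columns.
test = load_dataset("allocine", split="test")
enc = tokenizer(
    test["review"],
    truncation=True,
    padding=True,
    max_length=128,
    return_tensors="tf",
)

logits = model.predict(dict(enc), batch_size=32)["logits"]
accuracy = np.mean(np.argmax(logits, axis=-1) == np.array(test["label"]))
print(f"Test accuracy: {accuracy:.3f}")
```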
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.2
- Tokenizers 0.12.1