
Model description:

- Model: bert-base-multilingual-cased
- Dataset: TASTEset
- Unshuffled ratio: ['0']
- Shuffled ratio: ['1']
- Best exact match epoch: 10
- Best exact match: 80.22
- Best epoch: 10
- Drop duplicates: ['1']
- Max epochs: 10
- Optimizer lr: 3e-05
- Optimizer eps: 1e-08
- Batch size: 32
- Dataset path: pgajo/EW-TT-PE_U0_S1_Tingredient_P0.25_DROP1_mbert
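
The settings above correspond to a standard Hugging Face Transformers fine-tuning loop. The sketch below is illustrative only: it assumes an extractive question-answering head and a dataset that is already tokenized with `input_ids`, `attention_mask`, `start_positions`, and `end_positions` columns, which this card does not state. Only the base model, dataset path, learning rate, eps, batch size, and epoch count are taken from the card.

```python
# Illustrative fine-tuning sketch (assumptions: extractive-QA head, pre-tokenized
# dataset columns). Base model, dataset path, lr, eps, batch size and epochs
# are the values listed above.
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          default_data_collator)

# Tokenizer for the base checkpoint (needed only if the dataset is not pre-tokenized).
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-multilingual-cased")

dataset = load_dataset("pgajo/EW-TT-PE_U0_S1_Tingredient_P0.25_DROP1_mbert")

# Optimizer settings from the card.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-05, eps=1e-08)
train_loader = DataLoader(dataset["train"], batch_size=32, shuffle=True,
                          collate_fn=default_data_collator)

model.train()
for epoch in range(10):  # Max epochs: 10
    for batch in train_loader:
        optimizer.zero_grad()
        loss = model(**batch).loss  # start/end span cross-entropy loss
        loss.backward()
        optimizer.step()
```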

Results

| epoch | train_loss | train_f1 | train_exact | dev_loss | dev_f1 | dev_exact | test_loss | test_f1 | test_exact |
|-------|------------|----------|-------------|----------|--------|-----------|-----------|---------|------------|
| 1     | 3.18       | 11.43    | 2.48        | 2.18     | 29.53  | 20.6      | 0         | 0       | 0          |
| 2     | 1.31       | 55.71    | 40.91       | 1.12     | 72.8   | 58.24     | 0         | 0       | 0          |
| 3     | 0.57       | 81.62    | 72.45       | 1.05     | 79.04  | 71.7      | 0         | 0       | 0          |
| 4     | 0.29       | 90.38    | 84.37       | 1.01     | 81.04  | 74.73     | 0         | 0       | 0          |
| 5     | 0.19       | 93.63    | 88.98       | 0.92     | 79.94  | 75        | 0         | 0       | 0          |
| 6     | 0.11       | 96.49    | 94.21       | 0.97     | 81.43  | 75.82     | 0         | 0       | 0          |
| 7     | 0.08       | 97.23    | 95.59       | 0.94     | 83.21  | 78.57     | 0         | 0       | 0          |
| 8     | 0.04       | 98.64    | 97.87       | 1.13     | 83.51  | 78.3      | 0         | 0       | 0          |
| 9     | 0.04       | 98.8     | 97.87       | 1.11     | 83.88  | 78.57     | 0         | 0       | 0          |
| 10    | 0.05       | 98.23    | 97.52       | 0.89     | 84.79  | 80.22     | 0         | 0       | 0          |
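
Once the fine-tuned checkpoint is published, it can be queried through the standard question-answering pipeline. The snippet below is a sketch only: the repository id is a placeholder for this model's actual id on the Hub, and the question/context pair is purely illustrative.

```python
# Inference sketch. "pgajo/<this-checkpoint>" is a placeholder, not the real
# repository id; replace it with this model's id on the Hub.
from transformers import pipeline

qa = pipeline("question-answering", model="pgajo/<this-checkpoint>")

result = qa(
    question="Which span corresponds to the queried ingredient?",
    context="Example recipe text containing the target ingredient.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```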
Model size: 177M params (F32, Safetensors)