osmanh committed · Commit e4de4b9 · verified · Parent(s): 4ee47d4

Model save

Files changed (1): README.md (+9 −18)

README.md CHANGED
@@ -21,13 +21,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 1.1844
- - Model Preparation Time: 0.0034
- - Accuracy: 0.7284
- - Precision: 0.7033
- - Recall: 0.7149
- - F1: 0.7060
- - Ratio: 0.3735
+ - Loss: 0.8424
+ - Model Preparation Time: 0.0068
+ - Accuracy: 0.6173
+ - Precision: 0.6539
+ - Recall: 0.5487
+ - F1: 0.5668
+ - Ratio: 0.4136
 
 ## Model description
 
@@ -54,23 +54,14 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 16
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
- - num_epochs: 10
+ - num_epochs: 1
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy | Precision | Recall | F1 | Ratio |
 |:-------------:|:------:|:----:|:---------------:|:----------------------:|:--------:|:---------:|:------:|:------:|:------:|
- | No log | 0.9895 | 47 | 0.8346 | 0.0034 | 0.6049 | 0.6301 | 0.5396 | 0.5578 | 0.6019 |
- | No log | 2.0 | 95 | 0.8324 | 0.0034 | 0.6636 | 0.6559 | 0.6531 | 0.6324 | 0.2593 |
- | No log | 2.9895 | 142 | 0.7849 | 0.0034 | 0.6759 | 0.6501 | 0.6674 | 0.6568 | 0.4475 |
- | No log | 4.0 | 190 | 0.8593 | 0.0034 | 0.6790 | 0.6533 | 0.6942 | 0.6658 | 0.3889 |
- | No log | 4.9895 | 237 | 0.9753 | 0.0034 | 0.6975 | 0.6657 | 0.6899 | 0.6749 | 0.4074 |
- | No log | 6.0 | 285 | 1.0384 | 0.0034 | 0.6852 | 0.6687 | 0.6838 | 0.6745 | 0.4599 |
- | No log | 6.9895 | 332 | 1.0841 | 0.0034 | 0.7037 | 0.6802 | 0.6964 | 0.6807 | 0.3364 |
- | No log | 8.0 | 380 | 1.1334 | 0.0034 | 0.7160 | 0.6943 | 0.7099 | 0.6994 | 0.3796 |
- | No log | 8.9895 | 427 | 1.1389 | 0.0034 | 0.7284 | 0.7056 | 0.7149 | 0.7072 | 0.3735 |
- | No log | 9.8947 | 470 | 1.1844 | 0.0034 | 0.7284 | 0.7033 | 0.7149 | 0.7060 | 0.3735 |
+ | No log | 0.9895 | 47 | 0.8424 | 0.0068 | 0.6173 | 0.6539 | 0.5487 | 0.5668 | 0.4136 |
 
 
 ### Framework versions
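The Accuracy, Precision, Recall, and F1 columns above are standard classification metrics; the card does not state how they are aggregated, so the sketch below assumes macro-averaging over the NLI label set. This is a minimal pure-Python illustration of how such numbers are computed, not the card's actual evaluation code; the `macro_metrics` helper name is hypothetical.

```python
def macro_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision/recall/F1.

    Assumption: metrics in the card are macro-averaged per label,
    as in sklearn's `average="macro"` convention.
    """
    labels = sorted(set(y_true) | set(y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for lab in labels:
        # Per-label confusion counts.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

With macro-averaging, each label contributes equally regardless of class frequency, which is why Precision, Recall, and F1 can diverge noticeably from Accuracy on an imbalanced evaluation set, as in the table above.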