---
license: mit
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: MoritzLaurer/mDeBERTa-v3-base-mnli-xnli
model-index:
- name: legal-data-mDeBERTa_V3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# legal-data-mDeBERTa_V3

This model is a fine-tuned version of [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6731
- Accuracy: 0.7634
- Precision: 0.7683
- Recall: 0.7644
- F1: 0.7623
- Ratio: 0.3297
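
The card names the base model and shows the adapter was trained with PEFT, so a minimal loading sketch follows. The adapter repo id (`your-namespace/legal-data-mDeBERTa_V3`), the single-text input format, and the label mapping are assumptions, not documented here.

```python
# Minimal sketch: load the PEFT adapter on top of the base mDeBERTa model.
# Assumptions (not documented in this card): the adapter is published under the
# hypothetical repo id below, the model takes a single text (not an NLI pair),
# and the label names are whatever was saved in the adapter's config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
adapter_id = "your-namespace/legal-data-mDeBERTa_V3"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("Example legal clause to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))
```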

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an approximate `TrainingArguments` sketch follows the list):
- learning_rate: 0.005
- train_batch_size: 20
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 15
- label_smoothing_factor: 0.1
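
The values above map directly onto standard `transformers.TrainingArguments` fields. The sketch below is an approximate reconstruction, not the original training script; the output directory is hypothetical, and the single-device batch-size accounting is an assumption.

```python
# Approximate reconstruction of the hyperparameters above as TrainingArguments.
# Sketch only: output_dir is hypothetical, and per_device_train_batch_size=20
# with gradient_accumulation_steps=2 matches the reported total batch size of 40
# only when training on a single device.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="legal-data-mDeBERTa_V3",  # hypothetical
    learning_rate=5e-3,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    warmup_steps=4,  # when set > 0, warmup_steps takes precedence over warmup_ratio
    num_train_epochs=15,
    label_smoothing_factor=0.1,
)
```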

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Ratio  |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| 1.4203        | 0.34  | 10   | 1.5822          | 0.6022   | 0.6054    | 0.6046 | 0.5997 | 0.3226 |
| 1.1177        | 0.69  | 20   | 0.8339          | 0.7240   | 0.7270    | 0.7253 | 0.7258 | 0.3262 |
| 0.9484        | 1.03  | 30   | 0.7998          | 0.7168   | 0.7610    | 0.7192 | 0.6951 | 0.3190 |
| 0.9257        | 1.38  | 40   | 0.7183          | 0.7204   | 0.7221    | 0.7220 | 0.7219 | 0.3297 |
| 0.9529        | 1.72  | 50   | 0.7397          | 0.6989   | 0.7022    | 0.7001 | 0.6959 | 0.3297 |
| 0.9111        | 2.07  | 60   | 0.6820          | 0.7204   | 0.7215    | 0.7216 | 0.7188 | 0.3333 |
| 0.9021        | 2.41  | 70   | 0.6832          | 0.7563   | 0.7644    | 0.7570 | 0.7509 | 0.3333 |
| 0.8849        | 2.76  | 80   | 0.7858          | 0.7204   | 0.7365    | 0.7227 | 0.7079 | 0.3297 |
| 0.8767        | 3.1   | 90   | 0.8523          | 0.5520   | 0.6258    | 0.5527 | 0.5677 | 0.1935 |
| 0.9186        | 3.45  | 100  | 0.6877          | 0.7276   | 0.7430    | 0.7283 | 0.7183 | 0.3262 |
| 0.9127        | 3.79  | 110  | 0.6426          | 0.7348   | 0.7398    | 0.7357 | 0.7298 | 0.3333 |
| 0.9126        | 4.14  | 120  | 0.7509          | 0.7348   | 0.7564    | 0.7370 | 0.7215 | 0.3297 |
| 0.8477        | 4.48  | 130  | 0.6818          | 0.7491   | 0.7684    | 0.7497 | 0.7406 | 0.3262 |
| 0.8747        | 4.83  | 140  | 0.7813          | 0.6810   | 0.7704    | 0.6842 | 0.6067 | 0.3262 |
| 0.9112        | 5.17  | 150  | 0.7799          | 0.7204   | 0.8141    | 0.7205 | 0.6686 | 0.3297 |
| 0.8767        | 5.52  | 160  | 0.7959          | 0.6989   | 0.8418    | 0.7021 | 0.6271 | 0.3297 |
| 0.863         | 5.86  | 170  | 0.7007          | 0.7240   | 0.7395    | 0.7247 | 0.7139 | 0.3262 |
| 0.9029        | 6.21  | 180  | 0.6524          | 0.7634   | 0.7717    | 0.7642 | 0.7621 | 0.3262 |
| 0.8427        | 6.55  | 190  | 0.7417          | 0.7133   | 0.7374    | 0.7157 | 0.6957 | 0.3262 |
| 0.8945        | 6.9   | 200  | 0.7312          | 0.7527   | 0.7738    | 0.7532 | 0.7437 | 0.3262 |
| 0.8913        | 7.24  | 210  | 0.6410          | 0.7455   | 0.7523    | 0.7473 | 0.7433 | 0.3297 |
| 0.8848        | 7.59  | 220  | 0.7137          | 0.7563   | 0.7585    | 0.7574 | 0.7567 | 0.3297 |
| 0.8553        | 7.93  | 230  | 0.6940          | 0.7599   | 0.7743    | 0.7605 | 0.7530 | 0.3297 |
| 0.8154        | 8.28  | 240  | 0.6460          | 0.7276   | 0.7453    | 0.7298 | 0.7154 | 0.3297 |
| 0.8842        | 8.62  | 250  | 0.7455          | 0.7563   | 0.7694    | 0.7570 | 0.7498 | 0.3297 |
| 0.8773        | 8.97  | 260  | 0.7369          | 0.7348   | 0.7490    | 0.7367 | 0.7291 | 0.3262 |
| 0.8615        | 9.31  | 270  | 0.6577          | 0.7455   | 0.7539    | 0.7464 | 0.7411 | 0.3297 |
| 0.8664        | 9.66  | 280  | 0.6970          | 0.7563   | 0.7631    | 0.7580 | 0.7545 | 0.3297 |
| 0.8855        | 10.0  | 290  | 0.7167          | 0.7204   | 0.7269    | 0.7224 | 0.7169 | 0.3297 |
| 0.8564        | 10.34 | 300  | 0.6808          | 0.7670   | 0.7846    | 0.7676 | 0.7594 | 0.3297 |
| 0.841         | 10.69 | 310  | 0.6604          | 0.7455   | 0.7491    | 0.7472 | 0.7455 | 0.3297 |
| 0.8415        | 11.03 | 320  | 0.7150          | 0.7563   | 0.7694    | 0.7570 | 0.7498 | 0.3297 |
| 0.848         | 11.38 | 330  | 0.6495          | 0.7670   | 0.7685    | 0.7682 | 0.7680 | 0.3297 |
| 0.8648        | 11.72 | 340  | 0.7094          | 0.7348   | 0.7562    | 0.7369 | 0.7245 | 0.3262 |
| 0.8465        | 12.07 | 350  | 0.7125          | 0.7384   | 0.7758    | 0.7387 | 0.7181 | 0.3262 |
| 0.8875        | 12.41 | 360  | 0.6962          | 0.7563   | 0.7590    | 0.7573 | 0.7564 | 0.3297 |
| 0.8192        | 12.76 | 370  | 0.6496          | 0.7455   | 0.7539    | 0.7464 | 0.7411 | 0.3297 |
| 0.8089        | 13.1  | 380  | 0.6569          | 0.7599   | 0.7621    | 0.7613 | 0.7607 | 0.3297 |
| 0.8191        | 13.45 | 390  | 0.6808          | 0.7348   | 0.7679    | 0.7372 | 0.7150 | 0.3297 |
| 0.8468        | 13.79 | 400  | 0.6843          | 0.7670   | 0.7789    | 0.7677 | 0.7621 | 0.3297 |
| 0.8277        | 14.14 | 410  | 0.6630          | 0.7599   | 0.7660    | 0.7607 | 0.7578 | 0.3297 |
| 0.8159        | 14.48 | 420  | 0.6621          | 0.7599   | 0.7650    | 0.7608 | 0.7584 | 0.3297 |
| 0.8803        | 14.83 | 430  | 0.6731          | 0.7634   | 0.7683    | 0.7644 | 0.7623 | 0.3297 |
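
How the columns above were computed is not documented in this card. A plausible `compute_metrics` sketch using scikit-learn is shown below; the weighted averaging mode is an assumption, and the `Ratio` column is omitted because its definition is not given here.

```python
# Hedged sketch of a compute_metrics function that could produce the Accuracy,
# Precision, Recall and F1 columns above. average="weighted" is an assumption;
# the "Ratio" column is not reproduced because this card does not define it.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```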


### Framework versions

- PEFT 0.9.0
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2