---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
model-index:
- name: lilt-en-funsd
  results: []
---

# lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7110
- Overall Precision: 0.8657
- Overall Recall: 0.8902
- Overall F1: 0.8778
- Overall Accuracy: 0.7988

Per-label results:

| Label    | Precision | Recall | F1     | Support |
|:---------|----------:|-------:|-------:|--------:|
| Answer   | 0.8461    | 0.9082 | 0.8760 | 817     |
| Header   | 0.6471    | 0.5546 | 0.5973 | 119     |
| Question | 0.9019    | 0.9136 | 0.9077 | 1077    |
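
The per-label metrics follow the output format of the [seqeval](https://github.com/chakki-works/seqeval) metric. Below is a minimal sketch of a compatible `compute_metrics` function for the `Trainer`; the label list is hypothetical (the actual label order is not recorded in this card), but the overall/per-entity structure matches the tables above.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

# Hypothetical FUNSD-style label order; the actual list is not recorded in this card.
label_list = ["O", "B-HEADER", "I-HEADER", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)
    # Special and padded tokens carry the label -100 and are excluded from scoring.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    # seqeval also returns the per-entity precision/recall/F1/number dicts shown above.
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```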

## Model description

LiLT (Language-Independent Layout Transformer) decouples document understanding into two parallel Transformer streams: a text stream initialized from a pre-trained language model (here, English RoBERTa) and a layout stream over token bounding boxes, so the layout knowledge can be reused across languages. This checkpoint adds a token-classification head on top of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) and fine-tunes it to label each word of a scanned form as part of a question, answer, or header.

## Intended uses & limitations

The model is intended for form understanding on scanned documents: given OCR'd words and their bounding boxes (normalized to a 0-1000 coordinate scale, as in the LayoutLM family), it tags each word as belonging to a question, an answer, or a header. FUNSD is small (199 annotated forms) and English-only, so performance on other document types, layouts, or languages is not guaranteed; header detection is noticeably weaker than the other classes (F1 roughly 0.60 versus 0.88-0.91).
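
A minimal inference sketch, assuming the fine-tuned weights are available under a local `lilt-en-funsd` directory (replace with the hub repo id if hosted) and using the `nielsr/funsd-layoutlmv3` dataset for ready-made words and pre-normalized boxes; the base checkpoint ships a LayoutLMv3-style tokenizer that accepts word-level `boxes`:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer

# The base checkpoint provides the tokenizer; the fine-tuned weights carry the
# classification head and the id2label mapping.
tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = AutoModelForTokenClassification.from_pretrained("lilt-en-funsd")  # hypothetical local path

# One FUNSD form: words plus bounding boxes already normalized to 0-1000.
example = load_dataset("nielsr/funsd-layoutlmv3", split="test")[0]
encoding = tokenizer(
    example["tokens"], boxes=example["bboxes"], truncation=True, return_tensors="pt"
)

with torch.no_grad():
    logits = model(**encoding).logits

# Map the highest-scoring class id of each subword token to its label name.
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```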

## Training and evaluation data

The model was fine-tuned on FUNSD, a form-understanding benchmark of 199 noisy scanned forms (149 for training, 50 for evaluation). The evaluation split contains 817 answer, 119 header, and 1077 question entities, matching the support counts reported above.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
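
A sketch of `TrainingArguments` matching the settings above; the output directory and the 200-step evaluation interval are inferred from this card rather than taken from the original training script, and the Adam betas/epsilon are the library defaults, so they need not be set explicitly:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lilt-en-funsd",       # inferred from the model name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2500,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="steps",      # the results table evaluates every 200 steps
    eval_steps=200,
)
```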

### Training results

Per-label cells report precision / recall / F1; entity support is constant across evaluations (Answer: 817, Header: 119, Question: 1077).

| Training Loss | Epoch    | Step | Validation Loss | Answer (P / R / F1)      | Header (P / R / F1)      | Question (P / R / F1)    | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:--------:|:----:|:---------------:|:------------------------:|:------------------------:|:------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4082        | 10.5263  | 200  | 1.0891          | 0.8286 / 0.8641 / 0.8460 | 0.4551 / 0.6387 / 0.5315 | 0.8792 / 0.8719 / 0.8755 | 0.8246            | 0.8549         | 0.8395     | 0.7758           |
| 0.0489        | 21.0526  | 400  | 1.1864          | 0.8471 / 0.8813 / 0.8638 | 0.6078 / 0.5210 / 0.5611 | 0.8630 / 0.9127 / 0.8872 | 0.8441            | 0.8768         | 0.8601     | 0.8014           |
| 0.0135        | 31.5789  | 600  | 1.4159          | 0.8747 / 0.8715 / 0.8731 | 0.5938 / 0.4790 / 0.5302 | 0.8702 / 0.9461 / 0.9066 | 0.8592            | 0.8882         | 0.8735     | 0.8040           |
| 0.007         | 42.1053  | 800  | 1.4263          | 0.8548 / 0.9009 / 0.8772 | 0.6139 / 0.5210 / 0.5636 | 0.8946 / 0.9146 / 0.9045 | 0.8643            | 0.8857         | 0.8749     | 0.8061           |
| 0.0039        | 52.6316  | 1000 | 1.6051          | 0.8765 / 0.9033 / 0.8897 | 0.5324 / 0.6218 / 0.5736 | 0.8847 / 0.8979 / 0.8912 | 0.8578            | 0.8838         | 0.8706     | 0.7967           |
| 0.0017        | 63.1579  | 1200 | 1.5147          | 0.8608 / 0.8935 / 0.8769 | 0.6389 / 0.5798 / 0.6079 | 0.8934 / 0.9183 / 0.9057 | 0.8667            | 0.8882         | 0.8773     | 0.8087           |
| 0.0014        | 73.6842  | 1400 | 1.8128          | 0.8350 / 0.9474 / 0.8876 | 0.6078 / 0.5210 / 0.5611 | 0.9125 / 0.8914 / 0.9018 | 0.8630            | 0.8922         | 0.8774     | 0.7931           |
| 0.001         | 84.2105  | 1600 | 1.7309          | 0.8885 / 0.8776 / 0.8830 | 0.5763 / 0.5714 / 0.5738 | 0.8826 / 0.9211 / 0.9014 | 0.8673            | 0.8828         | 0.8749     | 0.7998           |
| 0.0006        | 94.7368  | 1800 | 1.7644          | 0.8462 / 0.9094 / 0.8767 | 0.6364 / 0.5294 / 0.5780 | 0.9010 / 0.9044 / 0.9027 | 0.8649            | 0.8843         | 0.8745     | 0.7967           |
| 0.0006        | 105.2632 | 2000 | 1.6953          | 0.8674 / 0.8886 / 0.8779 | 0.6667 / 0.5378 / 0.5953 | 0.8741 / 0.9350 / 0.9035 | 0.8619            | 0.8927         | 0.8770     | 0.8027           |
| 0.0003        | 115.7895 | 2200 | 1.7110          | 0.8461 / 0.9082 / 0.8760 | 0.6471 / 0.5546 / 0.5973 | 0.9019 / 0.9136 / 0.9077 | 0.8657            | 0.8902         | 0.8778     | 0.7988           |
| 0.0002        | 126.3158 | 2400 | 1.7082          | 0.8447 / 0.9058 / 0.8742 | 0.6337 / 0.5378 / 0.5818 | 0.9003 / 0.9136 / 0.9069 | 0.8638            | 0.8882         | 0.8758     | 0.7978           |


### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1