---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NeRUBioS_xlm_RoBERTa_base_Training_Testing
  results: []
---

# NeRUBioS_xlm_RoBERTa_base_Training_Testing

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on what the model name and label set (Neg/Nsco, Unc/Usco, Negref) suggest is the NeRUBioS dataset for negation and uncertainty detection in Spanish biomedical text; the training metadata itself does not record the dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3585
- Negref Precision: 0.5638
- Negref Recall: 0.6035
- Negref F1: 0.5830
- Neg Precision: 0.9508
- Neg Recall: 0.9642
- Neg F1: 0.9575
- Nsco Precision: 0.8692
- Nsco Recall: 0.9047
- Nsco F1: 0.8866
- Unc Precision: 0.8005
- Unc Recall: 0.8846
- Unc F1: 0.8404
- Usco Precision: 0.6696
- Usco Recall: 0.7815
- Usco F1: 0.7212
- Precision: 0.8184
- Recall: 0.8628
- F1: 0.8400
- Accuracy: 0.9482
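
The per-label scores above follow the usual entity-level convention for token classification. As a minimal sketch of how such scores are typically computed (assuming the seqeval library and illustrative BIO label names, neither of which is stated in the original card):

```python
# Sketch: entity-level precision/recall/F1 for BIO-tagged sequences.
# seqeval and the label names below are assumptions for illustration.
from seqeval.metrics import classification_report

y_true = [["O", "B-NEG", "B-NSCO", "I-NSCO", "O"]]
y_pred = [["O", "B-NEG", "B-NSCO", "O", "O"]]

# seqeval reports per-entity precision/recall/F1 (e.g., NEG, NSCO),
# matching the per-label metrics listed in this card.
print(classification_report(y_true, y_pred))
```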

## Model description

More information needed

## Intended uses & limitations

More information needed
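
A minimal inference sketch is shown below; the repo id is a placeholder, since the original card does not state where the model is hosted.

```python
# Sketch: token-classification inference with the transformers pipeline.
from transformers import pipeline

# Hypothetical repo id: prefix with the owning user/org on the Hub.
MODEL_ID = "NeRUBioS_xlm_RoBERTa_base_Training_Testing"

ner = pipeline(
    "token-classification",
    model=MODEL_ID,
    aggregation_strategy="simple",  # merge subword pieces into labeled spans
)

# Spanish clinical example with negation ("no presenta ... ni ...").
print(ner("El paciente no presenta fiebre ni tos."))
```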

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
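
As a sketch, the hyperparameters above map onto the `transformers` Trainer API roughly as follows; the argument names are standard, but the surrounding model and dataset wiring is assumed and not part of the original card.

```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="NeRUBioS_xlm_RoBERTa_base_Training_Testing",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=12,
    lr_scheduler_type="linear",  # linear decay, as listed above
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default,
    # matching the optimizer entry above.
)
```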

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Negref Precision | Negref Recall | Negref F1 | Neg Precision | Neg Recall | Neg F1 | Nsco Precision | Nsco Recall | Nsco F1 | Unc Precision | Unc Recall | Unc F1 | Usco Precision | Usco Recall | Usco F1 | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:---------:|:------:|:------:|:--------:|
| 0.2259        | 1.0   | 1729  | 0.2246          | 0.4076           | 0.4890        | 0.4446    | 0.9112        | 0.9515     | 0.9310 | 0.7928         | 0.8654      | 0.8275  | 0.7015        | 0.8256     | 0.7585 | 0.4629         | 0.6735      | 0.5487  | 0.7158    | 0.8122 | 0.7610 | 0.9287   |
| 0.16          | 2.0   | 3458  | 0.2028          | 0.5217           | 0.5301        | 0.5259    | 0.9283        | 0.9642     | 0.9459 | 0.8311         | 0.8896      | 0.8593  | 0.7812        | 0.8513     | 0.8147 | 0.5734         | 0.7532      | 0.6511  | 0.7817    | 0.8405 | 0.8100 | 0.9397   |
| 0.1235        | 3.0   | 5187  | 0.2148          | 0.5176           | 0.4963        | 0.5067    | 0.9520        | 0.9607     | 0.9563 | 0.8641         | 0.8850      | 0.8744  | 0.7684        | 0.8846     | 0.8224 | 0.6113         | 0.7481      | 0.6728  | 0.8038    | 0.8350 | 0.8191 | 0.9439   |
| 0.0949        | 4.0   | 6916  | 0.2261          | 0.5054           | 0.6211        | 0.5573    | 0.9327        | 0.9642     | 0.9482 | 0.8450         | 0.8828      | 0.8635  | 0.7976        | 0.8487     | 0.8224 | 0.6034         | 0.7352      | 0.6628  | 0.7818    | 0.8512 | 0.8150 | 0.9450   |
| 0.0633        | 5.0   | 8645  | 0.2354          | 0.5609           | 0.5947        | 0.5773    | 0.9417        | 0.9649     | 0.9532 | 0.8669         | 0.9062      | 0.8861  | 0.8062        | 0.8641     | 0.8342 | 0.6334         | 0.7506      | 0.6871  | 0.8118    | 0.8573 | 0.8340 | 0.9461   |
| 0.0495        | 6.0   | 10374 | 0.2829          | 0.5585           | 0.5962        | 0.5767    | 0.9445        | 0.9684     | 0.9563 | 0.8671         | 0.9077      | 0.8869  | 0.8116        | 0.8615     | 0.8358 | 0.6526         | 0.7532      | 0.6993  | 0.8151    | 0.8592 | 0.8366 | 0.9442   |
| 0.0365        | 7.0   | 12103 | 0.2699          | 0.5446           | 0.5830        | 0.5631    | 0.9552        | 0.9572     | 0.9562 | 0.8804         | 0.9024      | 0.8913  | 0.8080        | 0.8846     | 0.8446 | 0.6521         | 0.7661      | 0.7045  | 0.8182    | 0.8550 | 0.8362 | 0.9473   |
| 0.0265        | 8.0   | 13832 | 0.3082          | 0.5630           | 0.5580        | 0.5605    | 0.9466        | 0.9593     | 0.9529 | 0.8702         | 0.9024      | 0.8860  | 0.8038        | 0.8718     | 0.8364 | 0.6571         | 0.7635      | 0.7063  | 0.8194    | 0.8502 | 0.8345 | 0.9460   |
| 0.0216        | 9.0   | 15561 | 0.3286          | 0.5485           | 0.5977        | 0.5720    | 0.9388        | 0.9691     | 0.9537 | 0.8715         | 0.9077      | 0.8892  | 0.8085        | 0.8769     | 0.8413 | 0.6453         | 0.7763      | 0.7048  | 0.8105    | 0.8633 | 0.8361 | 0.9455   |
| 0.0133        | 10.0  | 17290 | 0.3503          | 0.5732           | 0.6094        | 0.5907    | 0.9481        | 0.9628     | 0.9554 | 0.8698         | 0.8994      | 0.8843  | 0.8137        | 0.8846     | 0.8477 | 0.6816         | 0.7815      | 0.7281  | 0.8223    | 0.8616 | 0.8415 | 0.9482   |
| 0.0088        | 11.0  | 19019 | 0.3476          | 0.5584           | 0.6182        | 0.5868    | 0.9450        | 0.9656     | 0.9552 | 0.8614         | 0.9070      | 0.8836  | 0.8080        | 0.8846     | 0.8446 | 0.6659         | 0.7789      | 0.7180  | 0.8126    | 0.8661 | 0.8385 | 0.9483   |
| 0.0093        | 12.0  | 20748 | 0.3585          | 0.5638           | 0.6035        | 0.5830    | 0.9508        | 0.9642     | 0.9575 | 0.8692         | 0.9047      | 0.8866  | 0.8005        | 0.8846     | 0.8404 | 0.6696         | 0.7815      | 0.7212  | 0.8184    | 0.8628 | 0.8400 | 0.9482   |


### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2