---
license: apache-2.0
base_model: HooshvareLab/bert-fa-zwnj-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ParsBERT-nli-FarsTail-FarSick
  results: []
---


# ParsBERT-nli-FarsTail-FarSick

This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on the [FarsTail](https://github.com/dml-qom/FarsTail/tree/master) and [FarSick](https://github.com/ZahraGhasemi-AI/FarSick/tree/main) natural language inference (NLI) datasets.
It achieves the following results on the evaluation set (macro vs. micro averaging is sketched after the list):
- Loss: 0.8730
- Accuracy: 0.8055
- Precision (macro): 0.7900
- Precision (micro): 0.8055
- Recall (macro): 0.7926
- Recall (micro): 0.7926
- F1 (macro): 0.7909
- F1 (micro): 0.8055
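
The macro scores average each metric over the three NLI classes, giving every class equal weight, while the micro scores pool all individual predictions before computing the metric. A minimal scikit-learn sketch, using placeholder labels rather than the model's actual evaluation outputs:

```python
from sklearn.metrics import precision_recall_fscore_support

# Placeholder labels for illustration only; not the model's real outputs.
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 1, 1, 0, 2]

# Macro: compute the metric per class, then average across classes.
p_mac, r_mac, f1_mac, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
# Micro: pool every individual prediction before computing the metric.
p_mic, r_mic, f1_mic, _ = precision_recall_fscore_support(y_true, y_pred, average="micro")
print(f"macro: P={p_mac:.3f} R={r_mac:.3f} F1={f1_mac:.3f}")
print(f"micro: P={p_mic:.3f} R={r_mic:.3f} F1={f1_mic:.3f}")
```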

## How to use

```python
import torch
import transformers

model_name_or_path = "parsi-ai-nlpclass/ParsBERT-nli-FarsTail-FarSick"
tokenizer_pb = transformers.AutoTokenizer.from_pretrained(model_name_or_path)
model_pb = transformers.AutoModelForSequenceClassification.from_pretrained(
    model_name_or_path, num_labels=3
)
model_pb.eval()

premise = "سلام خوبی؟"    # "Hi, how are you?"
hypothesis = "آره خوبم"   # "Yeah, I'm fine."

# Tokenize the premise/hypothesis pair and classify it.
inputs = tokenizer_pb(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model_pb(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```
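
To turn the logits into human-readable output, continue from the snippet above: apply a softmax and look up each class name in the checkpoint's `id2label` mapping. Note that `id2label` falls back to generic `LABEL_0`/`LABEL_1`/`LABEL_2` names unless the checkpoint defines the NLI label names, so verify the mapping before relying on it.

```python
# Continuing from the snippet above: convert logits to probabilities and
# map class indices to names via the checkpoint's id2label (which may be
# the generic LABEL_0/1/2 if no NLI label names were set).
probs = torch.softmax(logits, dim=-1).squeeze()
for class_id, p in enumerate(probs.tolist()):
    print(model_pb.config.id2label[class_id], f"{p:.4f}")
```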

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
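
For reproduction, a minimal `TrainingArguments` sketch matching the values above; `output_dir` is a placeholder, and the Adam betas/epsilon listed are the library defaults, so they are not set explicitly:

```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters listed above. output_dir is a
# placeholder, not part of the original card; betas=(0.9, 0.999) and
# epsilon=1e-08 are the transformers defaults, so no explicit args needed.
training_args = TrainingArguments(
    output_dir="ParsBERT-nli-FarsTail-FarSick",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```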

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision (macro) | Precision (micro) | Recall (macro) | Recall (micro) | F1 (macro) | F1 (micro) |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:-----------------:|:--------------:|:--------------:|:----------:|:----------:|
| 0.6248        | 1.0   | 1137 | 0.5391          | 0.7768   | 0.7677            | 0.7768            | 0.7728         | 0.7728         | 0.7647     | 0.7768     |
| 0.4449        | 2.0   | 2274 | 0.5017          | 0.8055   | 0.7909            | 0.8055            | 0.7963         | 0.7963         | 0.7932     | 0.8055     |
| 0.304         | 3.0   | 3411 | 0.5851          | 0.8125   | 0.8006            | 0.8125            | 0.7979         | 0.7979         | 0.7985     | 0.8125     |
| 0.1844        | 4.0   | 4548 | 0.7549          | 0.8140   | 0.8010            | 0.8140            | 0.7982         | 0.7982         | 0.7993     | 0.8140     |
| 0.1224        | 5.0   | 5685 | 0.8730          | 0.8055   | 0.7900            | 0.8055            | 0.7926         | 0.7926         | 0.7909     | 0.8055     |


### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2