---
language:
- en
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base-finetuned-qnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE QNLI
      type: glue
      args: qnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9123
---

# T5-base-finetuned-qnli

This model is T5 fine-tuned on the GLUE QNLI dataset. It achieves the following results on the validation set:
- Accuracy: 0.9123

## Model Details
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks, where each task is converted into a text-to-text format.

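Because classification is cast as text generation here, the fine-tuned checkpoint can be queried with plain text. Below is a minimal inference sketch; the repo id `PavanNeerudu/t5-base-finetuned-qnli` and the example question/sentence pair are assumptions, not taken from the card.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "PavanNeerudu/t5-base-finetuned-qnli"  # assumed repo id
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Build the input in the same format used during fine-tuning (see Tokenization below).
question = "What is the capital of France?"
sentence = "Paris is the capital and most populous city of France."
text = "qnli question: " + question + "sentence: " + sentence

inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=5)
# The model emits a label string, e.g. "equivalent" or "not_equivalent".
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
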
## Training procedure

### Tokenization
Since T5 is a text-to-text model, the labels of the dataset are converted as follows:
For each example, an input sentence is formed as **"qnli question: " + qnli_question + "sentence: " + qnli_sentence** and fed to the tokenizer to get the **input_ids** and **attention_mask**.
The label is chosen as **"equivalent"** if the label is 1 and **"not_equivalent"** otherwise, and is tokenized to get its **input_ids** and **attention_mask**.
During training, the label input_ids that hold the **pad** token are replaced with -100 so that no loss is computed for them. These input_ids are then given as the labels, and the attention_mask of the label tokenization is given as the decoder attention mask.

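A sketch of this preprocessing is below, loading GLUE QNLI via the `datasets` library; the padding lengths (128 for inputs, 8 for labels) are illustrative assumptions, not values from the card.

```python
from datasets import load_dataset
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
dataset = load_dataset("glue", "qnli")

def preprocess(example):
    # Form the text-to-text input exactly as described above.
    text = "qnli question: " + example["question"] + "sentence: " + example["sentence"]
    model_inputs = tokenizer(text, padding="max_length", max_length=128, truncation=True)

    # Map the integer label to its string form and tokenize it.
    label_text = "equivalent" if example["label"] == 1 else "not_equivalent"
    label_enc = tokenizer(label_text, padding="max_length", max_length=8, truncation=True)

    # Replace pad tokens in the label ids with -100 so the loss ignores them.
    model_inputs["labels"] = [
        tok if tok != tokenizer.pad_token_id else -100
        for tok in label_enc["input_ids"]
    ]
    # The label attention_mask becomes the decoder attention mask.
    model_inputs["decoder_attention_mask"] = label_enc["attention_mask"]
    return model_inputs

tokenized = dataset.map(preprocess)
```
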
### Training hyperparameters

The following hyperparameters were used during training (a training-arguments sketch follows the list):
- learning_rate: 3e-4
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: epsilon=1e-08
- num_epochs: 3.0

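As an illustration only, the values above map onto `Seq2SeqTrainingArguments` roughly as follows; the optimizer family is not named in the card (only its epsilon), and `output_dir` and `evaluation_strategy` are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-qnli",  # assumed output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_epsilon=1e-08,                   # the card lists only epsilon=1e-08
    num_train_epochs=3.0,
    evaluation_strategy="epoch",          # assumed: the table reports per-epoch results
)
```
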
### Training results

| Epoch | Training Loss | Validation Accuracy |
|:-----:|:-------------:|:-------------------:|
| 1     | 0.0571        | 0.8973              |
| 2     | 0.0329        | 0.9068              |
| 3     | 0.0133        | 0.9123              |
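
Since the model generates label strings rather than class indices, validation accuracy amounts to comparing the decoded generations with the string labels. A minimal evaluation sketch under the same assumptions as above (repo id, input format):

```python
import torch
from datasets import load_dataset
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "PavanNeerudu/t5-base-finetuned-qnli"  # assumed repo id
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
model.eval()

validation = load_dataset("glue", "qnli")["validation"]

correct = 0
for ex in validation:
    text = "qnli question: " + ex["question"] + "sentence: " + ex["sentence"]
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=5)
    pred = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    gold = "equivalent" if ex["label"] == 1 else "not_equivalent"
    correct += int(pred == gold)

print("validation accuracy:", correct / len(validation))
```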