mahsaamani committed
Commit 81544da · 1 Parent(s): 0ab091c

Update README.md

Files changed (1): README.md (+17 -1)
README.md CHANGED
@@ -37,7 +37,23 @@ The details about the training data used to pre-train and fine-tune these models
 
 ## Evaluation Metrics
 
-The evaluation metrics for each specific model, including accuracy, F1-score, BLEU score, or other relevant metrics, are provided in the associated research paper.
+The evaluation metrics for each specific model, including accuracy, F1-score, BLEU score, or other relevant metrics, are provided below and also in the associated research paper.
+
+| Task                           | Model                    | Evaluation Metric | Performance |
+|--------------------------------|--------------------------|-------------------|-------------|
+| Language model-based Embedding | FastText                 | MRR               | 0.46        |
+| Language Model                 | BERT                     | Perplexity        | 48.05       |
+| Text Classification            | TF-IDF + SVM             | Accuracy          | 0.79        |
+|                                | TF-IDF + SVM             | F1-score          | 0.78        |
+|                                | FastText + SVM           | Accuracy          | 0.86        |
+|                                | FastText + SVM           | F1-score          | 0.86        |
+|                                | BERT                     | Accuracy          | 0.89        |
+|                                | BERT                     | F1-score          | 0.89        |
+| Token Classification           | BERT POS-tagger          | Accuracy          | 0.86        |
+|                                | BERT POS-tagger          | Macro F1-score    | 0.67        |
+| Machine Translation            | Text Translation azb2fa  | SacreBLEU         | 10.34       |
+|                                | Text Translation fa2azb  | SacreBLEU         | 8.07        |
+
 
 ## Acknowledgments
 
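For readers unfamiliar with the metrics in the added table, the sketch below shows how scores of this kind are commonly computed with standard libraries (sacrebleu and scikit-learn). This is a minimal illustration under assumed inputs, not the repository's actual evaluation code; all data values shown are hypothetical placeholders.

```python
# Minimal sketch of common metric computations; NOT this repository's
# evaluation code. All inputs below are hypothetical placeholders.
import sacrebleu
from sklearn.metrics import accuracy_score, f1_score

# Machine translation: corpus-level SacreBLEU from hypotheses and references.
hypotheses = ["bu bir örnek cümledir"]    # model outputs (placeholder)
references = [["bu bir örnek cümledir"]]  # one list per reference stream
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"SacreBLEU: {bleu.score:.2f}")

# Classification / POS tagging: accuracy and macro F1 over flat label lists.
y_true = ["NOUN", "VERB", "NOUN", "ADJ"]  # gold labels (placeholder)
y_pred = ["NOUN", "VERB", "ADJ", "ADJ"]   # predicted labels (placeholder)
print(f"Accuracy: {accuracy_score(y_true, y_pred):.2f}")
print(f"Macro F1: {f1_score(y_true, y_pred, average='macro'):.2f}")

# Embedding retrieval: Mean Reciprocal Rank = mean of 1/rank of the gold item.
ranks = [1, 2, 4]  # rank of the correct hit per query (placeholder)
mrr = sum(1.0 / r for r in ranks) / len(ranks)
print(f"MRR: {mrr:.2f}")
```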