Sajib-006 committed
Commit a2848b7
1 Parent(s): e5916ca

Final update

Files changed (1): README.md (+5 -1)
README.md CHANGED
@@ -21,7 +21,11 @@ https://huggingface.co/Sajib-006/fake_news_detection_xlmRoberta
 * Used the pretrained XLM-RoBERTa base model.
 * Added a classifier layer after the BERT model.
 * For tokenization, I used a maximum text length of 512 tokens (the longest sequence BERT can handle).
-
+
+## Result:
+* Using the BERT base uncased English model, accuracy was near 85% (on all samples).
+* Using the XLM-RoBERTa base model, accuracy was almost 100% (on only 2k samples).
+
 ## Limitations:
 * Pretrained XLM-RoBERTa is a heavy model; training it on the full dataset (44k+ samples) was not possible on the Google Colab free tier, so I had to take a small sample of 2,000 for my experiment.
 * As the results show, accuracy and F1-score are almost 100% on the 2,000-sample set, so I have not tried to examine misclassified data.