Update README.md
README.md
CHANGED
@@ -98,7 +98,7 @@ The model was pre-trained continuously on a single A10G GPU in an AWS instance f

2. We have pre-trained our model with approx. 16 GB of data and tested the classification results on the <a href='https://www.kaggle.com/datasets/ashokpant/nepali-news-dataset-large/data'>Nepali News Dataset (Large)</a> against a couple of Nepali transformer-based models available on Hugging Face.
<br> Our models seem to do better than the others, with an accuracy of 0.58 on validation, but
- <br> it's seen that we still do not have enough data for generalization, as Transformer models only perform well
+ <br> it's seen that we still do not have enough data for generalization, as Transformer models only perform well with large amounts of pre-training data compared with classical sequential models.

#### Authors:
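For context on the evaluation mentioned in the changed lines above, here is a minimal sketch of how such a classification benchmark might be run with the Hugging Face `transformers` Trainer. The checkpoint name, CSV paths, column names, and label count are placeholders for illustration, not the repository's actual configuration.

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholders: swap in an actual Nepali checkpoint and the real number of news categories.
MODEL_NAME = "some-org/nepali-transformer"
NUM_LABELS = 20

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)

# Assumes the Kaggle news dataset has been exported to CSVs with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "valid.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Validation accuracy, the metric quoted in the README (0.58).
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=16, num_train_epochs=3),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # the returned "eval_accuracy" is the validation accuracy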