Update README.md
README.md CHANGED
@@ -1,5 +1,14 @@
 ---
 license: mit
+language:
+- en
+metrics:
+- f1
+- accuracy
+- precision
+- recall
+library_name: transformers
+pipeline_tag: text-classification
 ---
 
 This is a fine-tuned BERT-based language model to classify NLP-related research papers as "survey" or "non-survey" papers. The model is fine-tuned on a dataset of 787 survey and 11,805 non-survey papers from the ACL Anthology and the arXiv cs.CL category. Prior to fine-tuning, the model is initialized with weights from [malteos/scincl](https://huggingface.co/malteos/scincl).
@@ -13,4 +22,4 @@ The model was evaluated on 20% test data.
 * **F1:** 84.35
 * **Precision:** 82.38
 * **Recall:** 86.53
-* **Accuracy:** 98.04
+* **Accuracy:** 98.04
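The gap between the reported **Accuracy** (98.04) and **F1** (84.35) is what class imbalance produces: with roughly 787 survey vs. 11,805 non-survey papers, a classifier can score very high accuracy while minority-class precision/recall stay lower. A minimal sketch with hypothetical confusion-matrix counts (chosen only to roughly match the reported numbers; the actual test-set counts are not stated in the card):

```python
# Hypothetical counts for a ~20% test split (~2,518 papers, ~6% "survey"):
# these are illustrative, NOT the model's actual confusion matrix.
tp, fp, fn, tn = 136, 29, 21, 2332

precision = tp / (tp + fp)                      # ~0.824
recall = tp / (tp + fn)                         # ~0.866
f1 = 2 * precision * recall / (precision + recall)  # ~0.845
accuracy = (tp + tn) / (tp + fp + fn + tn)      # ~0.980

print(f"precision={precision:.2%} recall={recall:.2%} "
      f"f1={f1:.2%} accuracy={accuracy:.2%}")
```

Because the negative (non-survey) class dominates, the `tn` term keeps accuracy near 98% even though the positive-class F1 sits in the mid-80s, which is why the card reports all four metrics rather than accuracy alone.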