Update README.md
## Model Details
### Model Description
This model is a fine-tuned version of [XLM-R Large](https://huggingface.co/FacebookAI/xlm-roberta-large). It is trained to classify factual claims, a common task in automated fact-checking. Training followed a weakly supervised approach: the model was first fine-tuned on a Telegram dataset weakly annotated with GPT-4o, and then on the manually annotated dataset from Risch et al. 2021. Both datasets are German; the underlying model is multilingual, but its performance in other languages has not been tested. For evaluation, a set of Telegram posts was annotated by four trained coders and the majority label was taken as ground truth. The model achieves an accuracy of 0.9 on this dataset. On the test split of Risch et al. 2021, which is drawn from Facebook comments, it achieves an accuracy of 0.79.
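
For illustration, below is a condensed sketch of the two-stage fine-tuning described above, built on the Hugging Face `Trainer`. The file names, column names, label count, and hyperparameters are assumptions for illustration only; they are not the exact training setup.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Base model; num_labels=2 assumes a binary claim / no-claim setup.
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-large", num_labels=2
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

def finetune(data_files, output_dir):
    # Hypothetical CSV files with "text" and "label" columns.
    ds = load_dataset("csv", data_files=data_files)["train"].map(tokenize, batched=True)
    args = TrainingArguments(
        output_dir=output_dir,
        num_train_epochs=3,
        per_device_train_batch_size=8,
        learning_rate=1e-5,
    )
    Trainer(model=model, args=args, train_dataset=ds, tokenizer=tokenizer).train()

# Stage 1: Telegram posts weakly labelled with GPT-4o.
finetune("telegram_gpt4o_weak_labels.csv", "stage1")
# Stage 2: continue from the stage-1 weights on the manually annotated data (Risch et al. 2021).
finetune("risch2021_manual_labels.csv", "stage2")
```
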
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
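
A minimal inference sketch using the `transformers` text-classification pipeline. The repository id below is a placeholder for this model's id on the Hub, and the exact label names depend on the model configuration.

```python
from transformers import pipeline

# Placeholder id -- replace with this repository's id on the Hugging Face Hub.
MODEL_ID = "<this-model-repository-id>"

# The model was fine-tuned for text classification (claim detection).
classifier = pipeline("text-classification", model=MODEL_ID)

posts = [
    "Die Inflation lag im letzten Monat bei 3,2 Prozent.",  # contains a checkable factual claim
    "Was für ein schöner Tag!",                             # no factual claim
]

for post, prediction in zip(posts, classifier(posts)):
    # Label names come from the model config (e.g. LABEL_0 / LABEL_1 if none were set).
    print(f"{prediction['label']} ({prediction['score']:.2f}) -> {post}")
```
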