---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: "On Unifying Misinformation Detection. In this paper, we introduce UNIFIEDM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup. The model is trained to handle four tasks: detecting news bias, clickbait, fake news and verifying rumors. By grouping these tasks together, UNIFIEDM2 learns a richer representation of misinformation, which leads to state-of-the-art or comparable performance across all tasks. Furthermore, we demonstrate that UNIFIEDM2's learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets and the model's generalizability to unseen events."
example_title: "Misinformation Detection"
---
# SciBERT NLP4SG
SciBERT NLP4SG is a SciBERT model fine-tuned to detect NLP4SG papers based on their title and abstract.
We present the details in the paper cited below.
The training corpus combines the manually annotated [NLP4SGPapers training set](https://huggingface.co/datasets/feradauto/NLP4SGPapers) with additional papers identified by keyword matching.
For more details about the training data and the model, visit the original repo [here](https://github.com/feradauto/nlp4sg).
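As a quick start, the model can be loaded with the `transformers` text-classification pipeline. This is a minimal sketch: the model id `feradauto/scibert_nlp4sg` and the title-plus-abstract input format are assumptions inferred from this card, so check the original repo for the published checkpoint name and exact preprocessing.

```python
from transformers import pipeline


def build_input(title: str, abstract: str) -> str:
    """Concatenate title and abstract into one input string.

    The "title. abstract" format mirrors the widget example above;
    the exact separator used in training is an assumption.
    """
    if title.endswith("."):
        return f"{title} {abstract}"
    return f"{title}. {abstract}"


def classify_papers(texts, model_id="feradauto/scibert_nlp4sg"):
    # model_id is an assumption; see the repo linked above for the
    # actual checkpoint name.
    classifier = pipeline("text-classification", model=model_id)
    return classifier(texts)
```

Usage: `classify_papers([build_input(title, abstract)])` returns a list of label/score dicts, one per paper.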
Please cite the following paper:
```
@misc{gonzalez2023good,
  title={Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good},
  author={Fernando Gonzalez and Zhijing Jin and Jad Beydoun and Bernhard Schölkopf and Tom Hope and Mrinmaya Sachan and Rada Mihalcea},
  year={2023},
  eprint={2305.05471},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```