---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: "On Unifying Misinformation Detection. In this paper, we introduce UNIFIEDM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup. The model is trained to handle four tasks: detecting news bias, clickbait, fake news and verifying rumors. By grouping these tasks together, UNIFIEDM2 learns a richer representation of misinformation, which leads to state-of-the-art or comparable performance across all tasks. Furthermore, we demonstrate that UNIFIEDM2's learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets and the model's generalizability to unseen events."
  example_title: "Misinformation Detection"
---

# SciBERT NLP4SG

SciBERT NLP4SG is a SciBERT model fine-tuned to detect NLP for Social Good (NLP4SG) papers based on their title and abstract.
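
For a quick start, the snippet below is a minimal sketch of how the classifier might be queried through the `transformers` pipeline API. The model ID `feradauto/scibert_nlp4sg` is an assumption for illustration; substitute this repository's actual ID.

```python
from transformers import pipeline

# Load the fine-tuned SciBERT classifier.
# NOTE: the model ID is assumed for illustration; replace it with this
# repository's actual Hugging Face ID.
classifier = pipeline("text-classification", model="feradauto/scibert_nlp4sg")

# The model scores a paper from its title and abstract in a single string.
paper = (
    "On Unifying Misinformation Detection. In this paper, we introduce "
    "UNIFIEDM2, a general-purpose misinformation model that jointly models "
    "multiple domains of misinformation with a single, unified setup."
)

print(classifier(paper))  # e.g. [{'label': '...', 'score': ...}]
```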

We present the details in the paper [Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good](https://arxiv.org/abs/2305.05471).

The training corpus combines the manually annotated [NLP4SGPapers training set](https://huggingface.co/datasets/feradauto/NLP4SGPapers) with additional papers identified through keyword matching.
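
As a reference point, here is a minimal sketch of how the annotated portion of the corpus might be loaded with the `datasets` library; the `"train"` split name is an assumption, so inspect the returned splits first.

```python
from datasets import load_dataset

# Download the manually annotated NLP4SGPapers dataset from the Hub.
dataset = load_dataset("feradauto/NLP4SGPapers")

# List the available splits, then peek at one record.
# NOTE: the "train" split name is assumed; adjust to what print(dataset) shows.
print(dataset)
print(dataset["train"][0])
```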

For more details about the training data and the model, visit the original repo [here](https://github.com/feradauto/nlp4sg).

Please cite the following paper:
```
@misc{gonzalez2023good,
  title={Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good},
  author={Fernando Gonzalez and Zhijing Jin and Jad Beydoun and Bernhard Schölkopf and Tom Hope and Mrinmaya Sachan and Rada Mihalcea},
  year={2023},
  eprint={2305.05471},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```