---
language:
- en
thumbnail: "https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png"
tags:
- Twitter
- COVID-19
- text-classification
- pytorch
- tensorflow
- bert
license: MIT
datasets:
- mnli
pipeline_tag: zero-shot-classification
widget:
- text: "To stop the pandemic it is important that everyone turns up for their shots."
  candidate_labels: "health, sport, vaccine, guns"
---

# COVID-Twitter-BERT v2 MNLI

## Model description
This model provides a zero-shot classifier for cases where it is not possible to finetune CT-BERT on a specific task because labelled data is lacking.

The technique is based on [Yin et al.](https://arxiv.org/abs/1909.00161), which describes a clever way of using pre-trained MNLI models as zero-shot sequence classifiers.
The model is already finetuned on roughly 400,000 generic logical (entailment) tasks, so we can use it as a zero-shot classifier by reformulating the classification task as a question.

Say we want to classify COVID-19 tweets as vaccine-related or not vaccine-related.
The typical approach would be to collect a few hundred pre-annotated tweets, organise them into two classes, and then finetune the model on them.

With the zero-shot MNLI classifier, you can instead reformulate your question as "This text is about vaccines" and use it directly at inference time, without any training.

Find more info about the model on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).

## Usage
Please note that how you formulate the hypothesis can give slightly different results; see the template comparison sketch below.
Collecting a training set and finetuning on it will most likely give you better accuracy.

The easiest way to try this out is the Hugging Face pipeline.
It uses the default English template, which turns each candidate label into the hypothesis "This example is <label>."

```python
from transformers import pipeline

# Load the zero-shot classification pipeline backed by CT-BERT v2 finetuned on MNLI
classifier = pipeline("zero-shot-classification", model="digitalepidemiologylab/covid-twitter-bert-v2-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = 'To stop the pandemic it is important that everyone turns up for their shots.'
candidate_labels = ['health', 'sport', 'vaccine', 'guns']
hypothesis_template = 'This example is {}.'

# multi_label=True scores each label independently (it replaces the deprecated multi_class argument)
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_label=True)
```
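The pipeline returns a dict with the input `sequence`, the candidate `labels` ranked by score, and the matching `scores`. Because the hypothesis wording affects the scores, it can be worth comparing a few templates on your own data. A minimal sketch reusing the `classifier` from above; the alternative templates here are just illustrative, not ones we have validated:

```python
# Compare a few hypothesis templates on the same tweet; scores will differ per template
for template in ['This example is {}.', 'This text is about {}.', 'This tweet discusses {}.']:
    result = classifier(sequence_to_classify, candidate_labels, hypothesis_template=template, multi_label=True)
    print(template, '->', list(zip(result['labels'], result['scores'])))
```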

## Training procedure
The model was finetuned on the roughly 400k-example [MNLI task](https://cims.nyu.edu/~sbowman/multinli/).
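
Under the hood, the NLI-based zero-shot approach runs an entailment check for every candidate label: the text to classify is the premise, the filled-in template is the hypothesis, and the entailment probability becomes the label score. Below is a minimal sketch of a single such check; it is a simplification of what the pipeline does (the pipeline additionally renormalises the entailment and contradiction logits), and the entailment-index lookup assumes the checkpoint's `label2id` names the entailment class:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "digitalepidemiologylab/covid-twitter-bert-v2-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = 'To stop the pandemic it is important that everyone turns up for their shots.'
hypothesis = 'This example is vaccine.'  # candidate label plugged into the template

# Encode the (premise, hypothesis) pair as one NLI input and get the class logits
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Find the label containing "entail" in the config; fall back to the last index otherwise
entailment_id = next((i for label, i in model.config.label2id.items() if "entail" in label.lower()), -1)
prob_entailment = logits.softmax(dim=-1)[0, entailment_id].item()
print(f"P(entailment) = {prob_entailment:.3f}")
```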

## References
```bibtex
@article{muller2020covid,
  title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter},
  author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E},
  journal={arXiv preprint arXiv:2005.07503},
  year={2020}
}
```
or
```
Martin Müller, Marcel Salathé, and Per E. Kummervold.
COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter.
arXiv preprint arXiv:2005.07503 (2020).
```