rabindralamsal committed
Commit d50b3a2
1 Parent(s): 831155a

Update README.md

Files changed (1)
  1. README.md +51 -0
README.md CHANGED
@@ -7,3 +7,54 @@ tags:
  - transformers

  ---
+
+ # CrisisTransformers
+ CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the paper "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://arxiv.org/abs/2309.05494)". The models were trained with the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with more than 30 crisis events, such as disease outbreaks, natural disasters, and conflicts. Please refer to the associated paper for more details.
+
+ CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines such as BERT, RoBERTa, and BERTweet. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence encoder outperforms the state of the art by more than 17% in sentence encoding tasks.
+
+ ## Uses
+ CrisisTransformers has 8 pre-trained models and a sentence encoder. The pre-trained models should be fine-tuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoder can be used out of the box, just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2), for sentence encoding to facilitate tasks such as semantic search, clustering, and topic modelling.
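For illustration, here is a minimal fine-tuning sketch with the Hugging Face `transformers` Trainer. It assumes the checkpoint repositories ship their tokenizer files (as Hub repos typically do); the checkpoint name is one of the models listed in the tables below, while the label count, example texts, and training settings are placeholders to replace with your own crisis-classification setup.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Any CrisisTransformers checkpoint from the tables below can be plugged in here.
model_name = "crisistransformers/CT-M1-Complete"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels is a placeholder; set it to the number of classes in your task.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labelled examples; replace with your own crisis-related dataset.
texts = ["Flood waters are rising near the river bank.", "Lovely weather today!"]
labels = [1, 0]
encodings = tokenizer(texts, truncation=True, padding=True)

class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels for the Trainer."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ct-m1-finetuned", num_train_epochs=1),
    train_dataset=ToyDataset(encodings, labels),
)
trainer.train()
```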
+
+ ## Models and naming conventions
+ *CT-M1* models were trained from scratch for up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa weights and *CT-M3* models with pre-trained BERTweet weights, and both were trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* the checkpoint with the lowest loss during training, and *Complete* the checkpoint after completing all epochs. *SE* denotes the sentence encoder.
+
+ | pre-trained model | source |
+ |--|--|
+ |CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)|
+ |CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)|
+ |CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)|
+ |CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)|
+ |CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)|
+ |CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)|
+ |CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)|
+ |CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)|
+
+
+ | sentence encoder | source |
+ |--|--|
+ |CT-M1-Complete-SE|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)|
+
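As a quick out-of-the-box sketch for the sentence encoder listed above (assuming the `sentence-transformers` package is installed and the checkpoint loads directly with `SentenceTransformer`, as the Uses section describes; the example sentences are placeholders), embeddings for semantic search or clustering could be computed like this:

```python
from sentence_transformers import SentenceTransformer, util

# The sentence encoder listed in the table above.
encoder = SentenceTransformer("crisistransformers/CT-M1-Complete-SE")

# Placeholder crisis-related texts; replace with your own corpus.
sentences = [
    "A 6.8 magnitude earthquake struck the coastal region this morning.",
    "Emergency shelters are open for families displaced by the flooding.",
    "The new cafe downtown serves excellent espresso.",
]

embeddings = encoder.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities, e.g. for semantic search, clustering, or topic modelling.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```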
+
+ ## Results
+ Here are the main results from the associated paper.
+
+ <p float="left">
+ <img width="80%" alt="classification" src="https://raw.githubusercontent.com/rabindralamsal/images/main/cls.png" />
+ <img width="55%" alt="sentence encoding" src="https://raw.githubusercontent.com/rabindralamsal/images/main/se.png" />
+ </p>
+
+ ## Citation
+ If you use CrisisTransformers, please cite the following paper:
+ ```
+ @misc{lamsal2023crisistransformers,
+       title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts},
+       author={Rabindra Lamsal and
+               Maria Rodriguez Read and
+               Shanika Karunasekera},
+       year={2023},
+       eprint={2309.05494},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```