shreyasmeher committed on
Commit 4cdff69
1 Parent(s): 65f0d36

Update README.md

Files changed (1)
  1. README.md +18 -10
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  title: ConfliBERT
- emoji: 🏛️
+ emoji:
  colorFrom: red
  colorTo: indigo
  sdk: streamlit
@@ -25,24 +25,32 @@ Yibo Hu, MohammadSaleh Hosseini, Erick Skorupa Parolin, Javier Osorio, Latifur K
  ## Model Description
  ConfliBERT is a transformer model pretrained on a vast corpus of texts related to political conflict and violence. This model is based on the BERT architecture and is specialized for analyzing texts within its domain, using masked language modeling (MLM) and next sentence prediction (NSP) as its main pretraining objectives. It is designed to improve performance in tasks like sentiment analysis, event extraction, and entity recognition for texts dealing with political subjects.

- ## Model Variations
- - **ConfliBERT-scr-uncased**: Pretrained from scratch with a custom uncased vocabulary.
- - **ConfliBERT-scr-cased**: Pretrained from scratch with a custom cased vocabulary.
- - **ConfliBERT-cont-uncased**: Continual pretraining from BERT's original uncased vocabulary.
- - **ConfliBERT-cont-cased**: Continual pretraining from BERT's original cased vocabulary.
+ ## Model Variants
+ ConfliBERT has several variants, each fine-tuned on specific datasets to cater to different use cases within the domain of political conflict and violence:
+ - **ConfliBERT-scr-uncased-BBC_News**
+ - **ConfliBERT-cont-cased-BBC_News**
+ - **ConfliBERT-scr-uncased-20news**
+ - **ConfliBERT-cont-cased-20news**
+ - **ConfliBERT-re3d-ner** (Token Classification)
+ - **ConfliBERT-indiapolice-events-multilabel** (Text Classification)
+ - **ConfliBERT-named-entity-recognition** (Token Classification)
+ - **ConfliBERT-insight-crime-multilabel** (Text Classification)
+
+ These models are fine-tuned versions intended for specific text classification and named entity recognition tasks, enhancing their effectiveness in practical applications.

  ## Intended Uses & Limitations
  ConfliBERT is intended for use in tasks related to its training domain (political conflict and violence). It can be used for masked language modeling or next sentence prediction and is particularly useful when fine-tuned on downstream tasks such as classification or information extraction in political contexts.

  ## How to Use
- ConfliBERT can be loaded and used directly with pipelines for masked language modeling or integrated into custom applications for more specific tasks:
+ To load and use a specific ConfliBERT model variant:
  ```python
  from transformers import AutoTokenizer, AutoModelForMaskedLM

- tokenizer = AutoTokenizer.from_pretrained("snowood1/ConfliBERT-scr-uncased", use_auth_token=True)
- model = AutoModelForMaskedLM.from_pretrained("snowood1/ConfliBERT-scr-uncased", use_auth_token=True)
+ # Example for using the ConfliBERT-scr-uncased-BBC_News model
+ tokenizer = AutoTokenizer.from_pretrained("eventdata-utd/ConfliBERT-scr-uncased-BBC_News")
+ model = AutoModelForMaskedLM.from_pretrained("eventdata-utd/ConfliBERT-scr-uncased-BBC_News")

  # Example of usage
  text = "The government of [MASK] was overthrown in a coup."
  input_ids = tokenizer.encode(text, return_tensors='pt')
- outputs = model(input_ids)
+ outputs = model(input_ids)
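
The README's snippet stops at the raw model outputs. A minimal sketch of how the `[MASK]` prediction could be decoded from them, assuming PyTorch and the same `eventdata-utd/ConfliBERT-scr-uncased-BBC_News` checkpoint named in the updated README:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "eventdata-utd/ConfliBERT-scr-uncased-BBC_News"  # checkpoint from the README above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

text = "The government of [MASK] was overthrown in a coup."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and list the five highest-scoring vocabulary tokens for it.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```

The predicted tokens depend on the checkpoint and are not shown here; `pipeline("fill-mask", model=model_name)` wraps the same steps in a single call.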
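
The token-classification variants listed in the new "Model Variants" section (for example ConfliBERT-re3d-ner and ConfliBERT-named-entity-recognition) carry a token-classification head rather than a masked-LM head, so they would be loaded through a different pipeline. A sketch under that assumption; the full repository ID below is inferred from the `eventdata-utd/` prefix used elsewhere in the README and may differ:

```python
from transformers import pipeline

# Assumed repository ID: the README lists this variant only by its short name.
ner = pipeline(
    "token-classification",
    model="eventdata-utd/ConfliBERT-named-entity-recognition",
    aggregation_strategy="simple",  # merge word-piece predictions into whole-entity spans
)

print(ner("Protests erupted in Caracas after the election results were announced."))
```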
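
The multilabel text-classification variants (ConfliBERT-indiapolice-events-multilabel and ConfliBERT-insight-crime-multilabel) would instead use a sequence-classification head. A hedged sketch, again with an assumed repository ID; the sigmoid read-out with a 0.5 threshold is the usual convention for multilabel heads, not something the README specifies:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "eventdata-utd/ConfliBERT-insight-crime-multilabel"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Armed men attacked a police convoy near the border."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel heads are read with a per-label sigmoid rather than a softmax;
# keep every label whose probability clears a simple 0.5 threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```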