shreyasmeher committed
Commit 6bffade
Parent(s): 4cdff69

Update README.md

Files changed (1): README.md (+28 -2)

ConfliBERT is intended for use in tasks related to its training domain (political conflict and violence). It can be used for masked language modeling or next sentence prediction and is particularly useful when fine-tuned on downstream tasks such as classification or information extraction in political contexts.

## How to Use
To load and use a specific ConfliBERT model variant, use the `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load one of the ConfliBERT variants, e.g. the from-scratch uncased model
tokenizer = AutoTokenizer.from_pretrained("eventdata-utd/ConfliBERT-scr-uncased")
model = AutoModelForMaskedLM.from_pretrained("eventdata-utd/ConfliBERT-scr-uncased")

# Example of usage
text = "The government of [MASK] was overthrown in a coup."
input_ids = tokenizer.encode(text, return_tensors='pt')
outputs = model(input_ids)
```
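
The call above returns raw prediction scores over the vocabulary. As a small usage sketch continuing the snippet (variable names are taken from it), you can decode the model's top guess for the `[MASK]` position:

```python
# Find the [MASK] position and take the highest-scoring vocabulary entry
mask_positions = (input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = outputs.logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))  # top predicted token for the mask
```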

## Limitations and Bias
While ConfliBERT is pretrained on data related to political conflicts, it may inherit biases present in its training corpus or exhibit limitations in understanding contexts outside its training domain. As with any model, users should evaluate its fairness and suitability for their specific applications.
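
One lightweight way to act on this advice, sketched here purely as an illustration rather than a formal bias audit, is to compare the model's fill-mask predictions across minimally different prompts and look for skewed associations:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="eventdata-utd/ConfliBERT-scr-uncased")

# Parallel prompts that differ in one framing word; large shifts in the
# predicted entities can hint at associations worth auditing further
for prompt in [
    "The protesters in [MASK] were described as violent.",
    "The protesters in [MASK] were described as peaceful.",
]:
    for pred in fill(prompt, top_k=3):
        print(f"{prompt} -> {pred['token_str']} ({pred['score']:.3f})")
```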

## Training Data
ConfliBERT was trained on a specialized corpus of 33 GB of texts about politics and conflict, curated to provide comprehensive coverage of its intended application domain. This corpus includes diverse sources such as news articles, reports, and books related to global political events and conflicts.

## Training Procedure
The model was pretrained using masked language modeling and next sentence prediction tasks, following procedures similar to those used for BERT. Specific training details, including configurations and scripts, are available in the model's GitHub repository.
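
As context for what that entails, here is a minimal sketch of a generic BERT-style MLM + NSP pretraining setup using the Hugging Face `Trainer`; the file path, block size, and hyperparameters are placeholders, not ConfliBERT's actual configuration (see the GitHub repository for that):

```python
from transformers import (
    BertConfig, BertForPreTraining, BertTokenizerFast,
    DataCollatorForLanguageModeling, TextDatasetForNextSentencePrediction,
    Trainer, TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForPreTraining(BertConfig())  # fresh BERT with MLM and NSP heads

# Expects one sentence per line with blank lines between documents;
# builds sentence pairs with next-sentence labels for the NSP objective
dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer, file_path="corpus.txt", block_size=128
)
# Randomly masks 15% of tokens for the MLM objective
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pretrain-out", per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```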

## Evaluation Results
ConfliBERT has shown improved performance on several benchmarks relevant to its domain compared to general-purpose language models like BERT, especially in tasks that require understanding of political contexts.
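
Those benchmark gains come from task-specific fine-tuning. Purely as an illustrative sketch (the two-class label scheme here is hypothetical, not from the ConfliBERT evaluation suite), attaching a classification head looks like this:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("eventdata-utd/ConfliBERT-scr-uncased")
# Adds a randomly initialized classification head on top of the encoder;
# it only becomes meaningful after fine-tuning on labeled data
model = AutoModelForSequenceClassification.from_pretrained(
    "eventdata-utd/ConfliBERT-scr-uncased", num_labels=2
)

batch = tokenizer(
    ["Rebels attacked a convoy near the border.",
     "The summit concluded with a new trade agreement."],
    padding=True, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(dim=-1))  # probabilities are arbitrary until the head is trained
```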

## Citation
If you use ConfliBERT in your research, please cite the following paper:

```bibtex
@inproceedings{hu2022conflibert,
  title={ConfliBERT: A Pre-trained Language Model for Political Conflict and Violence},
  author={Hu, Yibo and Hosseini, MohammadSaleh and Parolin, Erick Skorupa and Osorio, Javier and Khan, Latifur and Brandt, Patrick and D’Orazio, Vito},
  booktitle={Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  pages={5469--5482},
  year={2022}
}
```