Model Card (#2) by Ezi, opened

README.md CHANGED
@@ -13,7 +13,6 @@ datasets:
 - [Risks, Limitations and Biases](#risks-limitations-and-biases)
 - [Training](#training)
 - [Evaluation](#evaluation)
-- [Technical Specifications](#technical-specifications)
 - [Citation Information](#citation-information)

 ## Model Details
@@ -85,11 +84,11 @@ The authors detail their preprocessing procedure in the [Github repository](http


 ## Evaluation

-The mode authors detail the Hidden layer size and attention for the HateXplain
+The model authors detail the hidden layer size and attention configuration of the HateXplain fine-tuned models in the [associated paper](https://arxiv.org/pdf/2012.10289.pdf).

 #### Results

-The model authors both in their paper and in the git repository provide the illustrative output of the BERT - HateXplain in comparison to BERT and and other HateXplain fine
+The model authors, both in their paper and in the git repository, provide illustrative output of BERT-HateXplain in comparison to BERT and other HateXplain fine-tuned ![models](https://github.com/hate-alert/HateXplain/blob/master/Figures/bias-subgroup.pdf).

 ## Citation Information