leondz committed
Commit 033eb78
Parent: 8269c15

Upload 7 files
README.md ADDED
@@ -0,0 +1,50 @@
+ ---
+ language: en
+ ---
+
+ ## Model description
+ This model is a fine-tuned version of the [DistilBERT model](https://huggingface.co/transformers/model_doc/distilbert.html) to classify toxic comments.
+
+ ## How to use
+
+ You can use the model with the following code.
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
+
+ model_path = "martin-ha/toxic-comment-model"
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForSequenceClassification.from_pretrained(model_path)
+
+ pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
+ print(pipeline('This is a test text.'))
+ ```
+
+ ## Limitations and Bias
+
+ This model is intended to be used for classifying toxic online comments. However, one limitation of the model is that it performs poorly on some comments that mention a specific identity subgroup, such as Muslim. The following table shows evaluation scores for different identity groups. You can learn the specific meaning of these metrics [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview/evaluation). In short, these metrics show how well the model performs for a specific group; the larger the number, the better.
+
+ | **subgroup**                  | **subgroup_size** | **subgroup_auc** | **bpsn_auc** | **bnsp_auc** |
+ | ----------------------------- | ----------------- | ---------------- | ------------ | ------------ |
+ | muslim                        | 108               | 0.689            | 0.811        | 0.880        |
+ | jewish                        | 40                | 0.749            | 0.860        | 0.825        |
+ | homosexual_gay_or_lesbian     | 56                | 0.795            | 0.706        | 0.972        |
+ | black                         | 84                | 0.866            | 0.758        | 0.975        |
+ | white                         | 112               | 0.876            | 0.784        | 0.970        |
+ | female                        | 306               | 0.898            | 0.887        | 0.948        |
+ | christian                     | 231               | 0.904            | 0.917        | 0.930        |
+ | male                          | 225               | 0.922            | 0.862        | 0.967        |
+ | psychiatric_or_mental_illness | 26                | 0.924            | 0.907        | 0.950        |
+
+ The table above shows that the model performs poorly for the Muslim and Jewish subgroups. In fact, if you pass the sentence "Muslims are people who follow or practice Islam, an Abrahamic monotheistic religion." into the model, it will be classified as toxic. Be mindful of this type of potential bias.
+
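For reference, these metrics can be computed from model scores with scikit-learn. The sketch below follows the competition's definitions of the subgroup, BPSN, and BNSP AUCs; the DataFrame layout and column names (`label`, `score`, plus one boolean column per subgroup) are illustrative assumptions, not the evaluation code used here.

```python
# Minimal sketch of the Jigsaw bias metrics (subgroup, BPSN, BNSP AUC).
# Assumes a DataFrame `df` with: "label" (0/1 ground truth), "score"
# (model toxicity probability), and one boolean column per subgroup,
# e.g. "muslim". Column names are illustrative.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc(df, subgroup):
    # AUC restricted to comments that mention the subgroup.
    sub = df[df[subgroup]]
    return roc_auc_score(sub["label"], sub["score"])

def bpsn_auc(df, subgroup):
    # Background Positive, Subgroup Negative:
    # non-toxic subgroup comments vs. toxic background comments.
    mask = (df[subgroup] & (df["label"] == 0)) | (~df[subgroup] & (df["label"] == 1))
    part = df[mask]
    return roc_auc_score(part["label"], part["score"])

def bnsp_auc(df, subgroup):
    # Background Negative, Subgroup Positive:
    # toxic subgroup comments vs. non-toxic background comments.
    mask = (df[subgroup] & (df["label"] == 1)) | (~df[subgroup] & (df["label"] == 0))
    part = df[mask]
    return roc_auc_score(part["label"], part["score"])
```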
+ ## Training data
+ The training data comes from this [Kaggle competition](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data). We use 10% of the `train.csv` data to train the model.
+
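A sketch of what that 10% sampling might look like with pandas; binarizing the `target` column at 0.5 follows the competition's labeling convention, and the random seed is illustrative (the exact preprocessing is in the training repository linked below).

```python
# Sketch: sample 10% of the Kaggle train.csv and binarize the toxicity
# target (>= 0.5 -> toxic). Seed and handling are illustrative.
import pandas as pd

df = pd.read_csv("train.csv")                    # Jigsaw competition data
sample = df.sample(frac=0.1, random_state=42)    # 10% of the rows
sample["label"] = (sample["target"] >= 0.5).astype(int)
texts = sample["comment_text"].tolist()
labels = sample["label"].tolist()
print(len(texts), "comments sampled")
```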
+ ## Training procedure
+
+ You can see [this documentation and code](https://github.com/MSIA/wenyang_pan_nlp_project_2021) for how we trained the model. Training takes about 3 hours on a P100 GPU.
+
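The exact training code lives in the linked repository. As a rough sketch of the kind of fine-tune involved (a DistilBERT sequence classifier trained with the `Trainer` API), with illustrative hyperparameters and a toy two-example dataset standing in for the sampled Kaggle data:

```python
# Rough sketch of a DistilBERT fine-tune with the Trainer API.
# Hyperparameters and data handling are illustrative; the actual
# training code is in the linked GitHub repository.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy stand-in for the sampled Kaggle data described above.
raw = Dataset.from_dict({
    "text": ["you are awful", "have a nice day"],
    "label": [1, 0],
})
dataset = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

args = TrainingArguments(
    output_dir="toxic-comment-model",
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = Trainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```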
+ ## Evaluation results
+
+ The model achieves 94% accuracy and an F1 score of 0.59 on a held-out test set of 10,000 rows.
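For reference, both numbers can be computed with scikit-learn from the model's predictions on the held-out rows; the arrays below are illustrative placeholders, not the actual test set.

```python
# Minimal sketch: accuracy and F1 on a held-out set, assuming
# `y_true` and `y_pred` are 0/1 arrays for the held-out rows.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 0, 1, 1, 0]   # illustrative placeholder labels
y_pred = [0, 1, 1, 0, 0]

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1:", f1_score(y_true, y_pred))   # F1 for the positive ("toxic") class
```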
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "../models/transformer_models/checkpoint-27075",
+   "activation": "gelu",
+   "architectures": [
+     "DistilBertForSequenceClassification"
+   ],
+   "attention_dropout": 0.1,
+   "dim": 768,
+   "dropout": 0.1,
+   "hidden_dim": 3072,
+   "id2label": {
+     "0": "non-toxic",
+     "1": "toxic"
+   },
+   "initializer_range": 0.02,
+   "label2id": {
+     "non-toxic": 0,
+     "toxic": 1
+   },
+   "max_position_embeddings": 512,
+   "model_type": "distilbert",
+   "n_heads": 12,
+   "n_layers": 6,
+   "pad_token_id": 0,
+   "qa_dropout": 0.1,
+   "seq_classif_dropout": 0.2,
+   "sinusoidal_pos_embds": false,
+   "tie_weights_": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.12.5",
+   "vocab_size": 30522
+ }
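The `id2label` entry above is what maps class indices to the `non-toxic` / `toxic` strings returned by the pipeline. A minimal sketch of applying that mapping by hand to the raw logits (illustrative, not one of the uploaded files):

```python
# Minimal sketch: map raw logits to the labels defined in id2label.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "martin-ha/toxic-comment-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

inputs = tokenizer("This is a test text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, 2)
probs = torch.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```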
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:569aed60978bec9cdc5a90e660fe860e2eccd4f72479c1aac0c9b6c64a581e94
+ size 267858673
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "../models/transformer_models/checkpoint-27075", "do_basic_tokenize": true, "never_split": null, "tokenizer_class": "DistilBertTokenizer"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff