Matrix430 committed on
Commit c3eda01 · 1 Parent(s): b514cb2

Update README.md

Files changed (1)
  1. README.md +35 -44

README.md CHANGED
@@ -4,7 +4,7 @@ annotations_creators:
 language:
 - en
 language_creators:
-- crowdsourced
+- found
 license:
 - afl-3.0
 multilinguality:
@@ -14,71 +14,61 @@ size_categories:
 - 10K<n<100K
 source_datasets:
 - original
-tags: []
+tags:
+- CONDA
 task_categories:
 - text-classification
 - token-classification
-task_ids: []
+task_ids:
+- intent-classification
 ---
 
--
-# Title
-**[CONDA: a CONtextual Dual-Annotated dataset for in-game toxicity understanding and detection](https://arxiv.org/abs/2106.06213)**
-
-**Henry Weld, Guanghao Huang, Jean Lee, Tongshu Zhang, Kunze Wang, Xinghong Guo, Siqu Long, Josiah Poon, Soyeon Caren Han (2021)**
-**University of Sydney, NLP Group**
-
-**To appear at ACL-IJCNLP 2021**
-
-Abstract: Traditional toxicity detection models have focused on the single utterance level without deeper understanding of context. We introduce CONDA, a new dataset for in-game toxic language detection enabling joint intent classification and slot filling analysis, which is the core task of Natural Language Understanding (NLU). The dataset consists of 45K utterances from 12K conversations from the chat logs of 1.9K completed Dota 2 matches. We propose a robust dual semantic-level toxicity framework, which handles utterance and token-level patterns, and rich contextual chatting history. Accompanying the dataset is a thorough in-game toxicity analysis, which provides comprehensive understanding of context at utterance, token, and dual levels. Inspired by NLU, we also apply its metrics to the toxicity detection tasks for assessing toxicity and game-specific aspects. We evaluate strong NLU models on CONDA, providing fine-grained results for different intent classes and slot classes. Furthermore, we examine the coverage of toxicity nature in our dataset by comparing it with other toxicity datasets.
-
-Please enjoy a video presentation covering the main points from our paper:
-
-<p align="centre">
-
-[![ACL_video](https://img.youtube.com/vi/qRCPSSUuf18/0.jpg)](https://www.youtube.com/watch?v=qRCPSSUuf18)
-
-</p>
-
-_For any issue related to the code or data, please first search for solution in the Issues section. If your issue is not addressed there, post a comment there and we will help soon._
-
-This repository is for the CONDA dataset as covered in our paper referenced above.
-
-1. How to get our CONDA dataset?
-
---- three .csv files are available in the dataset folder, there are train, validation and test files. Together these make up the ~45k samples described in the paper.
-
---- the test data is unannotated, please see the CodaLab section below for more information.
-
-2. What baseline models were used in the paper?
-
---- Joint BERT, (Castellucci et al., 2019): https://github.com/monologg/JointBERT
-
---- Capsule NN, (Zhang et al., 2019): https://github.com/czhang99/Capsule-NLU
-
---- RNN-NLU, (Liu + Lane, 2016): https://github.com/HadoopIt/rnn-nlu
-
---- Slot-gated, (Goo et al., 2018) https://github.com/MiuLab/SlotGated-SLU
-
---- Inter-BiLSTM (Wang et al., 2018): https://github.com/ray075hl/Bi-Model-Intent-And-Slot
-
-3. What other resources are there?
-
---- As described in the paper the full lexicons for word level annotation are included in the "resources" directory.
-
-## Codalab
-
-If you are interested in our dataset, you are welcome to join in our [Codalab competition leaderboard](https://codalab.lisn.upsaclay.fr/competitions/7827).
-
-### Evaluation Metrics
-**JSA**(Joint Semantic Accuracy) is used for ranking. An utterance is deemed correctly analysed only if both utterance-level and all the token-level labels including Os are correctly predicted.
-
-Besides, the f1 score of **utterance-level** E(xplicit) and I(mplicit) classes, **token-level** T(oxicity), D(ota-specific), S(game Slang) classes will be shown on the leaderboard (but not used as the ranking metric).
-
-## Citation
-
-```
+# Dataset Card for CONDA
+## Table of Contents
+- [Dataset Description](#dataset-description)
+- [Abstract](#dataset-summary)
+- [Leaderboards](#leaderboards)
+- [Evaluation Metrics](#evaluation-metrics)
+- [Languages](#languages)
+- [Video](#video)
+- [Citation Information](#citation-information)
+
+## Dataset Description
+
+- **Homepage:** [CONDA](https://github.com/usydnlp/CONDA)
+- **Paper:** [CONDA: a CONtextual Dual-Annotated dataset for in-game toxicity understanding and detection](https://arxiv.org/abs/2106.06213)
+- **Point of Contact:** [Caren Han](caren.han@sydney.edu.au)
+
+## Dataset Summary
+
+Traditional toxicity detection models have focused on the single utterance level without deeper understanding of context. We introduce CONDA, a new dataset for in-game toxic language detection enabling joint intent classification and slot filling analysis, which is the core task of Natural Language Understanding (NLU). The dataset consists of 45K utterances from 12K conversations from the chat logs of 1.9K completed Dota 2 matches. We propose a robust dual semantic-level toxicity framework, which handles utterance and token-level patterns, and rich contextual chatting history. Accompanying the dataset is a thorough in-game toxicity analysis, which provides comprehensive understanding of context at utterance, token, and dual levels. Inspired by NLU, we also apply its metrics to the toxicity detection tasks for assessing toxicity and game-specific aspects. We evaluate strong NLU models on CONDA, providing fine-grained results for different intent classes and slot classes. Furthermore, we examine the coverage of toxicity nature in our dataset by comparing it with other toxicity datasets.
+
+## Leaderboards
+The Codalab leaderboard can be found at: https://codalab.lisn.upsaclay.fr/competitions/7827
+
+### Evaluation Metrics
+**JSA** (Joint Semantic Accuracy) is used for ranking. An utterance is deemed correctly analysed only if both utterance-level and all the token-level labels including Os are correctly predicted.
+
+Besides, the f1 score of **utterance-level** E(xplicit) and I(mplicit) classes, **token-level** T(oxicity), D(ota-specific), S(game Slang) classes will be shown on the leaderboard (but not used as the ranking metric).
+
+## Languages
+
+English
+
+## Video
+Please enjoy a video presentation covering the main points from our paper:
+
+<p align="centre">
+
+[![ACL_video](https://img.youtube.com/vi/qRCPSSUuf18/0.jpg)](https://www.youtube.com/watch?v=qRCPSSUuf18)
+
+</p>
+
+## Citation Information
+
 @inproceedings{weld-etal-2021-conda,
 title = "{CONDA}: a {CON}textual Dual-Annotated dataset for in-game toxicity understanding and detection",
 author = "Weld, Henry and
@@ -100,3 +90,4 @@ Besides, the f1 score of **utterance-level** E(xplicit) and I(mplicit) classes,
 pages = "2406--2416",
 }
 ```
+
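
The JSA definition added in this diff (an utterance counts as correct only when the utterance-level intent and every token-level slot label, including Os, match the gold annotation) can be sketched in a few lines of Python. This is an illustrative sketch, not the official Codalab scorer; the `(intent, slot_labels)` tuple layout and the sample label names are assumptions made for the example.

```python
def joint_semantic_accuracy(gold, pred):
    """Fraction of utterances whose utterance-level intent AND full
    token-level slot sequence (including 'O' labels) are both correct.

    gold, pred: lists of (intent, slot_labels) pairs, one per utterance,
    where slot_labels is a list of per-token labels.
    """
    if len(gold) != len(pred):
        raise ValueError("gold and pred must align one-to-one")
    correct = 0
    for (g_intent, g_slots), (p_intent, p_slots) in zip(gold, pred):
        # Strict match: intent AND every slot label, 'O's included.
        if g_intent == p_intent and g_slots == p_slots:
            correct += 1
    return correct / len(gold)


# Hypothetical two-utterance example: the second prediction gets the
# intent right but misses one slot label, so only 1 of 2 is fully correct.
gold = [("E", ["T", "O", "O"]), ("O", ["O", "D", "S"])]
pred = [("E", ["T", "O", "O"]), ("O", ["O", "D", "O"])]
print(joint_semantic_accuracy(gold, pred))  # 0.5
```

Note the all-or-nothing scoring: unlike token-level F1 (also shown on the leaderboard), a single wrong slot label zeroes out the whole utterance under JSA.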