---
license: cc-by-sa-4.0
language:
- en
task_categories:
- text-classification
- image-classification
- text-generation
tags:
- Crowdsourcing
- Truth Inference
- Label Aggregation
pretty_name: 'NetEaseCrowd: A Dataset for Long-term and Online Crowdsourcing Truth Inference'
size_categories:
- 1M<n<10M
---
# 🧑‍🤝‍🧑 NetEaseCrowd: A Dataset for Long-term and Online Crowdsourcing Truth Inference

[View it on GitHub](https://github.com/fuxiAIlab/NetEaseCrowd-Dataset)

## Introduction

We introduce NetEaseCrowd, a large-scale crowdsourcing annotation dataset built on a mature Chinese data crowdsourcing platform operated by NetEase Inc.
The dataset contains about **2,400** workers, **1,000,000** tasks, and **6,000,000** annotations between them, collected over roughly six months.
We provide ground truths for all tasks and record timestamps for all annotations.

### Task

The NetEaseCrowd dataset is built on a gesture comparison task. Each task presents three choices: two similar gestures and one that differs. Annotators are asked to pick out the different one. An example is shown below:

<center>
<img src="assets/task_example.png" width="500"/>
</center>

### Comparison with existing datasets

Compared with existing crowdsourcing datasets, NetEaseCrowd has the following characteristics:

| Characteristic | Existing datasets | NetEaseCrowd dataset |
|----------------|------------------------------------------------------|-----------------------------------------------------------|
| Scalability | Relatively small sizes in #workers/tasks/annotations | Large-scale data collection with 6 million annotations |
| Timestamps | Short-term data with no timestamps recorded | Complete timestamps recorded during a 6-month timespan |
| Task Type | Single type of tasks | Various task types with different required capabilities |

## Dataset Statistics

The basic statistics of the NetEaseCrowd dataset and [other previous datasets](#other-public-datasets) are as follows:

| Dataset | \#Worker | \#Task | \#Groundtruth | \#Anno | Avg(\#Anno/worker) | Avg(\#Anno/task) | Timestamp | Task type |
|--------------------------------------------|----------|---------|---------------|-----------|--------------------|------------------|--------------|-----------|
| NetEaseCrowd | 2,413 | 999,799 | 999,799 | 6,016,319 | 2,493.3 | 6.0 | ✔️ | Multiple |
| Adult | 825 | 11,040 | 333 | 92,721 | 112.4 | 8.4 | ❌ | Single |
| Birds | 39 | 108 | 108 | 4,212 | 108.0 | 39.0 | ❌ | Single |
| Dog | 109 | 807 | 807 | 8,070 | 74.0 | 10.0 | ❌ | Single |
| CF | 461 | 300 | 300 | 1,720 | 3.7 | 5.7 | ❌ | Single |
| CF\_amt | 110 | 300 | 300 | 6,030 | 54.8 | 20.1 | ❌ | Single |
| Emotion | 38 | 700 | 565 | 7,000 | 184.2 | 10.0 | ❌ | Single |
| Smile | 64 | 2,134 | 159 | 30,319 | 473.7 | 14.2 | ❌ | Single |
| Face | 27 | 584 | 584 | 5,242 | 194.1 | 9.0 | ❌ | Single |
| Fact | 57 | 42,624 | 576 | 216,725 | 3,802.2 | 5.1 | ❌ | Single |
| MS | 44 | 700 | 700 | 2,945 | 66.9 | 4.2 | ❌ | Single |
| product | 176 | 8,315 | 8,315 | 24,945 | 141.7 | 3.0 | ❌ | Single |
| RTE | 164 | 800 | 800 | 8,000 | 48.8 | 10.0 | ❌ | Single |
| Sentiment | 1,960 | 98,980 | 1,000 | 569,375 | 290.5 | 5.8 | ❌ | Single |
| SP | 203 | 4,999 | 4,999 | 27,746 | 136.7 | 5.6 | ❌ | Single |
| SP\_amt | 143 | 500 | 500 | 10,000 | 69.9 | 20.0 | ❌ | Single |
| Trec | 762 | 19,033 | 2,275 | 88,385 | 116.0 | 4.6 | ❌ | Single |
| Tweet | 85 | 1,000 | 1,000 | 20,000 | 235.3 | 20.0 | ❌ | Single |
| Web | 177 | 2,665 | 2,653 | 15,567 | 87.9 | 5.8 | ❌ | Single |
| ZenCrowd\_us | 74 | 2,040 | 2,040 | 12,190 | 164.7 | 6.0 | ❌ | Single |
| ZenCrowd\_in | 25 | 2,040 | 2,040 | 11,205 | 448.2 | 5.5 | ❌ | Single |
| ZenCrowd\_all | 78 | 2,040 | 2,040 | 21,855 | 280.2 | 10.7 | ❌ | Single |

## Data Content and Format

### Obtain the data

There are two ways to access the dataset:

* Download the entire NetEaseCrowd dataset from [Hugging Face](https://huggingface.co/datasets/liuhyuu/NetEaseCrowd) [**Recommended**]
* Under the [`data/` folder](https://github.com/fuxiAIlab/NetEaseCrowd-Dataset/tree/main/data), the NetEaseCrowd dataset is provided in partitions in CSV format. Each partition is named `NetEaseCrowd_part_x.csv`. Concatenate them to get the entire dataset.

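Concatenating the partitions needs nothing beyond the standard library. The sketch below writes two tiny stand-in partitions to a temporary directory (the file names follow the `NetEaseCrowd_part_x.csv` pattern from the repository; the rows are invented for illustration) and then stitches every matching file into one list of records:

```python
import csv
import glob
import os
import tempfile

# Hypothetical miniature partitions standing in for data/NetEaseCrowd_part_x.csv;
# the real files share this header but contain millions of rows.
header = ["tasksetId", "taskId", "workerId", "answer", "completeTime", "truth", "capability"]
parts = {
    "NetEaseCrowd_part_1.csv": [["6980", "1012658482844795232", "64", "2", "1661917345953", "1", "69"]],
    "NetEaseCrowd_part_2.csv": [["6980", "1012658482844795232", "150", "1", "1661871234755", "1", "69"]],
}

data_dir = tempfile.mkdtemp()
for name, rows in parts.items():
    with open(os.path.join(data_dir, name), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

# Concatenate every partition, keeping one logical header via DictReader.
records = []
for path in sorted(glob.glob(os.path.join(data_dir, "NetEaseCrowd_part_*.csv"))):
    with open(path, newline="") as f:
        records.extend(csv.DictReader(f))

print(len(records))  # 2 rows across the two toy partitions
```

The same loop applied to the real `data/` folder yields the full annotation table.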
### Dataset format

In the dataset, each record represents an interaction between a worker and a task, with the following columns:

* **taskId**: The unique id of the annotated task.
* **tasksetId**: The unique id of the task set. Each task set contains an unspecified number of tasks, and each task belongs to exactly one task set.
* **workerId**: The unique id of the worker.
* **answer**: The annotation given by the worker, encoded as an integer starting from 0.
* **completeTime**: The integer timestamp recording the completion time of the annotation.
* **truth**: The ground truth of the annotated task, encoded, consistently with **answer**, as an integer starting from 0.
* **capability**: The unique id of the capability required by the annotated task set. Each task set belongs to exactly one capability.

*For privacy concerns, all sensitive content, such as ids, has been anonymized.*

### Data sample

| tasksetId | taskId | workerId | answer | completeTime | truth | capability |
|-----------|---------------------|----------|--------|---------------|-------|------------|
| 6980 | 1012658482844795232 | 64 | 2 | 1661917345953 | 1 | 69 |
| 6980 | 1012658482844795232 | 150 | 1 | 1661871234755 | 1 | 69 |
| 6980 | 1012658482844795232 | 263 | 0 | 1661855450281 | 1 | 69 |

In the example above, there are three annotations, all for the same task 1012658482844795232 in task set 6980. Three annotators, with ids 64, 150, and 263, give answers 2, 1, and 0, respectively, each at a different time. The ground truth for this task is 1, and the capability id of the task set is 69.

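The `completeTime` values are millisecond Unix timestamps. For instance, the first sample row can be decoded with the standard library (the value below is taken from the table above):

```python
from datetime import datetime, timezone

complete_time = 1661917345953  # completeTime of the first sample row, in milliseconds

# Drop the millisecond part and interpret the value as seconds since the epoch.
dt = datetime.fromtimestamp(complete_time // 1000, tz=timezone.utc)
print(dt.isoformat())  # 2022-08-31T03:42:25+00:00
```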
## Baseline Models

We test several existing truth inference methods on our dataset; detailed analysis and full experimental setups can be found in our paper.

| Method | Accuracy | F1-score |
|----------------|----------|----------|
| MV | 0.92695 | 0.92692 |
| DS | 0.95178 | 0.94817 |
| MACE | 0.95991 | 0.94957 |
| Wawa | 0.94814 | 0.94445 |
| ZeroBasedSkill | 0.94898 | 0.94585 |
| GLAD | 0.95064 | 0.95058 |
| EBCC | 0.91071 | 0.90996 |
| ZC | 0.95305 | 0.95301 |
| TiReMGE | 0.92713 | 0.92706 |
| LAA | 0.94173 | 0.94169 |
| BiLA | 0.88036 | 0.87896 |

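For orientation, the simplest baseline in the table, majority voting (MV), can be sketched in a few lines of plain Python. This is a generic illustration on toy triples, not the evaluation pipeline used for the numbers above:

```python
from collections import Counter, defaultdict

def majority_vote(annotations):
    """Aggregate (taskId, workerId, answer) triples by picking, for each
    task, the most frequent answer (ties broken by the lowest answer)."""
    votes = defaultdict(Counter)
    for task_id, _worker_id, answer in annotations:
        votes[task_id][answer] += 1
    return {
        task_id: min(counter.items(), key=lambda kv: (-kv[1], kv[0]))[0]
        for task_id, counter in votes.items()
    }

# Toy annotations in the dataset's (taskId, workerId, answer) shape.
toy = [
    ("t1", 64, 2), ("t1", 150, 1), ("t1", 263, 1),
    ("t2", 64, 0), ("t2", 150, 0),
]
print(majority_vote(toy))  # {'t1': 1, 't2': 0}
```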
### Test with the dataset directly from crowd-kit

The NetEaseCrowd dataset has been integrated into [crowd-kit](https://github.com/Toloka/crowd-kit)
(via [this pull request](https://github.com/Toloka/crowd-kit/pull/101)),
so you can load it directly in your code (with crowd-kit version > 1.2.1):

```python
from crowdkit.aggregation import DawidSkene
from crowdkit.datasets import load_dataset

# Download the annotations dataframe and the ground-truth series.
df, gt = load_dataset('netease_crowd')

# Aggregate the answers with Dawid-Skene, running 10 EM iterations.
ds = DawidSkene(10)
result = ds.fit_predict(df)

print(len(result))
# 999799
```
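Accuracy against the ground truth can then be computed by direct comparison. A minimal sketch with toy dict stand-ins for `result` and `gt` (the real objects are pandas Series indexed by task id, but the logic is the same):

```python
# Toy stand-ins: aggregated labels vs. ground truth, keyed by taskId.
result = {"t1": 1, "t2": 0, "t3": 2}
gt = {"t1": 1, "t2": 1, "t3": 2}

# Fraction of ground-truth tasks whose aggregated label matches.
accuracy = sum(result[t] == gt[t] for t in gt) / len(gt)
print(accuracy)  # 0.6666666666666666
```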
## Other public datasets

We provide a curated list of other public datasets for the truth inference task.

| Dataset Name | Resource |
|----------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| adult | Quality management on amazon mechanical turk. [[paper](https://dl.acm.org/doi/abs/10.1145/1837885.1837906)][[data](https://github.com/ipeirotis/Get-Another-Label/tree/master/data)] |
| sentiment<br>fact | Workshops Held at the First AAAI Conference on Human Computation and Crowdsourcing: A Report. [[paper](https://ojs.aaai.org/index.php/aimagazine/article/view/2537/2427)][[data](https://sites.google.com/site/crowdscale2013/home)] |
| MS<br>zencrowd_all<br>zencrowd_us<br>zencrowd_in<br>sp<br>sp_amt<br>cf<br>cf_amt | The active crowd toolkit: An open-source tool for benchmarking active learning algorithms for crowdsourcing research. [[paper](https://ojs.aaai.org/index.php/HCOMP/article/download/13256/13104)][[data](https://github.com/orchidproject/active-crowd-toolkit)] |
| Product<br>tweet<br>dog<br>face<br>duck<br>relevance<br>smile | Truth inference in crowdsourcing: Is the problem solved? [[paper](https://hub.hku.hk/bitstream/10722/243527/1/content.pdf?accept=1)][[data](https://zhydhkcws.github.io/crowd_truth_inference/)] <br> *Note that the tweet dataset is called sentiment in this source. It is different from the sentiment dataset in CrowdScale2013.* |
| bird<br>rte<br>web<br>trec | Spectral methods meet em: A provably optimal algorithm for crowdsourcing. [[paper](https://proceedings.neurips.cc/paper/2014/file/788d986905533aba051261497ecffcbb-Paper.pdf)][[data](https://github.com/zhangyuc/SpectralMethodsMeetEM)] |

## Citation

If you use this project in your research or work, please cite it using the following BibTeX entry:

```bibtex
@misc{wang2024dataset,
      title={A Dataset for the Validation of Truth Inference Algorithms Suitable for Online Deployment},
      author={Fei Wang and Haoyu Liu and Haoyang Bi and Xiangzhuang Shen and Renyu Zhu and Runze Wu and Minmin Lin and Tangjie Lv and Changjie Fan and Qi Liu and Zhenya Huang and Enhong Chen},
      year={2024},
      eprint={2403.08826},
      archivePrefix={arXiv},
      primaryClass={cs.HC}
}
```

## License

The NetEaseCrowd dataset is licensed under [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en).